Automatic AI-powered Tumor Budding

Tumor Budding has been shown in numerous studies to be an independent prognostic factor in colorectal cancer (CRC) for the risk of distant metastasis and, in stage II CRC, for overall survival. Fraunhofer IIS and the Institute for Pathology, University Hospital Erlangen are developing an AI system to automatically detect tumor budding in CRC patients.

Tumor buds are believed to be epithelial cells that have undergone Epithelial Mesenchymal Transition (EMT). A bud is defined as a cluster of 1-4 cells located in the center (intratumoral budding) or at the invasive margin (peritumoral budding) of a primary tumor.

Stage II patients with high budding should be considered for adjuvant therapy.

Researchers at Fraunhofer IIS and University Hospital Erlangen have set out to fully automate the scoring process.

How is budding determined? 

The International Tumor Budding Consensus Conference (ITBCC 2016) recommended a three-tier hotspot scoring system: on the H&E-stained slide, the pathologist first locates the field of view (10× objective) along the entire invasive margin with the highest bud density and then counts the buds (20× objective) inside this hotspot (a circular field, 1 mm in diameter).

Table 1 – ITBCC cutoffs: 0–4 buds = Bd1 (low budding), 5–9 buds = Bd2 (intermediate budding), ≥10 buds = Bd3 (high budding)

This number is then categorized into either low, intermediate or high budding according to predefined cutoffs (Table 1).
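For illustration, this three-tier mapping can be expressed as a small function. This is a minimal sketch based on the published ITBCC 2016 cutoffs; the function name is our own.

```python
def itbcc_category(bud_count: int) -> str:
    """Map a hotspot bud count to the ITBCC 2016 three-tier category."""
    if bud_count <= 4:
        return "Bd1 (low budding)"
    elif bud_count <= 9:
        return "Bd2 (intermediate budding)"
    return "Bd3 (high budding)"
```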

Computer-assisted Tumor Budding

In contrast to the ITBCC recommendation, this system is developed for IHC pan-cytokeratin (PanCK) staining, where buds are more easily detectable.

The first attempt was to chain a number of classical image processing steps that (1) locate the tissue, (2) binarize the image (DAB vs. hematoxylin), (3) detect the main tumor and (4) small “tumor islands” along the invasive front, and eventually identify buds by (5) their minimum and maximum area as well as (6) their distance to the main tumor. However, this approach did not yield satisfying accuracy.
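The sketch below illustrates how such a classical candidate search could look using scikit-image; it is not the authors' pipeline, and the areas and distances are placeholder values, not the parameters actually used. Step (1), tissue localization, is omitted for brevity.

```python
import numpy as np
from skimage import color, filters, measure, morphology
from scipy import ndimage as ndi

def find_bud_candidates(rgb, min_area=50, max_area=700, max_dist=250):
    """Illustrative classical candidate search: threshold the DAB channel,
    separate the main tumor mass from small PanCK-positive islands, and keep
    islands whose size and distance to the main tumor are plausible for a bud.
    All thresholds are placeholders."""
    # (2) binarize: unmix the stains and threshold the DAB channel
    dab = color.rgb2hed(rgb)[..., 2]
    mask = dab > filters.threshold_otsu(dab)
    mask = morphology.remove_small_objects(mask, min_size=min_area)

    # (3) main tumor = largest connected component
    labels = measure.label(mask)
    props = measure.regionprops(labels)
    main_label = max(props, key=lambda p: p.area).label
    main_tumor = labels == main_label

    # (6) distance of every pixel to the main tumor mass
    dist_to_tumor = ndi.distance_transform_edt(~main_tumor)

    # (4)+(5) small islands with plausible area and distance become candidates
    candidates = []
    for p in props:
        if p.label == main_label:
            continue
        if min_area <= p.area <= max_area:
            r, c = map(int, p.centroid)
            if dist_to_tumor[r, c] <= max_dist:
                candidates.append(p.centroid)
    return candidates
```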

The solution was to use this approach only as a pre-processing step for identifying “bud candidates” and then feed these candidates, together with some surrounding context, into a Convolutional Neural Network (CNN) trained to distinguish true buds from false positives and other artefacts.
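A possible shape of this second stage is sketched below, assuming a Keras-style classifier with a single sigmoid output and patches scaled to [0, 1]; the function, its parameters and the preprocessing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

PATCH = 299  # input size of the downstream CNN (see "Data Set" below)

def classify_candidates(rgb, candidates, model, context=PATCH // 2, threshold=0.5):
    """Hypothetical second stage: cut a patch with surrounding context around
    every candidate centroid and keep only those the trained CNN accepts as
    true buds. `model` is assumed to output P(true bud) per patch."""
    patches, kept = [], []
    for r, c in candidates:
        r0, c0 = max(int(r) - context, 0), max(int(c) - context, 0)
        patch = rgb[r0:r0 + PATCH, c0:c0 + PATCH]
        if patch.shape[:2] == (PATCH, PATCH):
            patches.append(patch / 255.0)  # assumed [0, 1] scaling
            kept.append((int(r), int(c)))
    if not patches:
        return []
    probs = model.predict(np.stack(patches), verbose=0)[:, 0]
    return [pos for pos, p in zip(kept, probs) if p >= threshold]
```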

In the evaluation presented below, the invasive margins where budding is evaluated have been annotated by hand.

In order to facilitate an entirely autonomous system, the margins can be detected automatically by segmenting the PanCK-positive regions, growing them by the desired margin width and then subtracting the original regions. This band will also include a margin around non-tumorous mucosa; either this is tolerated, given that no buds will be located there, or an additional CNN is trained to distinguish healthy from tumorous regions.
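A minimal sketch of this dilate-and-subtract idea, assuming a binary PanCK mask and a margin width already converted from microns to pixels:

```python
from skimage import morphology

def invasive_margin(panck_mask, margin_px):
    """Grow the PanCK-positive (tumor) mask by the desired margin width and
    subtract the original mask, leaving a band around the tumor."""
    grown = morphology.binary_dilation(panck_mask, morphology.disk(margin_px))
    return grown & ~panck_mask
```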

Data Set

The Xception CNN (36 layers, 23 million parameters, 299×299 input patches) was trained on 5,558 hand-annotated true-positive buds and 20,469 false positives (“bud candidates” falsely identified by the preprocessing algorithm) from 49 whole slides. The hotspot comparison was carried out on another, disjoint set of 49 slides containing 15,498 buds.
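A plausible Keras setup for such a classifier is sketched below; only the Xception backbone and the 299×299 input size follow from the description above, while the classification head, optimizer and hyperparameters are assumptions.

```python
import tensorflow as tf

# Assumed setup: Xception backbone on 299×299 RGB patches with a single
# sigmoid output for "true bud" vs. "false positive". Hyperparameters are
# placeholders, not the values used by the authors.
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```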

Evaluation

A man-vs.-machine hotspot comparison was carried out on a disjoint set of slides. Pathologist A first hand-annotated the invasive margin.

Then two pathologists, A (consultant) and B (PhD candidate), independently annotated the hotspot and number of contained buds.

Finally, the computer, denoted “AI”, detected all buds in the entire invasive margin and calculated the top-3 hotspots. It was then evaluated how well the auto-detected hotspots correspond to the manually detected hotspots, both in terms of location and number of buds.
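One way such a top-3 hotspot search could be implemented is sketched below, assuming bud centroids in micrometer coordinates and a greedy selection of non-overlapping 1 mm circles; the greedy strategy is our own illustrative choice, not necessarily the method used in the system.

```python
import numpy as np
from scipy.spatial import cKDTree

def top_hotspots(bud_xy_um, k=3, diameter_um=1000.0):
    """Illustrative hotspot search: for every detected bud, count the buds
    inside a circle of 1 mm diameter centred on it, then greedily pick the
    k best circles whose centres are at least one diameter apart."""
    pts = np.asarray(bud_xy_um, dtype=float)
    tree = cKDTree(pts)
    radius = diameter_um / 2.0
    neighbours = tree.query_ball_point(pts, r=radius)
    counts = np.array([len(n) for n in neighbours])

    hotspots, taken = [], np.zeros(len(pts), dtype=bool)
    for i in np.argsort(-counts):
        if taken[i]:
            continue
        hotspots.append((tuple(pts[i]), int(counts[i])))
        # suppress centres closer than one diameter so circles do not overlap
        taken[tree.query_ball_point(pts[i], r=diameter_um)] = True
        if len(hotspots) == k:
            break
    return hotspots  # [(centre_xy, bud_count), ...]
```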

Results

The results show that in 33 and 36 out of 49 slides, respectively, the AI’s top hotspot overlaps with the hotspot found by pathologist A and by pathologist B. In 41 and 45 cases, one of the top-2 auto-detected hotspots overlaps.

The hotspots’ mean bud counts for A/B/AI are 29/36/27 (σ = 16.6/30/24). The bud count of the detected hotspot deviates from pathologist A (B) by 26% (24%) on average, which is below the mean inter-observer difference of 35%.

The mean signed difference is -0.65 (8.16) buds vs. A (B), indicating that, with regard to the senior pathologist A, the proposed system on average finds neither too many nor too few buds.
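For clarity, the two summary figures could be computed from per-slide hotspot bud counts as follows (illustrative; the function and variable names are assumptions):

```python
import numpy as np

def count_agreement(ai_counts, pathologist_counts):
    """Mean relative absolute difference and mean signed difference between
    the AI's and a pathologist's per-slide hotspot bud counts."""
    ai = np.asarray(ai_counts, dtype=float)
    ref = np.asarray(pathologist_counts, dtype=float)
    rel_abs_diff = np.mean(np.abs(ai - ref) / ref)  # e.g. 0.26 vs. pathologist A
    signed_diff = np.mean(ai - ref)                 # e.g. -0.65 buds vs. A
    return rel_abs_diff, signed_diff
```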

The findings demonstrate that the system largely succeeds in detecting hotspots, but should be re-trained on a broader ground truth to counter the inter-observer variability.

Outlook

Ongoing work aims at improving accuracy, repeating the evaluation with automatic invasive-margin detection, and validating the results on a multi-center database.

From a medical research perspective, appropriate cut-off values for low/intermediate/high budding in PanCK staining need to be determined, and an alternative risk prediction score that better leverages the capabilities of a fully automatic tumor budding quantifier should be investigated.

Benz et al. presented their research in a poster at the Digital Pathology and AI Congress: Europe; you can view the poster on our resources page.

Volker Bruns heads Medical Image Processing at Fraunhofer IIS, Michaela Benz is a senior scientist at Fraunhofer IIS, Matthias Bergler is a scientist at Fraunhofer IIS, and Carol Geppert is a consultant pathologist at University Hospital Erlangen.

The 6th Digital Pathology and AI Congress: USA will provide a collaborative forum to discuss the latest advances and applications in digital pathology. Find out more here.
