Tuning Algorithms to the Histology Lab
Posted 26th November 2018 by Kieran Chambers
The promise of an effective set of tools based on deep learning or other machine learning algorithms is the current buzz of the digital pathology market. While the evolving tools, models and techniques are producing strongly positive results, many factors still limit the utility and portability of the models and tools being created across real-world data sets.
Many of the factors impacting portability of models have roots in the analog pathology world, and some are artifacts of the digital data creation process.
In the analog world, processing similar tissue in two different histology labs, using equivalent techniques, can produce slides which are significantly different to an observer. That visual difference is either maintained or exacerbated during the image-capture process. While a histology lab has procedures governing tissue thickness, stain handling and saturation, those procedures vary between labs, and each lab's procedures yield a product that is "normal" for that laboratory. That is, the slides, and the images resulting from those slides, have a typical appearance: a characteristic stain saturation (absolute and relative), section thickness, focal plane(s), tissue placement and orientation.
Tuning the Analog Model – The Pathologist
A pathologist is an expert system, well tuned to the output of their own laboratory and processes. In the analog world, the pathologist, over time, adapts to (and even influences) the protocols implemented by their own histology laboratory. The depth and types of staining are a function of the preferences of the pathology group associated with the lab.
The pathologist likewise adapts to (and influences) the tools and environment in which they review their slides. Finally, with the transition to the digital world, the pathologist adapts to (and influences) the tools, workflows and environments used to create and review whole slide images (WSIs).
Time and experience make the pathologist a model that has undergone significant tuning for their routine workflows and environments. However, a pathologist is an extremely adaptable model, capable of accepting inputs that differ from the norm. Even when slides or WSIs are not identical to those produced by their own lab, or when the environment changes, the human brain 'normalises' the data to perform a successful review (although not necessarily at an identical success rate).
An example of this phenomenon is when imaging technologists reject WSIs as inadequate because of tissue folding, areas that are out of focus, etc. However, many pathologists are perfectly comfortable with and capable of rendering accurate diagnoses despite these technical “inadequacies.”
Tuning the Digital Model – Deep Learning
Similar to the analog model, the digital model is quite adept at analysing data that resembles what it routinely saw during its "experience" (the model's training period). When a model or algorithm is applied to new data from new sources, its performance frequently drops relative to its performance on data like that on which it was trained. This decrease in performance, observed when introducing an algorithm into an organisation, can be handled in one of three ways: through process standardisation, through model tuning, or by creating a model that is tolerant to inter-lab differences.
The first possible remediation, process standardisation, is to implement a strict protocol for slide (and resulting WSI) production in the laboratory, in support of the algorithm. Requiring specific tissue handling, staining protocols and WSI production settings can reduce the differences between images from multiple labs, but it often adds cost and work for the lab, as these requirements go beyond its norm.
Secondly, we can tune the model with data from the laboratory into which it is deployed. Doing so shifts the model to be more effective on the WSIs from that lab, but less effective on images from the original (or other) training sets. While this improves local model performance, it does not produce a generic, transferable tool that is useful on WSIs produced beyond that lab's walls.
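The idea behind this kind of local tuning can be sketched in a few lines of NumPy: keep the pretrained feature extractor fixed and retrain only a lightweight classifier head on the deploying lab's labelled patches. Everything here is illustrative — the "feature extractor" is a toy random projection and the data is synthetic, standing in for a real WSI model and real lab data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all names, shapes and data here are illustrative):
# a frozen "feature extractor" -- a fixed random projection -- and a small
# labelled patch set from the lab where the model is being deployed.
W_frozen = rng.normal(size=(64, 16))   # pretrained features, kept fixed
X_lab = rng.normal(size=(200, 64))     # flattened patch data from the new lab
v_true = rng.normal(size=16)           # hidden rule generating toy labels
y_lab = (np.tanh(X_lab @ W_frozen) @ v_true > 0).astype(float)

def extract(X):
    """Frozen feature extractor: NOT updated during local tuning."""
    return np.tanh(X @ W_frozen)

# Tune only the classifier head (logistic regression) on the lab's data.
F = extract(X_lab)
w, b = np.zeros(16), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
    grad = p - y_lab                         # logistic-loss gradient
    w -= 0.5 * F.T @ grad / len(y_lab)
    b -= 0.5 * grad.mean()

acc = ((F @ w + b > 0) == (y_lab > 0.5)).mean()
```

Because only `w` and `b` change, the model adapts to this lab's "normal" appearance — which is exactly why it drifts away from the distribution of the original training set.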
Finally, we can take additional steps during the original model creation process. This can involve gathering training data representing a broad range of histopathology preparation and handling techniques. Additionally, that data can be augmented via colour-model shifts, deformation, perturbation and added noise, producing a model that is more tolerant of the differences in any single image and instead keys on the salient features that define the structures or regions the algorithm is meant to detect.
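A minimal sketch of such an augmentation step, using NumPy on a synthetic RGB patch — the parameter ranges and helper name are illustrative choices, not taken from any particular pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(patch, rng):
    """Apply stain-like colour jitter, random flips and additive noise to an
    RGB patch of shape (H, W, 3) with values in [0, 1]. Parameter ranges
    here are illustrative, not from any specific production pipeline."""
    out = patch.copy()
    # Per-channel gain/offset mimics inter-lab stain saturation differences.
    gain = rng.uniform(0.8, 1.2, size=3)
    offset = rng.uniform(-0.05, 0.05, size=3)
    out = out * gain + offset
    # Random flips cover differing tissue placement and orientation.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    if rng.random() < 0.5:
        out = out[::-1, :]
    # Gaussian noise stands in for scanner and sensor variation.
    out = out + rng.normal(0.0, 0.01, size=out.shape)
    return np.clip(out, 0.0, 1.0)

patch = rng.uniform(size=(32, 32, 3))
aug = augment(patch, rng)
```

Training on many such randomised variants of each patch discourages the model from memorising one lab's characteristic appearance.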
All expert systems, be they human experts or deep learning models, need tuning and exposure to vast amounts of varied data to be effective in the real world. In the analog world of pathology, experience makes a better pathologist; similarly, in the digital world, exposing the algorithm to a variety of specimen preparations provides that same experience and remains key to producing accurate results.
Eric Wirch is Chief Technology Officer and Managing Director of Corista.