How deep learning is transforming medical imaging: image search
Posted 29th July 2020 by Liv Sewell
Five or six years ago, I was looking for a strategic direction for the KIMIA Lab. We started with the problems. We asked, ‘What problems do we have in medical imaging?’
After consulting many clinicians, radiologists, pathologists and companies, we concluded that we should focus not on classification, segmentation, or prediction – as important as they are – but on image search. In particular: given large archives of medical images, how can we go inside and search within those images to find similar cases without relying primarily on text?
Providing value means solving problems
We wanted to use AI to provide maximum value. We realised we could create tools which focus on classification or segmentation, but even if a smart algorithm were to diagnose a certain cancer, the pathologist would still have to sit down and write a comprehensive report to justify the result and enable the right treatment planning.
The fact that AI software can tell us what is occurring in a tissue sample obtained through biopsy is good for publishing papers, but that might be all. Suppose your AI agent achieves 99% accuracy, 5% better than the average pathologist. But can your AI agent write sophisticated reports?
Not at the moment.
That means we still need the pathologist to do their job and write those expertly written and detailed reports. So, we saw with clarity that AI-driven image search could provide value in that direction by being an assistant to the pathologist.
The cases in hospital archives have already been evidently diagnosed. We know the diagnosis was right, we know what treatment the patient received, and we know the outcome of that treatment. We realised that developing a tool which could tap into the knowledge latent in hospital archives to assist pathologists and clinicians would be enormously valuable.
So, the focus of the KIMIA Lab, and the problem we wanted to solve, developed: how can we take a patient’s image, go inside the hospital archive containing millions of images, compare that image with the images of millions of other patients, and find and retrieve the most anatomically similar ones? That would give us access to the existent and unused medical wisdom of already diagnosed cases by looking up the corresponding reports for matched images.
The concept and value of image search: a virtual peer review
The pathologist provides an image and gets back the top 3, 5, 10 or 100 similar patients, based on images stored in the archive. The pathologist can then choose by visual inspection which one really matches their patient and look at the accompanying data of that evidently diagnosed case (or cases) to gain more confidence. That is meaningful for diagnostically difficult cases. For “easy” cases, the search results can directly provide a reliable consensus based on a majority vote among the diagnoses of the matched cases. Clinicians could also use image search to support prognosis and treatment planning. With image search, clinicians will be able to retrieve similar cases with nothing but an image, within a matter of seconds, and benefit from the knowledge of colleagues’ diagnoses, prognoses, treatment plans and the corresponding outcomes.
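The retrieve-then-vote idea described above can be sketched in a few lines of code. In this hypothetical example (the names, labels and random data are illustrative, not from KIMIA Lab's actual system), each archived case is represented by a feature vector extracted from its image; a query is matched against the archive by cosine similarity, and the diagnoses of the top-k matches form a majority-vote consensus:

```python
from collections import Counter

import numpy as np

# Hypothetical archive: each diagnosed case is an embedding vector
# (e.g. from a pretrained network) paired with its confirmed diagnosis.
rng = np.random.default_rng(0)
archive_embeddings = rng.normal(size=(1000, 128))
archive_diagnoses = rng.choice(["benign", "malignant", "atypical"], size=1000)


def search_similar(query, embeddings, diagnoses, k=5):
    """Return the k most similar archived cases by cosine similarity."""
    q = query / np.linalg.norm(query)
    db = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = db @ q                          # cosine similarity to every case
    top = np.argsort(scores)[::-1][:k]       # indices of the k best matches
    return [(int(i), float(scores[i]), diagnoses[i]) for i in top]


def consensus(matches):
    """Majority vote over the diagnoses of the retrieved cases."""
    votes = Counter(dx for _, _, dx in matches)
    return votes.most_common(1)[0][0]


query = rng.normal(size=128)
matches = search_similar(query, archive_embeddings, archive_diagnoses, k=5)
print(consensus(matches))
```

A real system would replace the random vectors with embeddings computed from whole-slide images and an indexing structure that scales to millions of cases, but the retrieval-and-consensus logic is the same.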
It is basically like a “peer review” – a consultation you do with a colleague – but it is not physical; it is a virtual peer review in the absence of your colleagues. And, of course, the archives of medical images keep growing, because we are going more digital and imaging techniques are improving and capturing ever more images. Who should analyse all of these images? We are not getting more specialists over time, but we are getting more images. We should make sure we provide clinicians with smart agents to tap into the wisdom already available in those databases. Many cases have already been diagnosed, and a new doctor should not have to make the same mistakes again and again. We can avoid that if we can search within large archives. For common cases, which usually account for a large portion of the diagnostic work, this would make perfect sense.
Image search will help everywhere we capture and store images
We are sure that search will be one of the technologies that makes it into the hospital, alongside the other fantastic tools we have in computer vision and AI – including tools for classification, counting, segmentation and prediction – to enable searching in images, reports and molecular information.
We hope that image search will add value throughout imaging processes, from triaging cases in the lab, through to diagnosis, treatment and drug development. Search can help, and should help, everywhere that images are involved.
Everywhere that we capture and save images, we should have the functionality to search. My cell phone can identify faces in photos using image search, but we cannot search for a specific cancer type using an image. This is not acceptable. It’s not right that the technology is available for entertainment purposes but not for alleviating human suffering.
We envision our image search tool will be used in triaging cases, diagnosis, treatment planning, drug discovery, probably in other ways too – for example, finding the bridge between images and genetics, and beyond that, in many other ways we cannot possibly imagine today. Once we have manufactured the tool, we will see further translational values and applications. But we know even now that the concept will be transformational.
In the same way that the rotating engine concept is everywhere (the electric toothbrush, the washing machine, the hairdryer, the aeroplane) and has transformed the way we live, image search will transform medical imaging. A malignancy or abnormal pattern detected and diagnosed once should not be missed by any other physician. I want to see image search in the lab for triaging and setting priorities for different cases; assisting pathologists by providing similar cases; in the research lab for drug discovery; in the clinic for verification of treatment plans; everywhere we capture and store images.
Hamid Tizhoosh is Professor at the Faculty of Engineering at the University of Waterloo in Canada and the director of KIMIA Lab, the Laboratory for Knowledge Inference in Medical Image Analysis. He is a keynote speaker at the 6th Digital Pathology & AI Congress: USA.
Join Hamid and hundreds of others to engage with the latest developments in image analysis and AI applications for digital pathology at the 6th Digital Pathology & AI Congress: USA. Discover the Programme.