What does the future hold for AI in Digital Pathology?
Posted 1st March 2019 by Joshua Broomfield
This is the second of a two-part blog post. In his first post, Liron wrote about embedding AI in Digital Pathology workflows.
Digital Pathology AI apps are certainly feasible, but exactly when they will be ready for clinical use is less clear.
There are potentially hundreds or thousands of algorithms that will need to be developed. Currently, there are only a handful of algorithms that are approved by regulatory bodies for clinical practice, so we’ve got a long way to go.
Alongside developing these new apps, we also need to validate them and demonstrate their reliability in peer-reviewed literature. We also need to look at outcome data: did they actually make a difference? More importantly, are they safe? As well as doing what they are intended to do, it is crucial that they have no unintended consequences that cause harm.
Current barriers to AI app development
Firstly, there is a certain amount of inertia to using an algorithm. One needs to first get onto a digital pathology platform, which has many barriers itself. Once on that platform, the lab information system will need to be connected. Even once that is accomplished, there is the issue of figuring out how to use the algorithm, which will require buy-in from your colleagues.
Secondly, buy-in from pathologists is essential. Many pathologists may be resistant to using AI: they may not trust the 'black box' concept or understand how it works.
Thirdly, gaining FDA approval. These algorithms are not yet approved in the US for pathology, and there are conversations going on between the Digital Pathology Association and the FDA about how to move this field forward. We are potentially going to have hundreds or thousands of apps: if these apps are going to continuously update, learn from their own data, and improve their deep learning networks, how can that be managed? It will be a challenge to keep FDA approval current in a manner that is both timely and inexpensive.
Lastly, there is currently a lack of business use cases. We know that these apps could make our work more efficient and accurate, but there is no real proof yet. We will need to wait and see the results from the business use cases.
Current barriers to pathologist buy-in
Firstly, there is a lot of hype around AI in general, because AI is permeating our world from all angles, even in our private lives (e.g. social media). This generates excitement about the possibilities of using AI. However, once people start looking into it and start having conversations with their own IT teams or with vendors, they realise that using AI in everyday routine practice is actually quite far off on the horizon. This can lead to some disappointment.
Following on from that, there are accounts of AI being implemented in other areas of healthcare and ultimately proving unsuccessful. For example, IBM Watson in radiology: the fact that early efforts failed raises concern that much of the hype is misplaced.
Secondly, there are concerns about computers replacing people. In this case people are thinking of 'strong AI', where computers are going to come in and completely replace pathologists. It needs to be conveyed to pathologists that we're talking about 'weak AI', where AI tools are used to assist in tasks such as computer-aided diagnosis rather than replace us outright. By making it clear that this is about computers augmenting what pathologists already do, there may be less resistance and more buy-in in the profession.
Developing a ‘digital fellow’ for the future
My hope is that in time AI will come to act as my ‘digital fellow’.
I’ve been fortunate to work in an academic medical centre, and I often work with a fellow, who fills the role of a great assistant. For example, they can organise all my work and screen it before I get to it. As they get more advanced in their training, they’re able to start initiating important steps such as ordering the correct ancillary studies and even generating a preliminary report. By the time I get a case, it has often been worked up to a fairly high level.
This all makes the process much more efficient and accurate, and I am less likely to miss small nuances. The fellow also takes care of things for me after the case has been reviewed: adding content into the pathology addendum report, and managing everything downstream.
Currently, that requires two people. But an AI tool could do all of that for me without needing another person present. It could organise my cases, triage them appropriately, order what’s required, and then present them to me as my fellow would have. Once I have confirmed my interpretation, the AI can take care of everything downstream. I could then spend more time with my human fellow on more important things.
Can we achieve that? I hope so.
Liron Pantanowitz is Professor of Pathology and Biomedical Informatics at the University of Pittsburgh. He will be elaborating on his work validating AI apps in a presentation at the 5th Digital Pathology & AI Congress: USA.
See who else will be joining Liron at the 5th Digital Pathology & AI Congress: USA. The agenda is packed with a wide range of topics around the future of Digital Pathology and the integration of AI. Download the agenda and avoid missing out!