How do we embed AI into Digital Pathology workflows?
Posted 22nd February 2019 by Joshua Broomfield
This is an exciting time in pathology: now that digital pathology has matured, we have noticed an uptick in AI start-up companies. Most of their algorithms have been developed in a research setting or test environment, and have only recently been applied clinically. What follows is a summary of the work we are doing at the University of Pittsburgh Medical Center (UPMC) testing these AI apps, and some of the questions arising from it.
I’ve been working with a couple of the companies that provide these AI image-analysis tools, in order to do the following:
1. Validate these algorithms on our own images.
Although H&E staining is standard, every lab’s stain has characteristics that are probably unique to its environment, so it is important to make sure that an algorithm developed elsewhere also works on UPMC images. We have therefore been checking whether these algorithms remain accurate in our own lab.
2. Analytical validation studies: co-developing some algorithms.
We have a lot of data to work with, and we know some of the algorithms we’d like to co-develop. We’ve been partnering with those who have the deep learning skills, attempting to match up clinical expertise with machine learning expertise.
3. Explore possibilities to embed these AI algorithms into the clinical workflow.
Assuming it’s been proven that they’re feasible, there is the issue of how exactly to embed these apps into the clinical workflow. That brings up two further questions: (1) where do you insert them into your workflow, and (2) what kind of infrastructure is required in the environment to allow that to happen?
Where can these apps be embedded into the workflow?
One solution is to insert these AI tools before the pathologist even gets to the case. This would allow screening and pre-diagnosis as soon as the image is acquired. The pathologist would then review the case together with the AI data.
An alternative is that once the pathologist has seen the case, they could invoke the AI tool on demand and direct what is needed (for example, requesting a mitotic score). If the pathologist doesn’t want to spend time on such tasks, they can use the AI tools to perform these mundane functions.
Another option – which I haven’t yet explored, but have been thinking about – is whether AI can run in the background once the case has been finished, signed out, and the image archived in the database. That would be great for QA purposes such as cytological or histological correlation. For example, you could use radiology–pathology correlation if that were set up in an enterprise system.
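The three insertion points above can be sketched as hooks in a hypothetical case pipeline. All the names below are illustrative, not from any real LIS or digital pathology API:

```python
from enum import Enum, auto

class InsertionPoint(Enum):
    """Where a hypothetical AI app could hook into the case workflow."""
    PRE_REVIEW = auto()    # screen/pre-diagnose as soon as the image is acquired
    ON_DEMAND = auto()     # pathologist invokes the tool, e.g. for a mitotic score
    POST_SIGNOUT = auto()  # background QA after sign-out, once the image is archived

def run_ai_app(point: InsertionPoint, case_id: str, algorithm: str) -> str:
    """Illustrative dispatcher describing when the AI result reaches the case."""
    if point is InsertionPoint.PRE_REVIEW:
        return f"{algorithm} result attached to {case_id} before pathologist review"
    if point is InsertionPoint.ON_DEMAND:
        return f"{algorithm} run on {case_id} at the pathologist's request"
    return f"{algorithm} run on archived {case_id} for QA correlation"
```

The point of the sketch is only that the same algorithm can be triggered at three very different moments, and the host system needs to know which one it is wiring up.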
What kind of infrastructure is required for these apps?
The other significant question is, how do we embed AI into workflow from an infrastructure perspective?
This means using GPUs, which most hospitals do not have. We use CPUs, and the conversation we are having with our data centre now is whether or not we can access GPUs in order to run these deep learning algorithms. The alternative would be to ship the data out and run it on AWS or another cloud environment.
However, when we have those conversations with our information services security team, concerns come up regarding privacy and security: particularly, having identifiable patient data in an external cloud environment, as well as the expense involved. For now, keeping everything behind our own firewall seems best for us.
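The decision logic described above can be sketched with nothing but the standard library. This is a rough heuristic, not a production check; it simply asks whether an NVIDIA driver toolchain is on the machine's PATH and falls back to the two options discussed:

```python
import shutil

def local_gpu_available() -> bool:
    """Crude check: is the nvidia-smi utility on this machine's PATH?"""
    return shutil.which("nvidia-smi") is not None

def choose_compute() -> str:
    """Prefer an on-premises GPU; otherwise the data must either stay on
    CPU behind the firewall or be de-identified before going to the cloud."""
    if local_gpu_available():
        return "on-prem GPU"
    return "CPU behind firewall, or de-identified data to cloud"
```

A real deployment would of course query the data centre's job scheduler rather than the local machine, but the trade-off is the same: compute location drives both cost and the privacy conversation.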
Questions for further analysis
To summarise the work we have done so far:
- Co-development of algorithms, i.e. analytical validation.
- Clinical validation to see if these algorithms are accurate and useable.
- Thinking about the infrastructure developments needed to support this technology.
Further questions that have come up are as follows: how does the pathologist actually look at the AI data? Do they have an information system to do this? Will it be LIS-centric, or live in a separate digital pathology system? Whichever it is, are the AI apps that we’re validating interoperable? Can they plug into any of the systems that we have here?
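One way to think about the interoperability question is as a shared contract: if every AI app exposed the same minimal interface, either an LIS or a digital pathology viewer could plug it in without caring what is inside. The sketch below is hypothetical; the class and field names are illustrative and not taken from any real vendor API:

```python
from abc import ABC, abstractmethod

class AIApp(ABC):
    """Hypothetical minimal contract an AI app would implement so that
    either an LIS or a digital pathology system can host it."""

    @abstractmethod
    def analyze(self, image_ref: str) -> dict:
        """Take a reference to a whole-slide image; return structured results."""

class MitoticCounter(AIApp):
    def analyze(self, image_ref: str) -> dict:
        # A real app would run a deep learning model here; this returns a stub.
        return {"image": image_ref, "metric": "mitotic_count", "value": None}

def attach_to_case(app: AIApp, image_ref: str) -> dict:
    """The host system only needs to know the shared interface, not the app."""
    return app.analyze(image_ref)
```

In practice this role is what standards such as DICOM for whole-slide images aim to fill, so that validation done against one host system carries over to another.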
This has been the focus of our work and these are some of the questions that we have attempted to answer, all of which are necessary for AI to be adopted in practice.
This is the first of a two-part blog post. The next installment will look further into the potential future of these AI apps in Digital Pathology.
Liron Pantanowitz is Professor of Pathology and Biomedical Informatics at the University of Pittsburgh. He will be elaborating on his work validating AI apps in a presentation at the 5th Digital Pathology & AI Congress: USA.
The 5th Digital Pathology & AI Congress: USA will examine the latest advancements in digital imaging technology, image analysis techniques, approaches to implementation and strategies for adoption, and the latest success case studies. The agenda is available to download here.