Big data, knowledge and AI: do they influence drug discovery?

Posted 8th November 2017 by Jane Williams
The quest for magic bullets started with Paul Ehrlich, who wanted to target the syphilis bacterium without harming the patient; that search led to Salvarsan. Ehrlich was a foundational thinker, to whom we owe modern concepts such as the receptor and the pharmacophore. His impact on the field – the search for ideal molecules – pervades drug discovery.
From a chemical standpoint, there is no strict definition of a “drug”, although this changes abruptly in regulatory science: unless a stamp of approval is given, no molecule is a medicine. From a computational perspective, this ambiguity is the biggest challenge. Thus, mimicry is key in drug discovery, and the world of medicines is rife with me-too drugs.
In many scientific fields, software-driven machines are at the root of data-driven revolutions. For the most part, these success stories deal with knowable (e.g., terrain) or closed-world (e.g., Go) problems. Drug discovery, however, remains an open-world problem, because new drugs are invented rather than merely found. Out of hundreds of billions of possible molecules, we remain challenged to pick the right ones – a multi-parameter optimisation problem.
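The multi-parameter nature of that challenge can be sketched as a toy Pareto filter: even with perfect scores on each property, candidate molecules rarely win on every axis at once, so the best we can do is narrow the field to non-dominated trade-offs. All molecule names and scores below are hypothetical, purely for illustration.

```python
def dominates(a, b):
    """True if candidate a scores at least as well as b on every
    objective and strictly better on at least one.
    Objectives are (potency, solubility, safety) -- higher is better."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return {
        name: scores
        for name, scores in candidates.items()
        if not any(
            dominates(other, scores)
            for other_name, other in candidates.items()
            if other_name != name
        )
    }


# Hypothetical normalised scores in [0, 1].
candidates = {
    "mol_A": (0.9, 0.2, 0.5),  # potent but poorly soluble
    "mol_B": (0.6, 0.8, 0.7),  # balanced profile
    "mol_C": (0.5, 0.7, 0.6),  # beaten by mol_B on every axis
    "mol_D": (0.3, 0.9, 0.9),  # safe and soluble, weakly potent
}

front = pareto_front(candidates)
print(sorted(front))  # mol_C drops out; no single "best" molecule remains
```

Note that three of the four toy candidates survive the filter: multi-objective trade-offs prune the obviously dominated options but still leave a front of incomparable choices, which is exactly where human (or AI) judgment has to take over.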
We can imitate and we can predict, but we mostly stumble across the “right ones”. Caught between influx/efflux pumps, efficacy and safety in distinct populations, food and drug interactions, genetic variants and microbiome influence, and dynamic interactions along adaptive pathways, the use of AI in drug discovery remains an aspiration more than anything else.
What limits AI’s impact is the quality, not merely the quantity, of massive data sources. How much is evidence? How much is truth? How often do facts change? To navigate the sea of data leading to knowledge, “Illuminating the Druggable Genome” (IDG), an NIH-funded program, focuses on accurate data wrangling, processing and analytics for target discovery. Accessible via the Pharos portal, IDG has shown that there is a knowledge deficit for two out of five human proteins.
Big data, as long as it remains incomplete, will paint partial images: a skewed reality upon which the house of AI would rest as if built on quicksand. Given recent improvements in machine learning, our key goal ought to be to increase the veracity of big data, from molecular to population sciences, to improve knowledge, and to increase our confidence in that knowledge. In that context, AI is likely to lead drug discovery.
Tudor Oprea is a Professor and Chief of Translational Informatics at the University of New Mexico. Next month, Tudor will explore the quest for new drug targets at the Global Pharma R&D Informatics Congress.
If you found this interesting, you might like “What’s Driving Change in Drug Discovery and Development?”