Why do biomarkers fail?
Posted 12th June 2019 by Joshua Broomfield
When conducting an experiment to identify biomarkers, it is crucial to design the experiment properly. Some 80-90% of the candidate biomarkers published over the last 20 years have not been, and cannot be, reproduced, and the main reason biomarkers fail is that these experiments are not designed properly. In this post, I will outline two ways in which experiments are poorly designed; in a later post, I will outline the technological and methodological solutions.
Labelling of cancer types
One reason why experiments fail in cancer is that cancers are labelled in an unhelpful manner. Ovarian cancer, for example, really comprises four or five subtypes, arising from different sub-tissues of the ovary with distinct underlying structures. The pathologist and the clinician may label them all as ovarian cancer, but from a histological point of view, the various cancers are very different.
A lot of projects fail because patient samples are labelled collectively as ‘ovarian cancer’ before the liquid biopsy is searched for signatures. These projects then fail to find a biomarker common to so-called ovarian cancer, because the pooled samples represent several distinct diseases.
But if the experiment is designed to obtain a more accurate histological sub-diagnosis, then patients can be grouped according to subtype. This is almost like running several different experiments, but designed this way it becomes possible to find a pattern unique to the patients whose cancer arises from the same specific tissue. This might then yield multiple sets of biomarkers, with multiple proteins in each, making it possible to run tests whose biomarkers distinguish the subtypes of cancer.
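The stratification step described above can be sketched in a few lines of code. This is a minimal illustration, not part of any real pipeline: the patient IDs and subtype names are hypothetical placeholders, and a real study would stratify on a pathologist-confirmed sub-diagnosis.

```python
from collections import defaultdict

# Hypothetical records: (patient_id, histological_subtype) pairs.
# Subtype labels here are illustrative, not a clinical classification.
samples = [
    ("P001", "high-grade serous"),
    ("P002", "endometrioid"),
    ("P003", "high-grade serous"),
    ("P004", "clear cell"),
    ("P005", "mucinous"),
]

def stratify_by_subtype(records):
    """Group patient IDs by histological subtype so that biomarker
    discovery runs per subtype rather than on one pooled
    'ovarian cancer' cohort."""
    groups = defaultdict(list)
    for patient_id, subtype in records:
        groups[subtype].append(patient_id)
    return dict(groups)

cohorts = stratify_by_subtype(samples)
# Each cohort then feeds its own discovery experiment.
```

The point of the sketch is simply that discovery is run once per group, so a signature found in one subtype is never diluted by samples from another.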
A lot of money is wasted by companies and academia because they rely on the clinicians' labelling and do not make these distinctions. If the experiment is designed around the subtypes, it will produce a signature that works. It is, of course, more expensive to do it this way, but it yields far more useful results.
Using sufficient sample numbers
To use the example of ovarian cancer again: needing to screen at least 200 patients may in practice mean screening 80 of each subtype, plus roughly 200 healthy samples in order to distinguish anything. In total, this requires at least 500 patients to produce significant results.
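The figures above are the author's rule of thumb, but numbers of this order fall out of standard power calculations. As a rough illustration only, here is the textbook sample-size formula for comparing two proportions, sketched in Python; the 70% vs. 50% biomarker-positive rates are assumed for the example and do not come from any real study.

```python
from math import ceil, sqrt
from statistics import NormalDist

def samples_per_group(p1, p2, alpha=0.05, power=0.80):
    """Standard sample-size formula for comparing two proportions
    (e.g. biomarker-positive rate in one cancer subtype vs. healthy
    controls) at significance level alpha and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical effect size: marker present in 70% of a subtype
# but only 50% of controls.
n = samples_per_group(0.70, 0.50)
```

Even this fairly generous effect size demands on the order of a hundred patients per group; split a disease into four or five subtypes, each needing its own comparison against controls, and totals in the hundreds follow quickly.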
For the majority of funded research projects, there is only enough money to screen roughly 100 patients. That might take a postdoc and a technician 2-3 years, and they still will not have enough patients to identify anything significant. They may also produce results in a way that is not repeatable, for example by buying arrays from different vendors with no internal controls or reproducibility checks.
So much of the research is done that way and nothing can come of it, rendering it a waste of time, money and resources.
The main challenge in this area is to get people to design experiments that produce useful results, rather than simply running routine tests. A lot of money has been spent that could be used more beneficially, given that in the last 20 years biomarker studies have yielded hardly any meaningful diagnostic biomarker from the liquid biopsy.
In my next post, I will look at a way forward in meeting this challenge.
Dolores Cahill is Professor of Translational Science at University College Dublin, Ireland.
At the Liquid Biopsies Congress: Europe, talks will focus on circulating biomarkers such as cell-free DNA (cfDNA), circulating tumour cells (CTCs), and extracellular vesicles (EVs). Click here to download the agenda and view the presentation titles.