Busting Common Myths About Quantitative PCR & Digital PCR

qPCR and dPCR are mature analytical methods for quantifying selected genetic materials in a sample. Both utilise the PCR method to amplify amounts as small as a single molecular fragment (template) to levels where they can be easily detected, usually by fluorescence techniques. 

In dPCR, the sample is partitioned into many subsamples (typically 10^4), with each partition containing about 1 template on average. After amplification the partitions are detected as positives (1 or more templates) or nulls, giving a precision of estimation that can be better than 2% in the optimal range of 1-2 templates/partition (see Fig. 1). Precision remains good down to μ = 0.1 template/partition. For 10^4 partitions, each of volume 1 nL (typical for commercial droplet digital PCR instruments), this μ translates into 100 templates/µL. Through predilution, there is no upper limit on the initial concentration in the sample.
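To make the arithmetic concrete, here is a minimal Python sketch (names and numbers are illustrative, not taken from any instrument's software) of the standard Poisson estimate: the mean copies/partition m is recovered from the fraction of null partitions via f_null = exp(-m), and its relative standard deviation follows from binomial counting statistics, reproducing the qualitative behaviour of the curve in Fig. 1.

import numpy as np

def dpcr_estimate(n_total, n_positive):
    """Estimate mean copies/partition from the fraction of null partitions,
    using the Poisson relation f_null = exp(-m), i.e. m = -ln(f_null)."""
    f_null = (n_total - n_positive) / n_total
    m = -np.log(f_null)
    # binomial uncertainty in f_null, propagated to m: sd(m) = sd(f_null)/f_null
    sd_f = np.sqrt(f_null * (1.0 - f_null) / n_total)
    return m, sd_f / f_null

rng = np.random.default_rng(1)
n = 10_000                                   # number of partitions, as in Fig. 1
for mu in (0.1, 0.5, 1.0, 1.6, 2.0, 5.0):    # true average copies/partition
    positives = np.count_nonzero(rng.poisson(mu, n) > 0)
    m, sd = dpcr_estimate(n, positives)
    print(f"mu = {mu:3.1f}   m = {m:5.3f}   % sd = {100 * sd / m:4.1f}")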

In qPCR, quantitation is obtained either absolutely or relatively through comparison with reference materials. For absolute quantitation, the accepted best procedure is calibration with samples containing the target substance at a series of known concentrations chosen to encompass that expected for the unknown. Each sample is characterised by its quantitation cycle Cq, which may be extracted from the sigmoidal growth profiles (fluorescence signal vs cycle number) in several ways:

  • Threshold – the cycle where the signal reaches a designated level above baseline
  • FDM – the cycle where the maximum in the first derivative of the growth curve occurs
  • SDM – the cycle of the maximum in the second derivative
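As a rough illustration of how these markers differ, the following Python sketch extracts Ct, FDM, and SDM from a simulated growth profile; the logistic model and its parameters are invented for illustration and are not how any particular instrument computes Cq.

import numpy as np

# Simulated sigmoidal growth profile (logistic model; parameters invented)
cycles = np.arange(1, 41, dtype=float)
F_max, C_half, k = 100.0, 25.0, 0.6          # plateau, midpoint cycle, steepness
signal = F_max / (1.0 + np.exp(-k * (cycles - C_half)))

# Threshold marker (Ct): interpolate the cycle where the signal crosses the level
threshold = 10.0
i = int(np.argmax(signal >= threshold))
Ct = np.interp(threshold, signal[i-1:i+1], cycles[i-1:i+1])

# FDM and SDM: cycles of the maxima of the first and second numerical derivatives
d1 = np.gradient(signal, cycles)
d2 = np.gradient(d1, cycles)
FDM = cycles[np.argmax(d1)]
SDM = cycles[np.argmax(d2)]

print(f"Ct = {Ct:.2f},  FDM = {FDM:.0f},  SDM = {SDM:.0f}")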

Figure 1.  Percent standard deviation in the estimate m of the average copy number μ for 10^4 partitions.  The curve shows expected results for monodisperse partitions, where m = μ.  Points are from Monte Carlo simulations for 0-20% polydispersity.  [Adapted from Fig. 1 in Anal. Chem. 2016, DOI 10.1021/acs.analchem.6b03139.]

Another such marker is "Cy0", obtained as the intercept with the cycle axis of a line tangent to the growth curve at its FDM. Historically, the threshold marker (usually written Ct) has been the most widely used; however, it is arguably the worst choice (see below). Following from the exponential nature of growth, N = N0 E^x, where N is the number of amplicons at cycle x, N0 the initial copy number, and E the amplification efficiency (range 1-2), the calibration curve is commonly taken as a straight-line least-squares (LS) fit of Cq vs. log(N0) for the knowns.
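A minimal Python sketch of this calibration workflow, with invented standards, might look as follows; the relation E = 10^(-1/slope) follows directly from N = N0 E^x and a fixed detection threshold.

import numpy as np

# Known standards: initial copy numbers and their measured Cq values (invented)
N0_known = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
Cq_known = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

# Straight-line LS calibration fit: Cq = slope*log10(N0) + intercept
slope, intercept = np.polyfit(np.log10(N0_known), Cq_known, 1)

# For N = N0*E^x crossing a fixed threshold, slope = -1/log10(E); E = 2 gives ~ -3.32
E = 10.0 ** (-1.0 / slope)
print(f"slope = {slope:.3f}, amplification efficiency E = {E:.3f}")

# Invert the calibration line to estimate an unknown from its measured Cq
Cq_unknown = 27.5
N0_unknown = 10.0 ** ((Cq_unknown - intercept) / slope)
print(f"estimated N0 for Cq = {Cq_unknown}: {N0_unknown:.3g} copies")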

There is no practical lower concentration limit for qPCR: through "limiting dilution" procedures, growth profiles have been recorded for single templates. However, even with the standard procedure of recording knowns and unknowns in triplicate, precisions have typically been poorer than those expected in dPCR, with workers often settling for ~25% precision. In principle, we can do much better.

A misplaced emphasis in dPCR

In the development of dPCR technology, a great deal of effort has been focused on making the partitions as nearly constant in volume (monodisperse) as possible. As I previously noted, this is "barking up the wrong tree!" Emphasis could better be put on higher throughput and lower cost, if these can be achieved through techniques that relax the monodispersity demand. This is the essential result shown in Fig. 1: the precision for estimating m is independent of the level of polydispersity.

There is a problem, though. With polydispersity, m becomes a biased estimate of μ. The solution is calibration with reliable reference materials, so establishing such references must be part of polydisperse dPCR. Of course, the distribution of partition volumes must remain constant run-to-run for such calibration to be feasible, just as a monodisperse partition generator must deliver the same volume run-to-run. Alternatively, for moderate polydispersity the biases are not great for μ < 1 and might be negligible in many applications. I will discuss these results in detail at the upcoming San Francisco meeting.
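The following Monte Carlo sketch (my own illustration in Python, not the published simulation code behind Fig. 1) makes the point numerically: with a 20% spread in partition volumes, the Poisson estimate m acquires a small bias relative to μ, while its run-to-run precision is essentially unchanged.

import numpy as np

rng = np.random.default_rng(0)
n_part, mu, n_runs = 10_000, 1.0, 200        # partitions, true copies/partition, MC runs

def one_run(cv):
    """One simulated dPCR run: draw relative partition volumes (mean 1, sd = cv),
    Poisson-load them, and estimate m from the fraction of null partitions."""
    vols = np.clip(rng.normal(1.0, cv, n_part), 0.05, None) if cv else np.ones(n_part)
    f_null = np.mean(rng.poisson(mu * vols) == 0)
    return -np.log(f_null)

for cv in (0.0, 0.2):                        # monodisperse vs 20% polydispersity
    m_vals = np.array([one_run(cv) for _ in range(n_runs)])
    print(f"cv = {cv:.1f}:  mean m = {m_vals.mean():.4f}   "
          f"bias = {m_vals.mean() - mu:+.4f}   % sd = {100 * m_vals.std() / m_vals.mean():.2f}")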

The results in Fig. 1 seem counterintuitive: clearly a large partition must have a greater probability of containing one or more copies than a small one, so a mixture of large and small partitions should give reduced precision. This reasoning is flawed because of the role of binomial statistics in determining the curve in Fig. 1. Consider a simple Monte Carlo (MC) experiment: generate 1000 random numbers (RNs) uniformly distributed on the interval 0-1 and sort them into two groups having values < 0.5 and > 0.5. The expected result is 500 in each group, but the binomial standard deviation is (npq)^(1/2), where n is the number of trials (1000), p the probability of "success" and q of "failure" (both 0.5 here).

Thus we can expect both groups to fall in the range 500±16 about 2/3 of the time if we repeat this experiment. Now consider a second experiment where each trial consists of two RNs. The first plays the role of polydispersity in dPCR and sets the probability for the second. For example, if the first RN = 0.718, then we call the second a null if it is < 0.718 and a positive if greater. The average for the first RN is 0.5, and we get exactly the same statistics for the second RN in this experiment as in the initial one.
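Both thought experiments are easy to reproduce; a short Python sketch (parameters illustrative) shows that the random per-trial cutoff leaves the mean and the binomial scatter of the counts unchanged.

import numpy as np

rng = np.random.default_rng(42)
n_trials, n_repeats = 1000, 2000

# Experiment 1: fixed cutoff of 0.5 for every trial
counts1 = [np.sum(rng.random(n_trials) < 0.5) for _ in range(n_repeats)]

# Experiment 2: each trial gets its own random cutoff (the "polydispersity" RN)
def experiment2():
    cutoffs = rng.random(n_trials)    # first RN: sets the per-trial probability
    draws = rng.random(n_trials)      # second RN: null if below the cutoff
    return np.sum(draws >= cutoffs)   # count the positives

counts2 = [experiment2() for _ in range(n_repeats)]

# Both should scatter about 500 with sd ~ sqrt(npq) = sqrt(1000*0.5*0.5) = 15.8
print(f"fixed cutoff : mean = {np.mean(counts1):.1f}, sd = {np.std(counts1):.1f}")
print(f"random cutoff: mean = {np.mean(counts2):.1f}, sd = {np.std(counts2):.1f}")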

Flawed analysis procedures in qPCR

My colleague Spiess and I have critiqued qPCR analysis procedures in several papers [two in Anal. Biochem. in 2014, one in Anal. Chem. in 2015, and one in Sci. Rep. (Nature) in 2016, with coauthors Rödiger, Burdukiewicz, & Volksdorf]. To cut to the quick, we have found: (1) Ct, as usually implemented, is the worst choice of Cq marker. (2) In LS calibration fits of Cq vs. log(N0), nonlinearity is more the rule than the exception, meaning that linear-based estimates for unknowns are biased and the amplification efficiency is not constant. (3) Under most conditions the precision for estimating Cq is limited by imprecision in pipetted volumes rather than by the quality of the fluorescence data. Space concerns dictate that a fuller discussion of these matters be postponed to a later feature.

 

Joel Tellinghuisen is Professor of Chemistry, Emeritus at Vanderbilt University in Nashville and has published papers on data analysis problems in HPLC, in soil phosphate sorption work, and in isothermal titration calorimetry.

 

Joel will be presenting at the 4BIO Summit: USA on monodispersity and results in dPCR. To see the full list of presentations, take a look at the agenda now.  
