The hidden limits of digital PCR and what comes next

Eleen Shum │ Apr 28, 2025

Digital PCR (dPCR) marked a meaningful step forward in molecular quantification. By partitioning reactions into thousands of compartments, it enabled more sensitive detection, removed the need for standard curves, and improved the quantification of rare targets. For many applications, it became the go-to method for higher accuracy.

But despite its advantages, dPCR still carries fundamental limitations — especially when it comes to dynamic range, assay flexibility, and true single-molecule resolution.

Statistical estimation in dPCR introduces uncertainty in quantification at the edges

While dPCR introduced the concept of isolating targets, it doesn’t directly observe single molecules. Molecules distribute randomly across partitions, so occupancy is statistical, not one-per-compartment — multiple molecules can land in the same droplet or well. As a result, dPCR fundamentally relies on Poisson correction to estimate molecule counts.

But for Poisson statistics to work, strict assumptions must be met — including consistent droplet or well size across the entire run. In practice, this is vulnerable to variation. Manufacturing inconsistencies in dPCR consumables, shifts in droplet generation between batches, or even changes introduced in software updates can affect partition volume. These subtle variations propagate into quantification errors, making reproducibility more difficult than it appears. Ultimately, it’s not a true counting method — it’s an inference model built on assumptions.
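To see why, the Poisson correction can be written out directly. The sketch below is illustrative only — the partition counts and droplet volume are made-up numbers, not taken from any specific instrument — but it shows how an error in the assumed partition volume propagates one-for-one into the reported concentration, even when the raw positive/negative counts are identical.

```python
import math

def poisson_estimate(positive, total, partition_volume_nl):
    """Estimate concentration (copies/uL) from dPCR partition counts.

    dPCR infers the mean copies per partition (lambda) from the
    fraction of negative partitions, assuming every partition has
    exactly the same volume.
    """
    negative_fraction = 1 - positive / total
    lam = -math.log(negative_fraction)          # mean copies per partition
    return lam / (partition_volume_nl * 1e-3)   # copies per uL

# Hypothetical run: 20,000 partitions, 12,000 positive,
# nominal 0.85 nL droplets.
nominal = poisson_estimate(12000, 20000, 0.85)

# If the true droplet volume drifts 5% from the assumed value,
# the reported concentration shifts by the same 5% — with
# identical raw counts.
biased = poisson_estimate(12000, 20000, 0.85 * 1.05)
```

The counts never change; only the volume assumption does — which is exactly the kind of consumable- or batch-level variation the paragraph above describes.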

A constrained dynamic range forces more complex workflows

Many biological samples contain targets that vary widely in abundance — from rare mutations to highly expressed housekeeping genes. dPCR’s fixed partition capacity makes it difficult to capture that full range in a single run. High-abundance targets can saturate partitions, while low-abundance ones may be missed due to limited sample input.
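A back-of-the-envelope way to see the edges of that range is the delta-method variance of the Poisson estimate, a standard result in dPCR statistics. The sketch below (assuming a hypothetical 20,000-partition run) shows the relative uncertainty growing at both extremes — when positives are rare and when partitions approach saturation:

```python
import math

def lambda_cv(lam, n_partitions):
    """Approximate relative uncertainty (CV) of the Poisson estimate
    of mean copies per partition, via the delta method."""
    p = 1 - math.exp(-lam)               # expected positive fraction
    var = p / (n_partitions * (1 - p))   # delta-method variance of lambda-hat
    return math.sqrt(var) / lam

n = 20000
low  = lambda_cv(0.001, n)  # rare target: very few positive partitions
mid  = lambda_cv(1.0, n)    # comfortable middle of the range
high = lambda_cv(7.0, n)    # near saturation: almost all partitions positive
```

Precision is best in the middle; at either edge the same instrument delivers noisier answers, which is why high- and low-abundance targets are hard to quantify well in a single fixed-partition run.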

This is why many researchers still run qPCR and dPCR side by side: dPCR for sensitivity, qPCR for dynamic range. While qPCR doesn’t offer true single-molecule resolution, it’s widely accessible and relatively fast to set up. Multiplexing is challenging, so many scientists simply run multiple singleplex SYBR Green reactions to get “OK enough” data — a practical workaround that feels easier, even if it’s inefficient.

But relying on two platforms means more time, more sample usage, more experimental cost, and more complexity — especially when it comes to normalization and reproducibility across workflows.

Dead volume in microfluidic systems silently reduces your analyzable sample

A lesser-known limitation of microfluidic systems is dead volume — the portion of sample lost before it ever reaches a partition. Depending on the platform, that can be as much as 30–50%, which is especially problematic when working with low-input or precious samples like cfDNA, CSF, or rare tissue biopsies.

This means that even before amplification or data analysis, critical molecules may already be lost — not due to biology, but due to system design.
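The arithmetic is simple but easy to overlook. A minimal sketch, assuming each molecule is independently lost with probability equal to the dead-volume fraction (the copy numbers here are hypothetical, chosen only to illustrate the scale of the effect):

```python
def detectable_copies(input_copies, dead_volume_fraction):
    """Expected number of target molecules that actually reach a
    partition, given the fraction of sample lost as dead volume."""
    return input_copies * (1 - dead_volume_fraction)

def prob_all_lost(input_copies, dead_volume_fraction):
    """Probability that every copy is lost to dead volume — i.e. the
    target is missed entirely before amplification even begins."""
    return dead_volume_fraction ** input_copies

# A cfDNA sample carrying 20 copies of a rare variant: with 40%
# dead volume, on average only 12 copies are available to count.
expected = detectable_copies(20, 0.40)

# With just 5 input copies, there is about a 1% chance the
# variant never reaches a partition at all.
miss = prob_all_lost(5, 0.40)
```

None of this loss is biological — it is purely a consequence of how much sample the fluidics consume before partitioning.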

Countable PCR was built to address these systemic limitations, not iterate on them

Countable PCR was built to solve these limitations — not by refining microfluidics, but by removing them altogether. Instead of relying on partitioning and estimation, it uses a unique matrix-based system to isolate and amplify true single molecules in large reaction volumes.

These reactions are then imaged in full 3D, allowing direct molecule counting with no need for Poisson correction, thresholding, or curve fitting.

What's possible when you remove Poisson and count single molecules directly?

  • Broader dynamic range — from low-abundance targets to high-copy genes
  • Higher sensitivity, thanks to full-volume imaging and negligible sample loss
  • Fewer replicates, fewer workarounds, and cleaner data output
  • New assay possibilities — from fundamentally easier multiplexing to long-amplicon detection

Because each molecule is isolated in space, multiplexing on Countable PCR avoids the signal interference and competition seen in qPCR or dPCR. There’s no need for probe balancing, threshold tuning, or signal deconvolution. This opens the door to robust multi-target panels with minimal optimization effort — even with long amplicons or GC-rich targets that traditionally fail under partitioned or real-time conditions. What’s typically a high-effort assay development process becomes straightforward.

From estimation to certainty. A new era of molecular quantification.

dPCR made quantification more precise, but its reliance on estimation, constrained input, and rigid architecture leaves real gaps. Researchers have compensated with workarounds — more reactions, more platforms, more sample — but the tradeoffs are real.

Countable PCR offers a way forward: a single system designed to count what’s there, across the full range of biology, without the constraints of traditional partitioning — and with new capabilities that make previously difficult assays routine.
