NEW YORK – In separate studies published this week in Science Translational Medicine, two independent research teams have presented approaches for optimizing pooled molecular SARS-CoV-2 testing.
One study, conducted by a team led by researchers at Hebrew University of Jerusalem, looked at that institution's experience using pooling to run more than 133,000 samples between April and September 2020. The other, led by researchers at the Broad Institute and Harvard University, presented methods for optimizing pooled testing by accounting for patient viral load and the trajectory of the outbreak within the population being tested.
Both studies found that pooling could significantly improve testing efficiency while maintaining sufficient sensitivity.
As labs globally have struggled with supply chain issues around molecular testing for SARS-CoV-2, pooling has been put forth as an approach that could help address these challenges. Commonly used in applications like blood banking, pooled testing combines multiple samples and runs them in a single reaction.
In the simplest pooling scheme, called Dorfman pooling, a set of samples is combined and tested together in a single run. If the pool tests negative, no further testing is needed, and multiple samples have been tested for the price (in terms of reagents and instrument time) of one. If the pool tests positive, the lab must retest every sample in it individually to determine which caused the positive result.
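In code, the two-stage Dorfman procedure might look like the following minimal sketch (an illustration of the general scheme, not code from either study; the is_positive function is a hypothetical stand-in for running a single RT-PCR reaction):

def dorfman_test(samples, pool_size, is_positive):
    """Return the set of positive sample indices and the number of reactions used."""
    positives, reactions = set(), 0
    for start in range(0, len(samples), pool_size):
        pool = samples[start:start + pool_size]
        reactions += 1                      # one reaction covers the whole pool
        if is_positive(pool):               # pool is positive if any member is
            for i, sample in enumerate(pool, start):
                reactions += 1              # retest each member individually
                if is_positive([sample]):
                    positives.add(i)
    return positives, reactions

The efficiency gain comes entirely from the negative pools, each of which resolves all of its member samples with a single reaction.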
More complex pooling schemes combine samples in multiple, overlapping pools. In the event of a positive result, the positive sample can then be identified by looking at which pools were positive and which were negative, allowing researchers to skip the second round of testing or at least reduce the number of samples that must be retested.
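One common combinatorial design, shown here purely as an illustration (neither study used it), arranges samples in a grid and pools each row and each column, so a single positive sample can be located at the intersection of the one positive row pool and the one positive column pool:

def matrix_pools(n_rows, n_cols):
    # Sample r * n_cols + c joins row pool r and column pool c.
    rows = [[r * n_cols + c for c in range(n_cols)] for r in range(n_rows)]
    cols = [[r * n_cols + c for r in range(n_rows)] for c in range(n_cols)]
    return rows, cols

def decode_single_positive(positive_rows, positive_cols, n_cols):
    # With exactly one positive row pool and one positive column pool, the sample
    # is identified with no second round of testing; otherwise the candidates at
    # the intersections still need individual retests.
    if len(positive_rows) == 1 and len(positive_cols) == 1:
        return positive_rows[0] * n_cols + positive_cols[0]
    return None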
The STM studies focused primarily on Dorfman approaches, which Moran Yassour, principal investigator at Hebrew University and senior author on one of the papers, said she and her colleagues decided to use for their testing because "it is much simpler to pool and interpret."
As Yassour and his co-authors noted, while pooling has been much discussed, there have been few reports detailing the results of actual large-scale pooling programs. Between April and September of last year, the researchers tested 133,816 specimens using pooling at Jerusalem's Hadassah Medical Center.
Because pooled samples must be broken apart and tested separately in the case of a positive result, the optimal size of a given pool will depend on the prevalence rate in a community, with smaller pools being more efficient at higher rates. The Hebrew University team used either 8-sample or 5-sample pools, though Yassour said they have since begun using pools as large as 16 samples. In total, they tested all 133,816 specimens with 32,466 tests, 76 percent fewer than they would have needed under a non-pooled scheme.
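A back-of-envelope calculation shows why smaller pools win at higher prevalence (a generic Dorfman-pooling estimate that assumes positives fall independently across pools, not an analysis from either paper): the expected number of tests per sample for a pool of size n at prevalence p is 1/n + 1 - (1 - p)^n, one pool test shared across n samples plus a full round of retests whenever the pool comes back positive.

def expected_tests_per_sample(pool_size, prevalence):
    return 1 / pool_size + 1 - (1 - prevalence) ** pool_size

for p in (0.005, 0.01, 0.02, 0.05):
    best = min(range(2, 17), key=lambda n: expected_tests_per_sample(n, p))
    saving = 1 - expected_tests_per_sample(best, p)
    print(f"prevalence {p:.1%}: best pool size {best}, ~{saving:.0%} fewer tests")

At prevalence of around 1 to 2 percent, pools of roughly 8 to 11 samples save on the order of 70 to 80 percent of tests under this simple model, broadly consistent with the 76 percent reduction the Hebrew University team reports.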
In addition to efficiency, sensitivity is a major concern with sample pooling, as samples are diluted when pooled, meaning that, for instance, an 8-sample pool with one positive sample would have one-eighth the viral concentration of that positive sample tested on its own. The Hebrew University researchers noted, though, that pooled testing exceeded their sensitivity estimates, attributing this to what they called a "hitchhiker phenomenon" in which "strongly positive samples lead to individual testing of all samples in the pool, revealing weakly positive 'hitchhikers.'"
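As a rough rule of thumb for what that dilution means in RT-qPCR terms (a generic estimate, not a figure from either study), template roughly doubles with each amplification cycle, so an n-fold dilution pushes a sample's cycle threshold (Ct) up by about log2(n) cycles:

import math

for pool_size in (5, 8, 16):
    # e.g. an 8-fold dilution corresponds to roughly a 3-cycle Ct shift
    print(f"{pool_size}-sample pool: Ct shift of ~{math.log2(pool_size):.1f} cycles")

A weakly positive sample already near the assay's Ct cutoff can therefore slip below the detection limit in a pool unless, as both teams observed, a strongly positive pool-mate triggers the individual retest that rescues it.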
The Broad and Harvard researchers observed a similar phenomenon in which they wrote, "samples with lower viral loads, which would otherwise be missed due to dilution are… 'rescued' by coexisting in the same pool with high viral load samples and thus ultimately get individually retested."
They made this observation as part of a larger analysis of the influence of the dynamics of an outbreak on pooling effectiveness.
"The distribution of viral loads among infected people changes over the course of an epidemic," said Aviv Regev, an author on the study and formerly a primary investigator at the Broad and the Massachusetts Institute of Technology and now head of research and early development at Genentech. "Not just because more or fewer people are infected, but also the distribution [how many people have high versus low viral loads] is different when the epidemic is declining or growing, even at the same prevalence."
"The intuition is that when cases are rising, more infections are typically recent and have higher viral loads, whereas when the epidemic is declining, more infections are typically older with lower viral loads," she added. "This is important to consider when evaluating test sensitivity at different stages of the pandemic. If we expect more of our samples to have low viral loads, then diluting these samples through pooling may have a bigger impact on sensitivity. If more samples have high viral loads, then we can get away with pretty aggressive pool sizes and not expect to lose sensitivity."
An understanding of an outbreak's dynamics can also inform the sensitivity a lab should aim for in its pooling strategies, she said, noting that, for instance, if most missed cases are from patients with low viral loads at the end of their infections, this may not have a meaningful clinical or epidemiological impact.
"Even if pooling reduces sensitivity because it dilutes low viral load samples, we see that most of those low viral load individuals are well past the peak of their infection and therefore likely to be no longer infectious," she said.
Regev said she believed the group's STM study was the first analysis of pooling strategies "that account for epidemic dynamics," which, she added, could reduce how much labs need to adjust their pooling strategies as infection prevalence changes.
"By projecting how we expect prevalence to change over time, we can find strategies that will remain effective for weeks or months," she said.