NEW YORK (GenomeWeb) – A team of prominent life science researchers has developed a set of proposed guidelines for antibody validation.
Published this week in Nature Methods, the guidelines are meant as initial recommendations ahead of a larger meeting later this month that will bring together stakeholders including researchers, journals, and funding agencies, Mathias Uhlen, a researcher at the KTH Royal Institute of Technology in Stockholm who is first author on the paper and one of the leaders of the effort, told GenomeWeb.
The researchers are operating as the International Working Group for Antibody Validation (IWGAV), which they describe in the paper as "an ad hoc committee of international scientists with diverse research interests but the shared goal of improving antibody use and validation." The group's work is supported by Thermo Fisher Scientific, which this week announced that it plans to test the specificity of its antibodies using the guidelines put forth in the Nature Methods publication.
Antibody quality has long been a significant issue for proteomics and life sciences research more generally, but discussion around it has gained traction in recent years. In 2015, for instance, a group of 112 researchers published a commentary in Nature noting that an estimated $800 million is wasted annually on faulty antibodies and calling for better validation and standardization of these reagents.
This uptick in researcher concern has coincided with certain technical developments that could allow for more rigorous antibody validation. For instance, CRISPR gene-editing technology has made knockout validation — in which antibodies are tested for specificity in cell lines in which their target protein has been removed — feasible on a large scale. An immunoprecipitation-mass spectrometry workflow developed by University of Toronto researcher Aled Edwards, also a co-author on the Nature Methods paper, could aid validation of antibodies for immunoprecipitation and other experiments.
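The logic behind knockout validation is simple enough to express in a few lines of code. The following is a minimal sketch, assuming hypothetical signal intensities (for example, Western blot densitometry readings) and an illustrative 90 percent signal-loss threshold; the antibody names and numbers are invented and do not come from the paper.

```python
# Minimal sketch of knockout validation: an antibody's signal should
# largely disappear in a cell line whose target gene has been knocked
# out. Values and the 90% threshold are illustrative assumptions.

def passes_knockout_validation(parental_signal, knockout_signal, min_loss=0.9):
    """Return True if the signal drops by at least `min_loss`
    (fractional) in the knockout line, suggesting target specificity."""
    if parental_signal <= 0:
        return False  # no signal in parental cells; cannot evaluate
    loss = 1.0 - (knockout_signal / parental_signal)
    return loss >= min_loss

# (parental intensity, knockout intensity) for two hypothetical clones
antibodies = {
    "anti-target-clone-A": (1200.0, 45.0),   # signal vanishes in knockout
    "anti-target-clone-B": (980.0, 870.0),   # signal persists: likely off-target
}

for name, (wt, ko) in antibodies.items():
    verdict = "specific" if passes_knockout_validation(wt, ko) else "fails validation"
    print(f"{name}: {verdict}")
```

In practice, the readout and the pass/fail cutoff would depend on the assay being validated, which is part of what the guidelines aim to standardize.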
But while tools exist to improve antibody validation, "there are really no accepted universal guidelines" for when and how to do this validation, Uhlen said.
"This is very much illustrated by the fact that if you go to an antibody portal where they are selling antibodies, you will have more than 2 million antibodies to human proteins right now, and only a fraction of them have been validated," he said. "Therefore, it is not very easy for users to pick and choose what antibodies they should use for their particular application, and this is dangerous because we are wasting a lot of time and energy on things that might not work."
As head of the antibody-based Human Protein Atlas, Uhlen is perhaps one of the largest individual users of antibodies in the world, giving him a unique vantage point on just how unreliable these reagents can be.
"Every day I see the problems with validating antibodies," he said. In a 2014 interview with GenomeWeb, Uhlen noted that his group in the course of its work had tested more than 25,000 antibodies from roughly 50 different suppliers, and that the majority of these reagents have not worked in their hands.
In some cases, antibody problems are the result of what appears to be straightforward fraud. For instance, several researchers have in the past identified USCN Life Sciences, a Wuhan, China-based antibody firm, as a purveyor of faulty reagents.
In 2012, researchers from Harvard Medical School and the University of Alabama published a letter to the editor in the American Journal of Nephrology in which they expressed concern about a previously published study using a USCN ELISA for measuring HJV protein.
In their letter, the researchers wrote that through efforts to replicate this study, they had determined that the "USCN HJV ELISA does not detect human or mouse HJV, but is instead recognizing some other protein(s)."
More commonly, antibodies detect their stated target but suffer from specificity issues. Complicating matters further, an antibody validated for use in one type of experiment may not work in another. That an antibody works well in a Western blot, for instance, is no guarantee that it will perform equally well in a flow cytometry assay.
This is not sufficiently appreciated by many researchers, Uhlen said, noting that one of the key messages of the proposal is "that validation has to be done in an application-specific way."
Yet to be determined is whether to take things a step further and recommend that antibodies be validated not only for use in specific assays, but also for use in specific contexts.
"If you look at the protein in a liver extract and then move to a kidney extract, that [same antibody] might now not work," Uhlen said. "This is something we want to discuss at the [upcoming] meeting — how much we should require of this context-specific validation."
"I think that even if we can just do application-specific validation, we will have come a long way," he added.
Another question is how to enforce more rigorous validation. Here, Uhlen said he sees two main stakeholders with the power to encourage researchers to use only well-validated reagents. One is the journal community, which could in theory require researchers to submit certain forms of antibody validation data as part of any published paper.
The other is funding agencies like the National Institutes of Health. "It would be very nice to say, if you get NIH funds and you use antibodies, you have to use antibodies validated according to this and this and this," Uhlen said. "That is my vision and what I am trying to push for, but we will see."
In any case, before validation can be enforced, the community must arrive at a set of standards for antibody validation. In the Nature Methods paper, Uhlen and his co-authors put forth five "conceptual pillars" for antibody validation: genetic strategies like siRNA and gene editing; orthogonal strategies comparing antibody measurements to those made by antibody-free techniques like targeted mass spec; independent antibody strategies comparing the measurements of multiple antibodies to the same target; validation using tagged proteins that can also be captured by previously validated affinity reagents; and immunocapture combined with mass spectrometry.
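To make the orthogonal strategy concrete, the sketch below compares an antibody's readout for one target against an antibody-free measurement of the same protein across a panel of samples. The intensity values and the correlation cutoff are hypothetical; the guidelines themselves do not prescribe specific numbers or code.

```python
# Sketch of the "orthogonal strategy" pillar: an antibody's readout
# should track an antibody-independent measurement (e.g., targeted
# mass spec) of the same protein across samples. Data are invented.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length value series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Abundance of one target across eight hypothetical tissue extracts
antibody_signal = [0.2, 1.5, 3.1, 0.9, 4.2, 2.8, 0.4, 3.6]
mass_spec_signal = [0.3, 1.4, 2.9, 1.1, 4.0, 3.0, 0.5, 3.3]

r = pearson(antibody_signal, mass_spec_signal)
print(f"antibody vs. mass spec correlation: r = {r:.2f}")
if r >= 0.8:  # illustrative cutoff, not from the guidelines
    print("readouts agree; antibody passes the orthogonal check")
```

The same comparison logic underlies the independent antibody strategy, with a second antibody against the same target standing in for the mass spec measurement.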
Some antibody providers have already begun validating their reagents using these techniques. For instance, last year Abcam started using CRISPR gene editing to enable knockout validation of its reagents. The company hopes to offer several thousand antibodies validated in this manner over the next few years.
Reagents vendor Proteintech began offering knockdown validation of its antibodies in 2014. As opposed to knockout, in which the gene for the target protein is edited out of the cell's genome, knockdown validation uses methods (siRNA in Proteintech's case) to silence the gene producing the target protein.
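Because siRNA silencing is typically partial rather than complete, the expectation for knockdown validation differs from knockout: the antibody signal should fall roughly in proportion to the measured knockdown efficiency, rather than disappear outright. The sketch below encodes that check; the function name, tolerance, and values are illustrative assumptions, not Proteintech's actual procedure.

```python
# Sketch of knockdown validation: under partial siRNA silencing, the
# antibody signal should shrink in proportion to knockdown efficiency
# (e.g., measured by qPCR of the target transcript). Values invented.

def consistent_with_knockdown(control_signal, sirna_signal,
                              knockdown_efficiency, tolerance=0.2):
    """Expect sirna_signal to be close to control_signal * (1 - efficiency),
    within a fractional tolerance; residual signal well beyond that
    suggests off-target binding."""
    expected = control_signal * (1.0 - knockdown_efficiency)
    return abs(sirna_signal - expected) <= tolerance * control_signal

# e.g., qPCR shows 75% transcript knockdown, and antibody signal drops
# from 1000 to 280 arbitrary units, close to the expected 250.
print(consistent_with_knockdown(1000.0, 280.0, 0.75))  # True
```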
And Thermo Fisher has begun efforts to validate its library of roughly 50,000 antibodies according to the IWGAV guidelines. Matt Baker, director of R&D and business development, RUO antibodies, at the company, told GenomeWeb that it is starting with the antibodies most important to and most widely used by the research community and then "systematically marching through our portfolio."
Baker said that the company's customers have expressed a desire not only for better validation by antibody vendors but also for more consistency in validation processes across vendors.
"There are hundreds of antibody suppliers, and those antibody suppliers may validate in different ways, and the differences in the ways they validate is not necessarily clear to a lot of researchers," he said. "People really want to know the details of what you have done to validate a reagent."