NEW YORK (360Dx) – Google researchers have found that a deep-learning approach could help pathologists better detect cancer in patients, they recently reported on the company's research blog.
Applying deep-learning methods, they created an automated detection algorithm that detected 89 percent of tumors in microscopy images, compared with 73 percent for a pathologist who hypothetically had an unlimited amount of time to examine the pathology slides.
"Even more exciting for us was that our model generalized very well," the Google researchers wrote in the blog post.
They noted a few caveats to their results, including a less-than-perfect FROC score. FROC, or free-response receiver operating characteristic, is a metric that measures detection sensitivity as a function of the number of false positives tolerated; the algorithm's sensitivity rises as more false positives are allowed. Their algorithm, and those developed by other groups, also lacks the breadth of knowledge and experience of human pathologists, who can detect abnormalities that the algorithms were not specifically trained to find, the researchers said.
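The FROC trade-off can be made concrete with a minimal sketch. The function below is a simplified, hypothetical illustration (the names `froc_point` and `detections` are not from the paper): it computes one point on a FROC curve from a pooled list of scored lesion candidates, showing how lowering the score threshold raises sensitivity at the cost of more false positives per slide.

```python
def froc_point(detections, num_tumors, num_slides, threshold):
    """One point on a FROC curve: (sensitivity, false positives per slide)
    at a given score threshold.

    `detections` is a list of (score, is_true_positive) pairs pooled across
    slides -- a toy stand-in for lesion candidates matched to ground truth.
    """
    kept = [is_tp for score, is_tp in detections if score >= threshold]
    sensitivity = sum(kept) / num_tumors          # fraction of tumors found
    fps_per_slide = (len(kept) - sum(kept)) / num_slides  # unmatched detections
    return sensitivity, fps_per_slide

# Four candidate detections across two slides, two real tumors in total.
detections = [(0.9, True), (0.8, False), (0.7, True), (0.2, False)]

# A permissive threshold finds both tumors but admits a false positive.
print(froc_point(detections, num_tumors=2, num_slides=2, threshold=0.5))
# A strict threshold eliminates the false positive but misses a tumor.
print(froc_point(detections, num_tumors=2, num_slides=2, threshold=0.85))
```

Sweeping the threshold traces out the full curve, which is why a single FROC summary score depends on how many false positives per slide one is willing to accept.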
They further added that such algorithms should be incorporated into the diagnostic workflow in a way that complements the pathologist's workflow, to ensure the best clinical outcomes. If that is achieved, the method could "improve the efficiency and consistency of pathologists" by reducing false-negative rates, or allow pathologists to easily and accurately measure tumor size, which has been associated with disease prognosis.
The Google team described their work in the preprint of a paper that has been submitted for peer review, said a company spokesman, who declined further comment.
In it, the researchers described a method "to automatically detect and localize tumors as small as 100 pixels by 100 pixels in gigapixel microscopy images sized 100,000 pixels by 100,000 pixels."
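Because a gigapixel slide is far too large to classify in a single pass, detection models of this kind typically score fixed-size tiles and stitch the per-tile predictions into a tumor-probability heatmap. The sketch below illustrates that general idea on a toy array; the helper names (`iter_patches`, `toy_classifier`) and the brightness-based "classifier" are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def iter_patches(image, patch=100, stride=100):
    """Yield (row, col, tile) for each fixed-size tile of a 2-D image."""
    h, w = image.shape[:2]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

def toy_classifier(tile):
    """Stand-in for the real network: score = fraction of bright pixels."""
    return float((tile > 128).mean())

# Toy "slide": 400 x 400 pixels, i.e. a 4 x 4 grid of 100 x 100 tiles,
# with one tumor-sized (100 x 100 pixel) bright region.
slide = np.zeros((400, 400), dtype=np.uint8)
slide[100:200, 100:200] = 255

# Score every tile; the result localizes the "tumor" to one grid cell.
scores = {(r, c): toy_classifier(t) for r, c, t in iter_patches(slide)}
print(scores[(100, 100)])  # tile covering the bright region
print(scores[(0, 0)])      # background tile
```

In the real setting the grid is 1,000 x 1,000 tiles rather than 4 x 4, which is what makes a 100-by-100-pixel tumor findable inside a 100,000-by-100,000-pixel image.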
Building on efforts carried out by researchers at Harvard, the Massachusetts Institute of Technology, and Beth Israel Deaconess Medical Center, the Google researchers also leveraged a more recent version of a deep-learning architecture called Inception.
They said that training and evaluating the models that they developed proved challenging due to the large number of patches and the tumor class "imbalance," and required careful sampling to avoid biases. To make the most of the comparatively rare tumor patches, they also applied data augmentations.
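The class-imbalance problem and the augmentation idea can be sketched in a few lines. This is a minimal illustration under assumed names (`balanced_batch`, `augment`), not the paper's training pipeline: drawing half of each training batch from the rare tumor class prevents a trivial "everything is normal" classifier, and random flips and rotations multiply the effective number of tumor examples.

```python
import random
import numpy as np

def balanced_batch(tumor_patches, normal_patches, batch_size=8, rng=None):
    """Sample a class-balanced batch even when tumor patches are rare."""
    rng = rng or random.Random(0)
    half = batch_size // 2
    batch = rng.choices(tumor_patches, k=half) + rng.choices(normal_patches, k=half)
    labels = [1] * half + [0] * half  # 1 = tumor, 0 = normal
    return batch, labels

def augment(patch, rng=None):
    """Random flip plus 90-degree rotation: the 8 symmetries of a square tile."""
    rng = rng or random.Random(0)
    if rng.random() < 0.5:
        patch = np.fliplr(patch)
    return np.rot90(patch, k=rng.randrange(4))

# 2 tumor patches vs. 10 normal patches -- yet the batch comes out 50/50.
tumor = [np.ones((4, 4))] * 2
normal = [np.zeros((4, 4))] * 10
batch, labels = balanced_batch(tumor, normal, batch_size=8)
print(labels)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

Since tumor tissue has no preferred orientation, the augmented tiles are as valid as the originals, which is what makes this a cheap way to stretch scarce positive examples.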
For their evaluation, they said they used a metric called area under the receiver operating characteristic curve (AUC), which assesses slide-level classification, along with FROC, which assesses tumor-level detection, and applied them to datasets containing 400 slides.
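The slide-level AUC metric has a simple interpretation worth spelling out: it is the probability that a randomly chosen tumor-containing slide receives a higher score than a randomly chosen normal slide. A minimal sketch of that rank-based identity (my own illustration, not the paper's evaluation code):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    P(random positive scores above random negative), ties counting half."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two tumor slides (label 1) and two normal slides (label 0).
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # one mis-ranked pair -> 0.75
```

An AUC of 1.0 means every tumor slide outranks every normal slide; 0.5 is chance. FROC plays the complementary role at the level of individual tumors within a slide.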
According to the scientists, the method they developed "yields state-of-the-art sensitivity on the challenging task of detecting small tumors in gigapixel pathology slides, reducing the false-negative rate to a quarter of [that achieved by] a pathologist and less than half of the previous best result."
They added that their method could improve the accuracy and consistency of breast cancer diagnoses, leading to better patient outcomes. Going forward, they said, they will use larger datasets to improve the algorithm.