As AI-Based Diagnostics Proliferate, Stakeholders Strive for Regulation Without Stifling Innovation


NEW YORK – As the wave of artificial intelligence-based diagnostic tests swells and investors make big bets on the tools, attention is turning to how best to regulate such devices and strike the right balance between encouraging innovation and protecting patient health.

Of particular interest is how regulators assess in vitro diagnostic tests and platforms that leverage algorithms that continually update, as opposed to products that use stable algorithms. Regulators in the US and Europe are still working out how to evaluate IVDs with updating algorithms, while developers will need to prove that they can maintain product safety and prevent their models from drifting in unpredictable ways.

AI-based tools entering the market today are used to quickly identify aberrant cells, spot biomarker combinations linked with disease or disease risk, and conduct multimodal analyses across test results and medical records to identify patient-specific risk factors, among other applications. To date, all AI-based diagnostic tests and platforms on the market use a stable, or locked, algorithm, because locked algorithms are easier to validate and bring to market quickly. As the market evolves and more diagnostic tools use algorithms that continually update, regulators will have to work out how such products can be marketed and sold.
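
To make the regulatory distinction concrete, the sketch below contrasts a locked model, whose parameters are frozen once it is validated, with a continuously learning one that keeps adjusting to new cases after deployment. It is a minimal, hypothetical illustration in Python, not any vendor's actual product.

```python
# Illustrative sketch of the locked vs. continuously learning distinction
# discussed above; the models and data here are toys, not a real device.
import numpy as np

rng = np.random.default_rng(0)

class LockedModel:
    """Parameters are fixed at validation time and never change in the field."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def predict(self, x):
        return float(self.weights @ x)

class ContinuouslyLearningModel:
    """Parameters keep adapting to new cases via online gradient steps."""
    def __init__(self, weights, lr=0.01):
        self.weights = np.asarray(weights, dtype=float)
        self.lr = lr

    def predict(self, x):
        return float(self.weights @ x)

    def update(self, x, y):
        # One stochastic gradient step on squared error: the model a
        # regulator cleared yesterday is no longer the model running today.
        err = self.predict(x) - y
        self.weights -= self.lr * err * np.asarray(x, dtype=float)

x = rng.normal(size=3)
locked = LockedModel([0.5, -0.2, 0.1])
adaptive = ContinuouslyLearningModel([0.5, -0.2, 0.1])
adaptive.update(x, y=1.0)
print(locked.predict(x), adaptive.predict(x))  # predictions now diverge
```

A locked product changes only through a new regulatory submission; the adaptive one changes with every update call, which is precisely what raises the oversight questions described above.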

Courtney Lias, acting director for the US Food and Drug Administration's Office of In Vitro Diagnostic Devices, said that the agency recognizes that AI-enabled devices have some unique design aspects and noted that it has already granted market access to nearly 900 such devices. To keep pace with the demand for such products, the agency has assembled a roster of personnel with AI-specific expertise, including statisticians who understand the algorithms, their development, and their validation.

Among the 882 AI/ML-based medical devices that the FDA has approved or cleared, 671, or 76 percent, are used in radiology. In contrast, eight are used for clinical chemistry, six each for pathology and microbiology, and one for immunology. Those figures don't include an unknown number of laboratory-developed tests (LDTs), which the FDA intends to bring under agency oversight over the next four years despite ongoing legal challenges.

"We at FDA understand the importance of striking a reasonable balance to make sure that AI devices for medical purposes that come through are safe and effective and the benefits do outweigh the risks while also being helpful to the community to enable the promise of this new technology," she said.

As the agency evaluates diagnostic tests and tools that use algorithms that continually update, Lias said that FDA officials have had talks with developers about models that would employ such algorithms.

"If we had a developer interested in developing an AI-enabled device that is designed in that way, we would really encourage them to engage with us early to talk about how their algorithms work and how they plan to go about demonstrating that the benefits of that device outweigh the risks," she said.

That evaluation will likely include the use of predetermined change control plans (PCCPs), which developers can use to describe how they will update their product and how they will validate those changes without returning to the agency for a new authorization. The agency is finalizing the guidance it issued in draft form in April 2023, and Lias said that the final guidance is a high priority for the current fiscal year.

AI's long history in Dx development

The use of AI in medical equipment generally, and in diagnostic tools specifically, is hardly new. Paavana Sainath, senior VP and head of core lab solutions R&D engineering for Siemens Healthineers, said that the company has more than two decades of R&D involving AI-based algorithms and more than 1,000 patents for AI/ML technologies and applications. Within the company's laboratory diagnostics and core lab solutions businesses, she said, the firm is applying AI-based tools to a mix of internal efficiency applications, laboratory automation software, and clinical support algorithms used to improve the efficiency of diagnosis.

She sees near-term opportunities to apply AI-based algorithms to stratify patients by their risk for metabolic disorders and cancers. By combining biomarker-based test results with demographic factors, algorithm developers could help healthcare providers decide how to deliver targeted testing based on individual risk, she said.
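
As a rough illustration of that kind of risk stratification, the sketch below combines a biomarker measurement with demographic factors in a simple logistic score and buckets patients into tiers. The inputs, coefficients, and thresholds are invented for demonstration and bear no relation to Siemens Healthineers' actual models.

```python
# Toy risk-stratification score: a biomarker plus demographic factors,
# squashed through a logistic function and bucketed into action tiers.
import math

def risk_score(hba1c_pct, age_years, bmi):
    # Hypothetical logistic model: weighted sum of inputs mapped to (0, 1).
    # Coefficients are made up for illustration only.
    z = -8.0 + 0.9 * hba1c_pct + 0.03 * age_years + 0.08 * bmi
    return 1.0 / (1.0 + math.exp(-z))

def stratify(score):
    # Bucket the continuous score into tiers a provider could act on,
    # e.g. routing higher tiers to targeted confirmatory testing.
    if score >= 0.5:
        return "high"
    if score >= 0.2:
        return "intermediate"
    return "low"

score = risk_score(hba1c_pct=6.1, age_years=58, bmi=31.0)
print(f"score={score:.2f}, tier={stratify(score)}")
```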

"We understand the risk of complacency when you're relying on decision support systems and algorithms that seem to do the work for you, so we're trying really hard to strike that balance between automating manual tasks but not taking the human being out of the equation," she said. "So, there's a lot of oversight and checks and balances."

Mike Quick, VP of R&D for oncology and cytology at Hologic and president of the Digital Pathology Association, said that the company sees the potential for its AI-based digital cytology system to reduce pressure on the cytology workforce by improving the efficiency and accuracy of cervical cancer screening, as well as to augment the abilities of healthcare providers where cytology expertise is in short supply.

According to the World Health Organization, low- and middle-income countries have the highest rates of cervical cancer incidence and deaths and, as of 2022, about 94 percent of cervical cancer deaths occurred in those countries.

Hologic received US FDA clearance early this year for its Genius Digital Diagnostics System, which incorporates deep-learning-based AI to aid the identification of precancerous lesions and cervical cancer cells. Quick said that the system distills an image of between 70,000 and 80,000 cells down to the 30 areas most likely to be clinically relevant, improving the accuracy, speed, and availability of analysis by clinicians.
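
The triage pattern Quick describes, scoring tens of thousands of regions and surfacing only the most suspicious few for human review, can be sketched in a few lines. The random scoring function below is a stand-in for a deep-learning model; none of this reflects Hologic's actual implementation.

```python
# Score many cell regions, then surface only the top-k for human review.
import numpy as np

rng = np.random.default_rng(42)

n_cells = 75_000  # roughly the 70,000-80,000 cells cited above
k = 30            # regions shown to the reviewing cytologist

# Stand-in for per-region model outputs (probability of clinical relevance).
relevance = rng.random(n_cells)

# Indices of the k highest-scoring regions; argpartition avoids a full sort.
top_k = np.argpartition(relevance, -k)[-k:]
top_k = top_k[np.argsort(relevance[top_k])[::-1]]  # order by score, descending

print(f"Reviewing {k} of {n_cells} regions; top score {relevance[top_k[0]]:.4f}")
```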

The firm is also applying AI-based tools in its breast cancer imaging business as well as evaluating AI in diverse use cases including test validation procedures, supply chain management, and project prioritization, Quick said.

Quick said that the Digital Pathology Association is also trying to help firms navigate regulatory pathways and strike the right balance between bringing innovative technologies to market quickly and ensuring those technologies work well and minimize risk.

"We ultimately want devices, tests, and procedures that are safe and effective for those patients, and we're working together with regulators to help establish that balance," he said.

And it's not just large, multinational firms leveraging AI-based technology for diagnostics. In April, Prenosis, a firm founded 10 years ago around point-of-care testing technologies licensed from the University of Illinois, secured de novo marketing authorization for its Sepsis ImmunoScore software, which is designed to aid sepsis diagnosis and predict the risk of sepsis.

The firm has focused its algorithm development on quantitative inputs such as test results and temperature readings, although company researchers have studied the use of text-based inputs from patient medical records, said CEO and Cofounder Bobby Reddy.

Meanwhile, PreciseDx uses AI-based models in the company's LDTs to analyze pathology slide images for phenotypic patterns of invasive cancers and predict treatment outcomes and the risk of recurrence, beginning with tools for risk stratification of breast cancer patients. PreciseDx Cofounder and Chief Medical Officer Michael Donovan said that the company's products are meant to be used as adjunctive tools to help guide healthcare providers to find cancer in a tissue sample, maintain accuracy, and provide prognostic risk assessments for each patient.

As AI technologies mature, he expects that the tools brought to the market will be used to identify increasing numbers of clinically relevant features of pathology slides, elevating the field. The tools that are already available can also help to address the shortage of pathologists, meet patient needs, and expand access to pathology, he noted.

Donna Hochberg, partner and managing director for consulting firm Health Advances, said that a growing crowd of firms such as Inflammatix and MeMed have used AI/ML to develop algorithms that are incorporated into clinical tests on or coming to the market. The availability of whole-slide image analysis software, she added, has driven the adoption of digital pathology far more than slide scanning technologies alone did. While some payors have been hesitant to cover the costs of digital pathology, software that generates novel clinical content may drive use by securing payments for digital pathology practices, she said.

Pathology firms have also been inking partnerships and forming alliances that could help bring more AI-backed technologies to market. Cancer genomics companies, too, have been betting on AI/ML-based digital pathology tools to augment their clinical testing menus by identifying biomarkers in slide images.

Hochberg also sees the potential for expanded use of AI-based diagnostic tools beyond cancer into areas such as testing for infectious, CNS, autoimmune, and cardiac diseases by leveraging signatures of proteins, RNA, and other disease markers, though development has been slower for products that incorporate learning algorithms into the tests themselves. It's unclear what regulatory procedures companies need to follow every time a learning model updates its algorithm, she said.

Bringing learning models to market

Sarah Fitzgerald, US program manager for the Underwriters Laboratories company Emergo, said her company consults with medical device companies working in AI/ML, helping those firms understand how their devices will be regulated and which regulatory pathways, typically 510(k) clearance or de novo authorization for moderate-risk devices, lead to market. Since the FDA posted its PCCP guidance document a year ago, though, few firms have even approached the agency about implementing those types of plans.

"Right now, the only specific guidance document the FDA has released on that is specific to AI/ML because there is that desire to have that device reach the market and be able to continue learning once it is out in the public," Fitzgerald said.

Fitzgerald noted that every company wants to come to market quickly, and some firms have determined that they would be better off bringing a locked version of their product to market as soon as possible and submitting a PCCP as part of a future 510(k) submission. She noted that FDA officials have recommended that companies considering filing PCCPs also file a pre-submission regulatory consulting request, or Q-Submission, to receive feedback ahead of a full submission, which would add about two months to the existing regulatory process.

"There's still, I would say, some growing pains in getting these PCCPs up to where they're really as helpful as I think both the industry and the FDA want them to be," she said.

Meanwhile, regulators in Europe are tackling similar concerns. The European Commission declined to make its experts available for an interview but said in a statement that developers of AI-based diagnostic devices typically need to come to market with a well-defined, validated, and locked algorithm to ensure consistent and reproducible results while preventing unpredictable model drift that could endanger patient safety. The regulatory framework, however, allows the EC some flexibility in addressing state-of-the-art technologies, and it could consider allowing changes that a developer has outlined in predetermined plans evaluated during conformity assessment, provided the risks are managed through monitoring and surveillance, corrective actions, and human oversight.

"While continuous learning in real time could lead in principle to substantial changes of the devices, which require the device to undergo a new conformity assessment procedure, there are ongoing considerations for allowing for an update of AI algorithms within devices on the market, subject to certain conditions," the EC said in a statement.

The EC's Medical Device Coordination Group has been deliberating on how to best balance innovation and patient safety in allowing continuous learning models.
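
The post-market monitoring and surveillance that regulators describe could, in a simplified form, amount to tracking whether the distribution of a model's scores in the field drifts away from what was seen at validation. The sketch below uses the population stability index, a common analytics convention rather than any prescribed regulatory method, with an often-cited alert threshold of 0.2; the data are simulated.

```python
# Drift monitoring sketch: compare field score distribution to the
# validation baseline using the population stability index (PSI).
import numpy as np

def psi(baseline, live, bins=10):
    """Population stability index between two samples of model scores."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    observed = np.histogram(live, edges)[0] / len(live)
    expected = np.clip(expected, 1e-6, None)   # avoid log(0) on empty bins
    observed = np.clip(observed, 1e-6, None)
    return float(np.sum((observed - expected) * np.log(observed / expected)))

rng = np.random.default_rng(7)
validation_scores = rng.beta(2, 5, 10_000)   # scores seen at clearance
field_scores = rng.beta(2.6, 4, 10_000)      # shifted field population

value = psi(validation_scores, field_scores)
print(f"PSI={value:.3f}", "-> investigate drift" if value > 0.2 else "-> stable")
```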

In general, the EU's regulations on the marketing of in vitro diagnostic devices include strict requirements for products that incorporate AI, including clinical evaluations, conformity assessments, risk management, and post-marketing surveillance to ensure that algorithms maintain their safety and performance over time, the EC said. The recent AI Act is also meant to further reduce risks to health, safety, and fundamental rights, the EC added. Medical devices are categorized as high-risk under that law and, consequently, are subject to strict regulation and conformity assessments.

The AI Act, passed by the European Parliament in March, establishes a broad legal framework across industries for how AI-based technologies must be used and evaluated according to risk. Developers of applications in the high-risk category, such as those used in medical devices, must also implement quality and risk management systems, register in a public database, and participate in post-market monitoring through audits.

As the regulation of AI devices evolves, leaders in the healthcare community are eyeing the rapid advancement of AI/ML-based technologies with a mix of enthusiasm and trepidation.

The authors of an editorial published this spring in the Proceedings of the National Academy of Sciences advocated for the monitoring of and testing for biases in AI algorithms and their outputs with a goal of correcting the biases that could skew outcomes or their interpretation. The authors proposed that the National Academies of Sciences, Engineering, and Medicine establish the Strategic Council on the Responsible Use of Artificial Intelligence in Science to coordinate the scientific community and provide updated guidance on appropriate uses of AI.

Former FDA Commissioner Scott Gottlieb also recently wrote in JAMA Health Forum that current laws and regulations could eventually prove unworkable for the evaluation of AI-based medical devices. He said that passage of the Verifying Accurate Leading-edge IVCT Development (VALID) Act could help the FDA address the rapid cycles of innovation and constant modification that characterize medical AI as new information becomes available. The bill would create a risk-based framework for in vitro clinical tests (IVCTs), a category that would encompass tests already under FDA oversight as well as laboratory-developed tests.

"The FDA's traditional regulatory approach, which depends on the agency's capacity to meticulously examine a product's construction, might prove infeasible in this context," he said.