With BARDA Backing, RAIsonance Hopes to Prove Coughs Can Show COVID-19/Flu Status


NEW YORK – Machine learning firm RAIsonance is out to prove it can distinguish who has influenza, COVID-19, or neither by the sound of their cough recorded on a smartphone.

Last week, the Colorado-based firm secured a $749,000 contract through the US Department of Health and Human Services' Biomedical Advanced Research and Development Authority (BARDA) for the development and testing of its AudibleHealth AI subsidiary's AudibleHealth Dx software, which the company hopes to eventually distribute worldwide through app stores for rapid identification of a suspected infection. Company officials conducted a validation study for the software last summer and will use the BARDA money for continued research that could support a submission for US Food and Drug Administration clearance under the agency's de novo regulatory pathway.

The firm makes an attractive pitch: Use any internet-connected smartphone to find out in two minutes what may be making you sick without leaving the house or digging kits out of a closet and hoping they haven't expired.

"Will anybody want to be able to use a diagnostic test on their own in two minutes without having anything jammed up their noses?" CEO Kitty Kolding said. "We know that this is a viable product, and we know that it will make a difference in … our ability to keep track of who's sick and where."

The tests will cost about $10 each.

RAIsonance officials hope to prove in this year's studies that their machine learning model detects characteristics distinct to adults and children who have a specific disease using the sounds of six to 10 intentional coughs. With the BARDA funding, the firm plans to collect additional cough samples from people with confirmed COVID-19 or influenza infections to further train the machine learning model as well as further test the accuracy of the test in comparison with RT-PCR.

Kolding said COVID-19, influenza, and active tuberculosis infections — another target in the firm's pipeline — each produce an auditory signal that is characteristic of the physiological changes caused by those diseases in the lungs, throat, vocal cords, and sinuses.

"The idea is that there's a very specific signature unique to each of these diseases and we teach our models to recognize that signature," she said.

Hearing COVID

RAIsonance's test runs through an app downloaded onto a smartphone. According to Kolding, the AudibleHealth Dx software temporarily disables the sound processing installed by phone makers and the audio compression added by carriers, giving the firm consistent results across the hundreds of device-and-software combinations tested to date. The recordings are sent to cloud servers, where the firm applies its own sound processing, determines which recordings have acceptable sound quality, and converts those into spectrograms for recognition of disease features.
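RAIsonance's actual processing is proprietary, but the generic recording-to-spectrogram step it describes can be sketched with standard tools. The following is a minimal illustration, assuming a 16 kHz mono recording and using SciPy's short-time Fourier transform as a stand-in; the synthetic "cough" signal and all parameters are hypothetical.

```python
import numpy as np
from scipy.signal import spectrogram

# Stand-in for a one-second mono cough recording sampled at 16 kHz.
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
cough = rng.normal(size=t.size) * np.exp(-5 * t)  # noise burst with decaying envelope

# Short-time Fourier transform: 512-sample windows with 50 percent overlap.
freqs, frames, sxx = spectrogram(cough, fs=fs, nperseg=512, noverlap=256)

# Log-scale the power so quieter components remain visible to a
# vision-style model consuming the spectrogram as an image.
log_spec = 10 * np.log10(sxx + 1e-10)
print(log_spec.shape)  # (frequency bins, time frames)
```

The two-dimensional array is what an image-classification-style model would then scan for disease features.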

In its 514-person validation study in 2022, participants seeking COVID-19 testing arrived at testing sites and used phones with RAIsonance's app installed to provide cough recordings. They also submitted nasal swabs for a syndromic 15-target RT-PCR test from BioMérieux's BioFire Diagnostics. Comparing the two methods, the firm found that its cough test had 84 percent positive agreement and 85 percent negative agreement with the PCR results.
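Positive and negative percent agreement are the standard way such head-to-head comparisons against a reference test are reported. As a minimal sketch, the calculation looks like the following; the 2x2 counts here are hypothetical, chosen only to reproduce the reported 84/85 percent figures, and are not the study's actual table.

```python
def percent_agreement(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Agreement of an index test with a reference method (here, RT-PCR).

    tp/fn: reference-positive cases the index test did/did not also call positive.
    tn/fp: reference-negative cases the index test did/did not also call negative.
    """
    ppa = tp / (tp + fn)  # positive percent agreement (sensitivity vs. reference)
    npa = tn / (tn + fp)  # negative percent agreement (specificity vs. reference)
    return ppa, npa

# Hypothetical counts for illustration only.
ppa, npa = percent_agreement(tp=84, fn=16, tn=85, fp=15)
print(f"PPA = {ppa:.0%}, NPA = {npa:.0%}")  # PPA = 84%, NPA = 85%
```

Agreement is reported rather than sensitivity/specificity because RT-PCR is a comparator, not a perfect gold standard.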

Since the early days of the pandemic, various research groups have reported progress in developing machine learning-supported models for COVID-19 detection using the sound of a patient's cough, often using smartphone-based recordings. One Luxembourg-based team, for example, analyzed acoustic features of coughs and found that the dry coughs associated with COVID-19 and their physiological effects could explain an observed shift in audio frequencies associated with COVID-19.

In a review article published last year in Diagnostics, researchers found overall support for the use of cough as a biomarker for health status, including the use of features such as cough duration and peak frequency to distinguish between healthy and COVID-19-positive patients.

But they also described limitations in existing methods and difficulties in comparing the performance of different groups' machine learning models. They noted a shortage of studies testing COVID-19 classification models for possible biases, as well as few studies evaluating how well those models distinguish coughs caused by COVID-19 from coughs caused by other respiratory illnesses. Their review, based on 35 papers, found that almost all the models examined were trained on crowdsourced recordings prone to inconsistent and inaccurate labels and metadata.

One researcher who has studied cough-based COVID-19 testing isn't convinced of its clinical utility. Harry Coppock of the Alan Turing Institute and Imperial College London was one of 25 authors of a preprint article posted in March on arXiv that determined that audio-based methods of detecting COVID-19 had no advantage over making a diagnosis based on a patient's symptoms. He said in an interview that many cough-based screening tests developed using AI have been good at leveraging confounders to appear accurate when tested on similar data rather than identifying a true COVID-19-specific acoustic biomarker.

Because many contributors to those crowdsourced audio banks knew their COVID-19 status when they made the recordings, an AI model developed on that audio could, for example, pick up the acoustic signature of someone who has isolated themselves in a small, humid room. A model that incorporated those non-disease sound features would produce biased results if it were tested on other entries from the same or similar datasets. Coppock and his coauthors found that the accuracy of an AI-trained screening tool can decline from 90 percent to 60 percent once appropriate epidemiological and statistical controls are applied, and he noted that 50 percent accuracy amounts to random classification.
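The confounding effect Coppock describes can be demonstrated with a toy simulation. In this entirely hypothetical sketch, a "model" keys on background noise rather than any disease signal: it looks highly accurate while the recording environment tracks infection status, and collapses to chance once that link is broken.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def make_dataset(confounded: bool):
    # y: 1 = COVID-positive, 0 = negative (synthetic labels)
    y = rng.integers(0, 2, size=n)
    if confounded:
        # Recording environment tracks the label: positives record in
        # quiet isolation rooms, negatives in noisier everyday settings.
        noise_level = np.where(y == 1, 0.2, 1.0) + rng.normal(0, 0.1, size=n)
    else:
        # Environment independent of disease status.
        noise_level = rng.uniform(0.1, 1.1, size=n)
    return noise_level, y

def classify(noise_level):
    # A "model" that learned the confounder: quiet recording -> positive.
    return (noise_level < 0.6).astype(int)

accs = {}
for confounded in (True, False):
    x, y = make_dataset(confounded)
    accs[confounded] = (classify(x) == y).mean()
    print(f"confounded={confounded}: accuracy={accs[confounded]:.2f}")
```

With the confounder present, accuracy is near perfect; with it removed, accuracy falls to roughly 50 percent, which is random classification for a balanced binary task.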

"We actually have some pretty strong evidence to suggest that the remaining 10 percent is actually due to unmeasured confounders," he said. "Let's say I did have a sore throat but I didn't tick 'I have a sore throat' in the survey. We can't control for it through standard statistical practice."

Coppock said machine learning has the potential to screen for diseases using audio recordings, but he cautioned against a rush to use those tools without sufficient controls. To identify whether those methods are detecting features specific to the target disease, he wants to know whether each sound used as a biomarker is linked to the disease by a biologically plausible explanation and whether those sounds change during serial testing.

Regardless, Kolding said that reports about unreliable cough-based disease models show problems only with the methods examined, not that the overall concept is an unsolvable problem.

"We believe it is both solvable and that we have solved it," she said. "Obviously, we need to prove it, but I reject these broad generalizations."

She declined to disclose details about the company's sound processing but said its sound engineers have worked exhaustively to balance suppressing background and ambient sound artifacts against retaining the core cough sounds needed for analysis. The firm also collected more than 2,000 cough recordings during COVID-19 testing in a park, within a clinic, and at a small mobile testing site.

Kolding said RAIsonance has trained its machine learning models on a mix of its own recordings and cough audio from databases that had clinical validation of submitters' COVID-19 status along with demographic information. She added that the company tried to focus training on recordings made in its own studies using its own sound processing.

RAIsonance is also eager to conduct serial testing to determine how soon after exposure the firm can identify the disease, when that occurs relative to symptom onset, and how long the test registers positive results, she said.

Establishing regulatory, payment paths

Founded in March 2020, RAIsonance received early funding of $256,000 through a Small Business Innovation Research grant from the National Science Foundation for the development of a COVID-19 cough classifier using artificial intelligence. In addition to the AudibleHealth Dx software under development, the firm's wellness-focused Guardian AIngel business offers subscription-based apps that use cough audio to help people track their respiratory health after they stop smoking or vaping, though those products are not medical devices. The firm has said the wellness business is also testing software to help long COVID sufferers track changes in their lung conditions through cough audio.

The firm has also secured investments totaling "in the sub-$10 million range" from private investors to date, which it has used to progress its test from concept to clinical trials in about 19 months, Kolding said.

"It's unlikely how much we've done and how far we've gotten for the amount of time that we've had and the very limited amount of money that we've raised," she said.

Late last year RAIsonance applied to the US Food and Drug Administration for Emergency Use Authorization for its COVID-19 test. The agency, however, has pivoted away from EUAs and has asked test makers, instead, to seek full market authorization for their COVID tests. In the case of RAIsonance, it may file a de novo submission. The company also has recently begun discussions with Health Canada about its regulatory submission there.

Kolding said that securing the BARDA contract opened opportunities to perform a larger study on COVID-19 identification and include the firm's influenza test in a future de novo filing. It also has given RAIsonance additional credibility, she said.

"Our ambition is to complete the full study as soon as we can this year — both flu and COVID — get new models set up and trained, add them to our validation, and get our results," Kolding said.

She said the firm wants to finish testing through the BARDA contract by the end of 2023 and, depending on the test's performance, RAIsonance may request FDA consideration for a breakthrough device designation. While the firm plans to market its test in North America initially, it has other commercial partnerships-in-waiting in Australia, India, Latin America, South Africa, and the United Kingdom.

Kolding said RAIsonance can help bring testing to places without existing testing infrastructure, especially areas without widespread access to COVID-19 vaccines and therapeutics. Because the results are digital, the company can give public health authorities information on disease outbreaks, and she noted that the firm is compelled to give the US Centers for Disease Control and Prevention some de-identified, generalized location and demographic information of people with positive tests.

"Our ability to reach virtually anyone anywhere, no matter what country you live in, is a hugely important part of our mission around impacting global health," she said.

Company officials are trying to secure meetings with CMS to describe their platform, and Kolding expects the firm will have to blaze a new trail for securing reimbursement from insurers. For now, she said the business model is based on a cost-per-test paid by individual users and institutions.

If the company can prove its AudibleHealth platform works and gets the go-ahead from regulators, Kolding said the AI-backed platform could help expand access to a tool to help identify the causes of disease in individuals and populations while avoiding the physical waste and supply chain risks of other types of tests.

"I don't think PCR tests are going to go away. I don't think antigen tests are going away," she said. "But I do think we have a prominent place in what's happening with diagnostic testing now."