Ozzie Paez

Validating AI in medicine - Biobeat

Biobeat’s announcement that its cuffless blood pressure measuring devices proved accurate across sex, BMI, and skin tone is a significant achievement with broader implications for Artificial Intelligence (AI) in medicine and beyond. AI is the most compelling technology wave to enter markets and the social zeitgeist in over a decade. Countless products and services make AI claims in their marketing to distinguish themselves from competitors. Many customers are enticed by AI’s potential, yet just as many express uncertainty over what it is, what it implies, and what trade-offs it entails. That’s understandable, because most AI claims are difficult to verify.


AI’s verification challenges are ingrained in the technology and have implications for experts and novices alike. Unlike other types of software, AI systems can be well specified, designed, architected, and implemented, and yet still fail to perform as expected. The causes are often poorly curated, unrepresentative data sets rather than ordinary coding bugs. Companies building Machine Learning (ML) and Deep Learning (DL) technologies need large sets of representative data with which to train, validate, and test their systems. Problems often begin with poorly defined data ‘representativeness,’ which in areas like medicine extends to contextual variables including sex, race, and BMI. Studies that demonstrate consistent performance across these variables help validate both the AI applications and the data used to train them.
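
To make the idea concrete, here is a minimal sketch of the kind of stratified check such studies perform: comparing a model’s error against a reference measurement within each demographic subgroup rather than only in aggregate. This is not Biobeat’s method; the column names, file, and estimator outputs are hypothetical placeholders.

```python
# Minimal sketch (not Biobeat's method): checking whether a blood pressure
# estimator performs consistently across demographic subgroups.
# Column names and the input file are hypothetical placeholders.
import pandas as pd

def subgroup_error(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Mean absolute error of predicted vs. reference systolic BP per subgroup."""
    abs_err = (df["predicted_sbp"] - df["reference_sbp"]).abs()
    return abs_err.groupby(df[group_col]).mean()

# The data frame is assumed to hold paired device readings and cuff-based
# reference measurements, plus demographic labels collected under an
# approved study protocol.
df = pd.read_csv("validation_cohort.csv")  # hypothetical file
for col in ["sex", "bmi_category", "skin_tone"]:
    print(f"Mean absolute error by {col}:")
    print(subgroup_error(df, col))
```

A model that looks accurate overall can still show materially larger errors in one of these subgroups, which is exactly the failure mode that aggregate accuracy figures hide and that stratified validation is designed to expose.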


There are many barriers to developing and accessing representative health data sets. Privacy rules like those in the Health Insurance Portability and Accountability Act of 1996 (US HIPAA) and its European counterparts place strict controls on access to private patient data and expose violators to severe criminal, civil, and reputational liabilities. That is a significant barrier for developers who need large, curated, representative data sets with a clear legal and regulatory pedigree. It is why we scrutinize AI claims and look for third-party validation and operational experience. Biobeat’s announcement reflects the company’s growing capacity to train its algorithms to perform across broader populations.


It’s difficult, even for experts, to verify AI claims. I reviewed Biobeat’s data strategy and discussed its use of AI when we first engaged with the company two years ago. We were encouraged by the scope of their reported representative data but still looked for third-party studies. This peer-reviewed study offers compelling evidence that the company is thoughtfully expanding the applicability of its AI technologies and representative data across broader populations. It also helps illuminate the challenges and opportunities of leveraging AI to innovate healthcare delivery. Good show, Biobeat!

