In the raging debate about how to regulate artificial intelligence in health care, “there seem to be two camps that are emerging,” says Raj Ratwani, vice president for scientific affairs at MedStar Health.
On one side is the heavyweight Coalition for Health AI, now home to over 4,000 health care organizations looking to be involved in the AI regulation conversation, including even the Food and Drug Administration. CHAI is a proponent of the assurance lab network model: the idea that several labs set up across the country can test AI developers’ algorithms on a pool of patient data and give a stamp of approval that an algorithm meets certain safety and quality standards.
The other camp thinks the assurance lab model has an equity problem. Led by organizations like the Health AI Partnership, they say validation must be done at the local level to ensure that an algorithm works as intended for a given health system’s patient population. That micro-level work also has to be supported by substantial resources so that rural and other under-resourced hospitals don’t miss out on what AI has to offer. Brian Anderson, CEO of CHAI, has acknowledged that local validation is necessary in addition to assurance labs.