BEVERLY HILLS, Calif. — Artificial intelligence is increasingly infused into many aspects of health care, from transcribing patient visits to detecting cancers and deciphering histology slides. While AI has the potential to improve the drug discovery process and help doctors be more empathetic toward patients, it can also perpetuate bias and be used to deny critical care to those who need it most. Experts have also cautioned against using tools like generative AI for initial diagnosis.
Brian Anderson is the CEO of the recently launched Coalition for Health AI, a nonprofit established to help create what he calls the “guidelines and guardrails for responsible AI in health.” CHAI, which is made up of academic and industry partners, wants to set up quality assurance labs to test the safety of health care AI products. He hopes to build public trust in AI and empower patients and providers to have more informed conversations about algorithms in medicine. On Wednesday, CHAI shared its “Draft Responsible Health AI Framework” for public review.
But lawmakers have raised concerns about whether CHAI, whose members include AI heavyweights like Microsoft and Google, amounts to the AI industry policing itself, and other experts have backed alternative AI regulatory frameworks that are more localized.