As leaders across federal agencies swiftly advance regulations for AI in health care, one proposal now seems too big to fail.
That proposal calls for AI assurance laboratories — places where developers can build and test AI models against standard criteria defined jointly with regulators.
Many of the biggest names and organizations in health AI have embraced the concept, which was described on stage at the annual Office of the National Coordinator for Health Information Technology meeting, published in a prominent JAMA Special Communication, and featured in an exclusive STAT report.

The proposal is being advanced by leaders of the Coalition for Health AI (CHAI) with strong support from two top regulators — the National Coordinator for Health IT and the director of the Digital Health Center of Excellence at the Food and Drug Administration. It responds to President Biden's recent executive order, which calls for the development of AI assurance infrastructure. The JAMA communication ends with a request for funding a "small number of assurance labs that experiment with these diverse approaches and gather evidence that the creation of such labs can meet the goals laid out in the Executive Order."