Developers of artificial intelligence models slowly making their way into medicine have long parried ethical concerns with assertions that clinical staff must review the technology's suggestions before they are acted on. That "human in the loop" is meant to be a backstop, preventing potential medical errors conjured up by a flawed algorithm from harming patients.
And yet, industry experts warn that there’s no standard way to keep humans in the loop, giving technology vendors significant latitude to market their AI-powered products as helpful professional tools rather than as autonomous decision-makers.
Health record giant Epic is piloting a generative AI feature that drafts responses to patients' email queries, but clinical staff must review the suggestions before they are sent out, the company has said. A flurry of AI-guided ambient documentation startups can rapidly transcribe and summarize patient visits and populate patients' medical charts, but they require doctors and nurses to OK the generated entries first. Products predicting health risks, like overdose or sepsis, show up as flags in medical record software, and it's up to clinicians to act on them.