
Developers of artificial intelligence models slowly making their way into medicine have long parried ethical concerns with assertions that clinical staff must review tech’s suggestions before they are acted on. That “human in the loop” is meant to be a backstop preventing potential medical errors conjured up by a flawed algorithm from harming patients. 

And yet, industry experts warn that there’s no standard way to keep humans in the loop, giving technology vendors significant latitude to market their AI-powered products as helpful professional tools rather than as autonomous decision-makers. 


Health record giant Epic is piloting a generative AI feature that drafts responses to patients’ email queries, but clinical staff must review the suggestions before they are sent out, the company has said. A flurry of AI-guided ambient documentation startups can rapidly transcribe and summarize patient visits and populate patients’ medical charts, but they require doctors and nurses to OK the generated entries first. Products predicting health risks — like overdose or sepsis — show up as flags in medical record software, and it’s up to clinicians to act on them.
