Study warns that artificial intelligence could increase legal risk and burnout if health systems fail to support doctors.
Artificial intelligence (AI) was supposed to ease the burden on physicians — not add to it. But without proper legal protections and support structures, it may be doing just that, according to a new peer-reviewed brief published Friday in JAMA Health Forum.
The study, authored by researchers at Johns Hopkins University and the University of Texas at Austin, argues that assistive AI — while designed to help physicians diagnose, manage, and treat patients — could actually increase liability risk and emotional strain on clinicians. And unless health systems and lawmakers act, the consequences could include higher rates of burnout and medical errors.
“AI was meant to ease the burden, but instead, it’s shifting liability onto physicians — forcing them to flawlessly interpret technology even its creators can’t fully explain,” said Shefali Patil, PhD, associate professor of management at UT Austin’s McCombs School of Business and visiting faculty at the Johns Hopkins Carey Business School, in a university press release.
In other words, when AI-assisted tools get it wrong, it’s the physician who remains on the hook.
The authors warn that this dynamic — where physicians are expected to rely on decision-support tools they may not fully understand, while still bearing full accountability — creates a “no-win” scenario for clinicians. That’s especially true as machine learning algorithms become more opaque, making it harder to explain or justify how the system reached a particular recommendation.
“Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft — while they’re flying it,” said co-author Christopher Myers, PhD, associate professor and faculty director at the Carey Business School.
Rather than putting the onus on individual physicians, the authors urge health care organizations to take a more systemic approach. That includes building internal support systems, offering training and feedback loops, and creating a culture that supports thoughtful calibration between clinical judgment and AI input. In short: help physicians understand when to trust the tech — and when to question it.
The study also calls for legal reforms that clarify who is responsible when an AI tool contributes to a poor outcome. At present, the authors argue, the legal landscape hasn’t kept pace with the growing presence of AI in health care settings.
“Proper laws and regulations are not yet in place to support physicians as they make AI-guided decisions,” the press release stated.
For practicing physicians — particularly those in primary care or independent practice — the stakes are high. AI-driven tools are already being integrated into electronic health records (EHRs), diagnostic platforms, and triage systems. While these tools promise efficiency and accuracy, they also introduce new uncertainties about how much weight to give their recommendations, and who bears responsibility when those recommendations are wrong.