Guidance takes a pragmatic approach to managing AI systems in clinical practice
As artificial intelligence becomes increasingly prevalent in health care, organizations and clinicians must adopt measures to ensure its safe implementation in real-world settings, according to Dean Sittig, PhD, of UTHealth Houston, and Hardeep Singh, MD, MPH, of Baylor College of Medicine. They authored guidelines that offer a pragmatic approach to managing AI systems in clinical practice. The guidelines were published in the Journal of the American Medical Association.
“We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings,” Sittig said in a statement. “It is a tool that has the potential to revolutionize medical care, but without safeguards, AI could generate false or misleading outputs that could harm patients.”
The recommendations are based on expert opinion, literature reviews, and lessons learned from the safe use of health IT. Sittig and Singh emphasize the importance of robust governance, rigorous testing, and clinician training to ensure AI systems enhance safety and outcomes without introducing new risks.
“Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI,” Singh said in a statement. “All health care delivery organizations should check out these recommendations and start proactively preparing for AI now.”
Key recommendations
Sittig and Singh’s framework includes several critical steps for health care organizations.
The authors stressed the importance of collaboration between health care providers, AI developers, and electronic health record vendors to protect patients and ensure AI’s safe integration into clinical care.
“By working together, we can build trust and promote the safe adoption of AI in health care,” Sittig said.