Stanford study finds AI support improves physician decision-making in complex cases.
Artificial intelligence (AI)-powered chatbots are proving to be more than just diagnostic aids — they’re also helping physicians make better treatment decisions, according to a new study from Stanford Medicine, published in Nature Medicine on February 5.
The study, led by Jonathan H. Chen, MD, PhD, assistant professor of medicine at Stanford, found that large language model (LLM) chatbots outperformed physicians who relied solely on internet searches and medical references. When physicians had access to chatbots, however, their performance was roughly equal to that of the AI alone, rather than showing the synergistic advantage many expected from pairing human expertise with AI assistance.
“For years I’ve said that, when combined, human plus computer is going to do better than either one by itself,” Chen said. “I think this study challenges us to think about that more critically and ask ourselves, ‘What is a computer good at? What is a human good at?’ We may need to rethink where we use and combine those skills, and for which tasks we recruit AI.”
Unlike past studies, this research examined “clinical management reasoning” — the nuanced decision-making required after a diagnosis is made, such as when to stop blood thinners ahead of surgery or how to adjust a treatment plan based on a patient’s drug history.
To evaluate performance, researchers designed a trial with three groups: an AI chatbot alone, 46 physicians with chatbot support, and 46 physicians using only traditional online medical resources. The participants were given five de-identified patient cases and asked to provide written responses explaining their clinical decisions. A panel of board-certified physicians then scored the responses against a rubric designed to assess medical judgment.
Results showed that physicians without AI assistance performed worse than both the chatbot alone and the physician-chatbot teams, while the physician-chatbot teams scored about the same as the chatbot by itself. These findings suggest that AI support can sharpen clinical judgment and encourage a more structured approach to clinical decision-making, but they do not make chatbots a replacement for physicians.
“This doesn’t mean patients should skip the doctor and go straight to chatbots. Don’t do that,” Chen said. “There’s a lot of good information out there, but there’s also bad information. The skill we all have to develop is discerning what’s credible and what’s not right. That’s more important now than ever.”
As AI evolves, studies like this highlight its potential to support — not replace — physicians. By integrating AI into clinical workflows, health care professionals can enhance decision-making and improve patient outcomes.