Stanford researchers develop AI tool to analyze doctors’ notes to save time and improve care
Physicians and researchers at Stanford Medicine have harnessed artificial intelligence to analyze vast troves of doctors’ notes in electronic health records (EHRs). Their findings, published in Pediatrics on Dec. 19, demonstrate how AI tools can identify gaps in treatment, such as inconsistent follow-up care, and provide actionable insights for improving medical management.
The AI tool, based on large language models, was developed to analyze whether children with attention deficit hyperactivity disorder (ADHD) received appropriate follow-up care after starting new medications.
A faster path to answers
Reviewing medical charts for patterns is labor-intensive and time-consuming. While structured data like lab results are easily analyzed by computers, 80% of medical record content lies in unstructured notes written by physicians. This freeform information often escapes systematic analysis, requiring manual review.
Stanford’s research team trained the AI tool to read and interpret these notes, focusing on whether pediatricians asked families about medication side effects within the first three months of a new ADHD prescription. The model analyzed over 15,000 notes from 1,201 pediatric patients, a task that would have taken a human more than seven months of full-time work.
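The screening task described above can be sketched in miniature. The study used a large language model to read each note; in this illustrative stand-in, a simple keyword heuristic plays the model's role so the example runs without any external service. The cue words, note texts, and function name are invented for illustration, not drawn from the study.

```python
# Minimal sketch of the note-screening task: flag notes that appear
# to ask about medication side effects, then aggregate the rate.
# A keyword heuristic stands in for the study's language model.

SIDE_EFFECT_CUES = ("side effect", "tolerating", "appetite", "sleep")

def note_mentions_side_effect_inquiry(note: str) -> bool:
    """Return True if the note appears to discuss medication side effects."""
    text = note.lower()
    return any(cue in text for cue in SIDE_EFFECT_CUES)

# Invented example notes.
notes = [
    "Started stimulant last month; parents report no change in appetite or sleep.",
    "Phone call: refill requested. No concerns discussed.",
]

flags = [note_mentions_side_effect_inquiry(n) for n in notes]
# Aggregating these flags across thousands of notes is what surfaces
# practice-level patterns, as the study did at scale.
rate = sum(flags) / len(flags)
```

Classifying each note independently and then aggregating is what makes the approach scale: the per-note judgment is cheap for a model, while the aggregate reveals patterns no single chart review would show.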
“This model enables us to identify some gaps in ADHD management,” said lead author Yair Bannett, M.D., assistant professor of pediatrics, in a statement.
The AI analysis uncovered trends that might otherwise have gone unnoticed. For example, pediatric practices varied widely in how often they asked about side effects during phone follow-ups. Additionally, doctors were less likely to inquire about side effects of non-stimulant ADHD medications compared to stimulant drugs, a discrepancy the team attributed to physicians’ greater familiarity with managing stimulants.
“This is something you would never detect without deploying this model on thousands of notes,” Bannett said.
The findings highlight both the strengths and limitations of AI. While the tool efficiently identified patterns, interpreting their significance required input from pediatricians.
The researchers acknowledged limitations in their study, including incomplete documentation in EHRs and the possibility of conversations about side effects happening outside recorded notes. The AI also occasionally misclassified notes on medications unrelated to ADHD.
Despite these challenges, Bannett is optimistic about AI’s potential. He envisions tools that not only detect trends but also assist with personalized medical decisions, such as predicting side effects based on a patient’s unique characteristics.
However, Bannett cautioned against overreliance on AI, emphasizing the importance of addressing biases in health care data and applying ethical considerations.
“With the right safeguards, AI can help doctors better manage their patients,” he said. “It allows us to leverage the knowledge from large populations while focusing on individual care.”
The study underscores the transformative power of AI in health care, particularly for tasks involving extensive data analysis. While AI will not replace human judgment, it has the potential to augment clinical decision-making, making personalized, high-quality care more accessible.
As Bannett and his colleagues note, the key lies in guiding AI’s development to ensure it complements rather than compromises the practice of medicine.