Medical Economics Journal
A more human way of thinking can overcome legitimate concerns that doctors have about artificial intelligence.
Generative artificial intelligence (AI) has shown tremendous promise in assisting physicians and other clinicians in routine tasks, but there are still key questions about its reliability. Now neuro-symbolic AI is adding a layer of insights to achieve better results that — vitally — can be trusted.
The time for neuro-symbolic AI has come because, despite the widespread excitement about the future of AI, physicians harbor legitimate concerns about the technology. Many know that most generative AI solutions today aren’t reliable, raising the risk of compromised care and missed opportunities for patients.
This reality explains why many health care leaders are leery of deploying AI even though it can relieve administrative burdens for clinicians and other caregivers, improve diagnostics and accelerate lifesaving research.
Currently, for example, more than 70% of health care organizations are pursuing generative AI capabilities, according to McKinsey. But implementation is still lagging, the consultancy found, due to the trust gap.
Similarly, an American Medical Association survey of almost 1,100 physicians released in January 2024 found that nearly two-thirds saw the advantages of using AI. But more than 40% also said they were equally excited and concerned about its potential uses.
Some major companies have even abandoned AI-powered health care efforts, at least for now, citing these trends. But we are not heading for another AI winter — if we can change how AI in health care thinks.
Neuro-symbolic AI, which combines the power of neural networks with symbolic reasoning, now promises to eliminate hallucinations, or false results generated by large language models. This technology offers a bridge to the future of intelligent health care by combining generative AI (which can detect and recreate patterns from vast amounts of data) and symbolic AI (which mirrors human reasoning through symbols and logical rules).
Some of the most promising applications of AI in health care clinics are administrative tasks, such as paperwork and clinical summaries. Coupled with smart digital tools, AI can give doctors a comprehensive picture of patients at their fingertips and provide suggestions on the spot. Hallucinations commonly occur throughout AI-generated medical summaries, however.
These hallucinations, or what researchers have called AI misinformation, happen when generative or deep learning large language models try to create content that goes beyond their training data.
That’s where symbolic AI becomes crucial. While symbolic AI has its own limitations — it struggles to make sense of unstructured data that don’t fit into learned rules — it augments generative AI by grounding models with reasoning and logic to prevent inaccurate results, training bias and other misinformation.
Together, these two components of neuro-symbolic AI solutions can reason, learn and engage in cognitive modeling to understand context and other factors when performing a task. That means more reliable results that avoid hallucinations and other pitfalls such as sycophancy bias, in which large language models tailor responses to perceived user expectations.
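To make that division of labor concrete, here is a minimal, purely illustrative sketch of the pattern: a statistical component proposes a structured fact, and a symbolic layer checks it against explicit rules before it is accepted. The function names, the rule base and the hallucinated dosage are all hypothetical stand-ins, not any vendor's actual implementation.

```python
# Toy sketch of the neuro-symbolic pattern: a "neural" proposer suggests
# structured facts, and a symbolic checker rejects any proposal that
# violates hard clinical rules. Everything here is illustrative.

def neural_propose(note: str) -> dict:
    # Stand-in for a language model: pretends to extract a drug and dose.
    # A real system would run a trained model; this fakes one bad output.
    return {"drug": "metformin", "dose_mg": 50000}  # hallucinated dose

RULES = {
    # Symbolic knowledge: hard bounds a proposal must satisfy.
    "metformin": {"max_daily_mg": 2550},
}

def symbolic_check(fact: dict) -> bool:
    """Accept the neural proposal only if it obeys the rule base."""
    rule = RULES.get(fact["drug"])
    return rule is not None and fact["dose_mg"] <= rule["max_daily_mg"]

proposal = neural_propose("Continue metformin at usual dose.")
if symbolic_check(proposal):
    print("accepted:", proposal)
else:
    print("rejected: violates dosing rules")  # hallucination caught
```

The point of the sketch is the shape, not the rule: the generative component is free to propose, but nothing it proposes reaches the clinician until the logic layer has vetted it.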
Neuro-symbolic AI solutions can unlock 80% of the world’s clinical data through contextual understanding that transforms unstructured electronic medical records and research into analytics-ready data, according to our research. They can abstract data 27,000 times faster than the manual methods commonly used in health care.
The gold standard for physicians is an AI assistant equipped with clinical reasoning skills that reflect a true comprehension of clinical data and resemble how a doctor would read and interpret the same data.
Generative AI has shown a transformative ability to gather and synthesize information like an assistant. It struggles to understand concepts that go beyond the text on the page, however, such as intent. Neuro-symbolic AI, in contrast, can deliver comprehension, not mere recognition, to prevent inaccuracy and imprecision.
Symbolic logic models can draw inferences from limited data, learn from their mistakes, and grasp context when, say, understanding whether EGFR should mean “epidermal growth factor receptor” (a gene) or “estimated glomerular filtration rate” (a kidney test).
Physicians, after all, use shorthand, paraphrase and assume domain expertise, which can be difficult to map onto an ontology rooted in vocabulary alone. Clinical reasoning requires far more than just word recognition. To be maximally useful, an AI tool must understand not just what is written but what is implied. To support real comprehension, knowledge must be modeled differently.
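The EGFR example above can be sketched in a few lines. In this hypothetical illustration, a neural extractor has already flagged the abbreviation, and a simple symbolic layer scores the surrounding context against curated cue words to pick an expansion; the cue lists and function name are invented for the sketch, and a real system would use far richer ontologies.

```python
# Minimal sketch: rule-based disambiguation of the abbreviation "EGFR"
# using surrounding context. The cue-word lists are simplified,
# hypothetical examples of symbolic clinical knowledge.

GENE = "epidermal growth factor receptor"             # oncology sense
KIDNEY_TEST = "estimated glomerular filtration rate"  # renal sense

ONCOLOGY_CUES = {"mutation", "lung", "tumor", "tyrosine", "exon", "carcinoma"}
RENAL_CUES = {"creatinine", "ml/min", "kidney", "renal", "dialysis", "ckd"}

def disambiguate_egfr(context: str) -> str:
    """Return the likely expansion of 'EGFR' given surrounding text."""
    words = set(context.lower().replace(",", " ").split())
    onco_score = len(words & ONCOLOGY_CUES)
    renal_score = len(words & RENAL_CUES)
    if onco_score > renal_score:
        return GENE
    if renal_score > onco_score:
        return KIDNEY_TEST
    return "ambiguous"  # a production system would escalate to a human

print(disambiguate_egfr("EGFR exon 19 deletion in lung carcinoma"))
print(disambiguate_egfr("EGFR 45 ml/min consistent with CKD stage 3"))
```

Note the deliberate "ambiguous" fallback: where the rules cannot decide, a trustworthy system defers rather than guesses, which is exactly the behavior that separates this approach from an unconstrained generative model.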
In the messy world of electronic medical records, neuro-symbolic AI has incredible potential across the medical field, from pharmaceutical and cancer research to diagnostics and clinical care. In precision medicine, which requires intense focus on individual patients and their data, it will drastically accelerate genomic-level matching between patient and treatment.
It allows researchers to rapidly sift through patient records for clinical trial matches, speeding up the process of bringing lifesaving interventions to market. It is already empowering diagnostic companies to rapidly analyze medical images, lab results and patient histories to reach a quicker conclusion and start treatment sooner.
Crucially, these applications enhance a physician’s relationships with patients, creating more time for face-to-face interactions, better-informed patient visits and quicker diagnoses.
“Whatever the future of health care looks like, patients need to know there is a human being on the other end helping guide their course of care,” American Medical Association President Jesse Ehrenfeld, MD, MPH, said when the January 2024 AI survey was released. “That’s essential.”
Neuro-symbolic AI brings a more human way of thinking to the clinical space. This shift will close the trust gap and fulfill AI’s tremendous potential.
Karim Galil, MD, is the co-founder and CEO of Mendel, a clinical AI platform for life sciences and health care organizations.