GenAI is making stuff up, and it will harm us and our health

Key Takeaways

  • GenAI's potential in healthcare is tempered by risks, notably its tendency to produce inaccurate data or "hallucinations."
  • AI hallucinations arise from GenAI's pattern recognition without true comprehension, risking misguided medical decisions.

AI in health care can be dangerous: ©phonlamai photo – stock.adobe.com

GenAI has quickly become one of the most influential topics across industries. Its appeal in health care is undeniable: it promises to enhance diagnostics, streamline workflows, and bolster patient interactions. However, as with any technology, the potential risks must be clearly understood before physicians bring these tools into their practices. A major flaw in GenAI is its tendency to make up data that is not accurate, a failure known as "hallucinations." These hallucinations cannot be completely eliminated due to the nature of GenAI, and they have obvious implications in every industry, not least health care. For physicians, integrating GenAI without fully grasping its limitations could pose serious risks to patient care and professional integrity.

Why do AI hallucinations occur?

AI hallucinations occur when GenAI tools, designed to predict and generate human-like text, produce content that is incorrect or simply made up. These models work by recognizing patterns in large data sets and predicting what comes next, but they do not have true comprehension. This means that while they can generate answers that seem plausible, there’s no guarantee that those answers are correct or based on evidence. For physicians, this can be particularly concerning. A suggestion generated by an AI tool might appear well-reasoned and reliable at first glance, but without rigorous scrutiny, it can lead to misguided decisions. In medical practice, where patient safety is paramount, even small inaccuracies can escalate into serious issues.
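
To make this failure mode concrete, consider a deliberately toy sketch in Python. It is an illustration only, not how production models are built; the two-sentence "training corpus" and the simple next-word predictor are assumptions chosen to make the pattern-stitching visible.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a radically simplified version of the same
# statistical idea behind GenAI text generation. It learns which word
# tends to follow which; it has no notion of whether a sentence is true.
corpus = "metformin treats diabetes . lisinopril treats hypertension .".split()

# Record every word-to-next-word transition seen in the training text.
transitions = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word].append(nxt)

def generate(start: str, length: int = 3) -> str:
    """Continue `start` with statistically plausible next words."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Roughly half the time this prints "metformin treats hypertension .":
# a fluent, plausible-looking claim that appears nowhere in the training
# data and is medically wrong. That is a hallucination in miniature.
for _ in range(5):
    print(generate("metformin"))
```

Production models are vastly larger and more fluent, but the objective is the same in spirit: predict a plausible continuation, not a verified fact.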

Several factors contribute to AI-generated hallucinations. GenAI relies on the data it has been trained on, which can range widely in quality and reliability. This means that if unreliable or non-peer-reviewed sources are included in the training data, the model might output flawed medical information. While GenAI is excellent at identifying correlations, it does not possess the clinical judgment needed to distinguish between coincidence and causation. This can lead to recommendations that seem logical on the surface but fail to stand up to clinical scrutiny. Although GenAI can parse vast amounts of medical literature, it lacks the nuanced understanding that a trained physician brings to patient care. The result is content that can misrepresent complex medical relationships or miss critical context.

Are we OK with solutions that are flawed when our health is at stake?

For physicians who own their practices, the stakes are high. Unlike larger health care systems, which may have more robust review and oversight mechanisms, smaller practices are especially vulnerable when adopting new technologies without comprehensive evaluation. The primary concern is clear: patient safety. If an AI tool suggests an incorrect treatment plan, makes an erroneous diagnosis, or transcribes fictional data into the patient record, the repercussions could be significant. Trust, once compromised, is difficult to regain, and the risk to patients goes directly against the oath physicians take to "first, do no harm."

The question of liability in AI-influenced medical errors is still evolving. If an AI’s output contributes to patient harm, the physician’s reliance on that output could be considered negligent. As the legal landscape continues to develop, physicians must navigate this uncertain terrain carefully. Independent practices often thrive on personal connection and local reputation. A single AI-related misjudgment could have lasting impacts, affecting patient trust and the practice’s standing in the community. Medical practices must adhere to regulations like HIPAA and FDA standards for medical software. GenAI-generated output that breaches patient privacy or suggests off-label treatments could lead to compliance issues and potential fines.

Where do we stand with AI as a whole?

While GenAI may not earn the central place in the future of health care that the hype suggests, there are areas where it can add value, so long as it is carefully managed. For physician-owned practices, taking proactive measures can mitigate the risks while exploring the technology's benefits. Organizations should limit GenAI's role to areas that don't directly impact clinical outcomes, such as drafting patient education materials or handling administrative tasks. Any GenAI-driven suggestion related to patient care needs to be thoroughly reviewed by experienced medical professionals; one way to make that review step concrete is sketched below. A well-informed team is better equipped to use GenAI responsibly and to recognize when outputs should be questioned. Every AI program should be verified for accuracy, and AI is best placed on the back end of health care, cutting down on repetitive tasks that can often be handled by non-generative AI solutions.
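
As one illustration of that review step, here is a minimal, hypothetical sketch; the class names and workflow (DraftNote, ReviewQueue) are assumptions for illustration, not any EHR vendor's API. The point is structural: GenAI output cannot reach the patient record without a named clinician's sign-off.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical human sign-off gate for GenAI output. Illustrative only.

@dataclass
class DraftNote:
    patient_id: str
    text: str
    source: str = "genai"              # provenance is always recorded
    approved_by: Optional[str] = None  # empty until a clinician signs off

@dataclass
class ReviewQueue:
    pending: List[DraftNote] = field(default_factory=list)
    committed: List[DraftNote] = field(default_factory=list)

    def submit(self, note: DraftNote) -> None:
        # GenAI output can only enter the review queue, never the record.
        self.pending.append(note)

    def sign_off(self, note: DraftNote, clinician: str) -> None:
        # Only a named clinician can move a draft into the committed record.
        note.approved_by = clinician
        self.pending.remove(note)
        self.committed.append(note)

queue = ReviewQueue()
draft = DraftNote("pt-001", "Draft patient-education handout on statins.")
queue.submit(draft)
# ...a clinician reads, edits, and explicitly approves the draft...
queue.sign_off(draft, clinician="Dr. Rivera")
assert all(note.approved_by for note in queue.committed)
```

The gate applies no matter how convincing a draft looks, which is exactly the point: review is enforced by the workflow rather than left to habit.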

Generative AI is neither inherently good nor bad; it is a tool, and like any tool, its impact depends on how it is used. For physician-owned practices, adopting GenAI means weighing its potential benefits against its limitations and risks. While it can offer support in specific scenarios, physicians should always maintain the critical oversight that ensures patient safety and high standards of care. The most effective use of GenAI in medicine will be as a supplement, not a replacement. Physicians' expertise, judgment, and intuition remain irreplaceable, so we should not expect, or hope, that robots will take over in the doctor's office.

Sarah M. Worthy is the CEO of DoorSpace.
