
Want patients to trust AI in health care? Tell them humans are biased, too

Key Takeaways

  • Highlighting human biases can increase patient receptivity to AI in health care by enhancing perceptions of AI's fairness and integrity.
  • The study involved nearly 1,900 participants and showed that bias salience reduced resistance to AI-driven recommendations.

Study shows patients are more receptive to AI recommendations when they better understand the biases in human decision-making.

AI and bias: ©Metamorworks - stock.adobe.com

As artificial intelligence technology continues to advance across industries, health care remains an area where patients are hesitant to embrace AI-driven tools. While many people have grown comfortable with AI in sectors such as financial advising and customer service, health care is deeply personal, and most patients still prefer a human touch. A study by researchers at Lehigh University and Seattle University suggests that highlighting the biases in human decision-making can help patients become more receptive to AI recommendations in medicine.

Understanding bias salience

Published in the journal Computers in Human Behavior, the study explored how making the concept of bias more prominent in patients' thinking—referred to as "bias salience"—can shift perceptions of AI in health care. It turns out that when patients are reminded of the inherent biases that exist in human decision-making, they view AI as potentially offering greater fairness and integrity.

Lead researcher Rebecca J. H. Wang, an associate professor of marketing at Lehigh University, explained that bias is often viewed as a human shortcoming. "When the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced," she said in a statement.

Key findings

The study involved nearly 1,900 participants across six experiments, each designed to evaluate how patients responded to health care recommendations, such as coronary bypass surgery or skin cancer screening, when given by either a human provider or an AI-driven assistant. Some participants were primed to think about bias beforehand, for example by reviewing common cognitive biases or reflecting on personal experiences with bias in health care.

The results:

  • Participants who were reminded of potential biases in human health care rated AI as offering greater "integrity," meaning they saw it as more trustworthy and fair.
  • While most people still preferred human health care, bias salience reduced resistance to AI-driven recommendations, likely because people associated bias more strongly with human providers.
  • When bias salience was high, participants placed more value on AI’s perceived objectivity compared to the subjectivity of human providers.

Implications for the future of AI in medicine

As AI becomes increasingly integrated into health care, from diagnostics to treatment recommendations, this study suggests that addressing patient concerns about human bias could help ease their resistance to AI. For health care providers, discussing the limitations of human judgment and emphasizing the objectivity AI can offer may foster a more trusting relationship between patients and emerging technologies.

AI’s role in health care is only expected to grow, with billions of dollars in investment projected over the coming years. Researchers say developers of AI systems will need to focus on minimizing bias in training materials and on providing clear context about human biases as part of the patient experience with AI.

"By addressing patients’ concerns about AI and highlighting the limitations of human judgment, health care providers can create a more balanced and trusting relationship between patients and emerging technologies," Wang said.
