Even when the AI provided the same advice as a human, people still didn’t like it
Artificial intelligence is advancing rapidly in health care, from diagnostic tools to administrative support. But when it comes to moral decision-making, new research suggests that the public may be reluctant to trust artificial moral advisors (AMAs). A study led by the University of Kent’s School of Psychology, published in Cognition, found that people consistently preferred human advisors over AI when facing ethical dilemmas, even when the advice given was identical.
AMAs are being developed to provide moral guidance based on established ethical theories and principles. However, the research revealed a deep-seated skepticism toward AI’s ability to make high-stakes moral decisions, with concerns that these systems lack human experience and genuine understanding.
“Trust in moral AI isn’t just about accuracy or consistency—it’s about aligning with human values and expectations,” said Jim Everett, who led the study at Kent. “Our research highlights a critical challenge for the adoption of AMAs and how to design systems that people truly trust.”
The study also found that AI advisors faced even greater skepticism when they applied utilitarian principles—decisions aimed at maximizing overall good outcomes. Participants trusted both human and AI advisors more when they adhered to moral rules rather than purely outcome-based reasoning, especially in cases involving direct harm.
Even when participants agreed with the AI’s advice, they anticipated disagreeing with AI in the future, highlighting an enduring resistance to machine-led ethical guidance. As AI systems continue to evolve and integrate into sectors like health care and law, the study underscores the importance of addressing public trust in AI’s moral reasoning.
For physicians, this research raises important questions about the role of AI in ethical decision-making within clinical practice. While AI has proven valuable in streamlining workflows and supporting clinical judgments, its expansion into moral reasoning may face strong resistance unless designers can bridge the gap between AI capabilities and human trust.