Commentary
Communication and empathy are keys to overcoming technology-driven disasters
Doctors are dedicated to the Hippocratic Oath, often summarized as “first, do no harm.” But technology has taken no such oath.
Major brands are now using and investing in generative and predictive artificial intelligence (AI), publicly backing its potential to reimagine many aspects of the health care experience, from patient charting to diagnoses and imaging analysis. And while the technology has great value and potential, it's not hard to imagine how an error in its data set, design or use could lead to patient injury and a reputational crisis.
In fact, we've already seen wide-scale failures. One AI-powered algorithm, used by hospitals and insurance companies across the country to predict which patients need "high-risk management programs," frequently overlooked Black patients, according to a research study published in Science. The model conflated an individual's health care needs with their health care spending.
Two large research studies – published in the British Medical Journal and Nature Machine Intelligence – similarly reviewed hundreds of AI tools developed to diagnose COVID and triage patients. Their conclusion: Out of more than 600 models, none were found to be accurate enough for clinical use. (Many had already been utilized in hospitals and health systems throughout the pandemic.)
Even the World Health Organization warned this year about the lack of proper caution accompanying the “precipitous adoption of untested [AI] systems” in health care — and the errors and patient harm that could result.
While AI offers powerful possibilities for health care organizations, it’s only a matter of time before the technology causes a crisis — whether a privacy breach, patient injury or system-wide error.
And when it does, leaders must be prepared with a communications plan that can bring humanity into a technological crisis and adapt to the situation's unique challenges. "AI made a mistake" is not an explanation that will foster trust or uphold a reputation.
Leaders of health care organizations will need to consider several challenges. For one, it's hard to promise that you have fixed something when you don't know what went wrong.
While it's critical to communicate with transparency and urgency in the face of a crisis, the opaque nature of AI will make it difficult to quickly share initial facts or provide a follow-up report. You may also be limited in what you can share by non-disclosure agreements, which some health care organizations are beginning to sign with technology vendors.
Without complete information and the ability to analyze what went wrong, the public and media attention will focus even more heavily on the actions you take to help those affected. An executive who can quickly and genuinely communicate a plan to address the wrongs will demonstrate humility, empathy and decisive leadership—and begin to earn back trust and brand reputation.
Any crisis response must consider the fact that in today's environment, patients already feel vulnerable to technology. According to a Pew Research Center study, 75% of Americans say their top concern about AI in health care is that providers "will move too fast" implementing new solutions "before fully understanding the risks for patients."
In other words: No one wants the computer to be in charge. If a crisis occurs, you need to show that a human is still minding the store.
In the face of impersonal technology, it’s humanity that builds trust. Leaders must address the situation personally, with responsibility and compassion, to counterbalance the role of AI. Technology can’t apologize, provide restitution or be sued, so you must be ready to step forward with care, concern and a plan to make it right.
Remember, while legal liability may not be clear, your organization is already on trial in the court of public opinion. Identifying the players at fault may be nuanced, as it likely depends on the information that can be gleaned from the AI tool and whether errors can be traced to a user or developer.
Still, you should be prepared for the public to hold your organization responsible for choosing to use the technology and for being the site of the injury. While you don’t want to own something that may not be your fault, it is critical to acknowledge the impact of the harm caused to your patients and immediately take action on their behalf.
AI is already profoundly changing health care. And the organizations that will lead the way are those ready both to harness the innovations AI enables and to protect their reputations and people if something goes wrong.
Brian Tierney is the CEO of Brian Communications and former publisher of The Philadelphia Inquirer, Daily News and Philly.com.