The ethical adoption of AI in healthcare: A proactive approach

Author(s): Manny Krakaris

The rapid advancement of artificial intelligence (AI), most notably generative AI, has revolutionized various industries. Health care is no exception, and AI holds immense potential to improve efficiencies and outcomes. However, organizations and governmental agencies have already indicated the need to address the ethical issues that such technology can create.

Manny Krakaris, CEO, Augmedix (©Augmedix)

Responsible application of AI is crucial, especially when the health and well-being of people are at stake. To manage this responsibility, health care leaders need to unite with experts from various backgrounds to assess the current state of AI, its capabilities and limitations, and its path forward. These considerations must account for all stakeholders—patients, clinicians, health care organizations and technology developers. Viewing AI holistically will enable the health care industry to take maximum advantage of its potential while protecting public safety.

Importance of AI in health care

AI has the potential to help bridge the widening gap in the health care industry between the shrinking capacity to provide care and the rising demand for care from a growing and aging patient population.

The latest technological advancements can save clinicians up to three hours each workday by relieving them of essential administrative tasks. That time can be repurposed to increase patient access, something the industry desperately needs, while also enabling clinicians and patients to establish a more human connection. However, accurately documenting the patient encounter is a solemn responsibility that the industry must continue to undertake with the utmost care. While AI plays a significant role in automating documentation, we cannot rely on it blindly to deliver the accuracy our industry demands, because the technology is far from perfect.

Accordingly, it is essential to maintain some degree of human involvement to ensure the required level of accuracy, which is ultimately reflected in the quality of care patients receive. AI can be a useful productivity tool, but it cannot replace the human element entirely.

As the CEO of an ambient medical documentation company, I consider it vital to emphasize the responsible use of AI across our organization. And we are proactive about it. Rather than waiting for regulations to catch up with technological advancements, leaders must take the initiative and collaborate to ensure concerns are addressed. It is our responsibility as health care leaders to establish and uphold a standard of transparency, safety, privacy, and trust.

Critical considerations for AI adoption in health care

Large language models (LLMs), a class of natural language processing (NLP) algorithms, have seen widespread use in health care for some time and are becoming ever more prominent in industry conversations. New LLMs, such as GPT-4, are powerful tools, but they are de facto black boxes, and they rightfully do not instill confidence among many in the industry.

LLMs come with certain challenges that must be addressed to ensure their responsible adoption. What we learn from meeting these challenges now will help us solve the issues that follow.

Accuracy and reliability: LLMs are prone to making up answers when faced with insufficient information, producing unreliable outputs often referred to as “hallucinations.” This phenomenon has drawn considerable criticism and represents a major hurdle for AI’s adoption in health care, because clinicians and patients need to be able to trust this technology. LLMs must have tight guardrails in place to ensure reliability and patient safety. There are several approaches to this problem; some are technology-based, while others call for human experts to review LLM output before action is taken. Correlating LLM output against output generated by independently derived models is one approach that can help identify areas of low confidence requiring human expert review, as the sketch below illustrates.
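
To make the correlation idea concrete, here is a minimal sketch, assuming a hypothetical setup in which an independent extraction model produces a list of facts: each sentence of the LLM note is scored by token overlap against those facts, and low-agreement sentences are routed to a human reviewer. The function names, tokenizer, and 0.5 threshold are illustrative assumptions, not a description of any production system.

```python
# Hypothetical sketch: flag low-confidence sentences in an LLM-generated
# clinical note by correlating them against facts extracted by an
# independent model. All names and the 0.5 threshold are assumptions.

import re

REVIEW_THRESHOLD = 0.5  # agreement below this triggers human expert review

def tokens(text: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_agreement(sentence: str, reference_facts: list[str]) -> float:
    """Highest token overlap between a note sentence and any fact
    produced by the independent reference model."""
    s = tokens(sentence)
    if not s:
        return 0.0
    return max((len(s & tokens(f)) / len(s) for f in reference_facts),
               default=0.0)

def flag_for_review(note_sentences: list[str],
                    reference_facts: list[str]) -> list[str]:
    """Collect sentences whose agreement with the independent model is
    too low to accept without clinician review."""
    return [s for s in note_sentences
            if best_agreement(s, reference_facts) < REVIEW_THRESHOLD]

# Example: the allergy claim has no support in the reference output,
# so it is routed to a human reviewer rather than accepted blindly.
llm_note = ["Patient reports mild headache for two days.",
            "Patient denies any medication allergies."]
independent_facts = ["headache, mild, duration two days"]
print(flag_for_review(llm_note, independent_facts))
# -> ['Patient denies any medication allergies.']
```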

Bias: LLMs learn from their input data sets, including content and data contributed by individuals on the internet, which can build biases into AI algorithms. Guarding against these biases is essential to ensure fair and equitable AI applications. Efforts include actively training algorithms on data from diverse patient populations and ensuring that bias is considered whenever data is reviewed. This is an area where human involvement is vital: having a diverse team from various backgrounds review and analyze the information this technology creates is necessary to root out inherent biases.
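
As one hedged illustration of what such review could rest on, the sketch below audits a documentation model's correction rate across patient subgroups and flags large disparities for the team to investigate; the record fields, groups, and five-point threshold are hypothetical assumptions.

```python
# Hypothetical sketch: audit an AI documentation model's error rate across
# patient subgroups to surface potential bias. Field names, groups, and
# the disparity threshold are illustrative assumptions.

from collections import defaultdict

DISPARITY_THRESHOLD = 0.05  # flag gaps larger than 5 percentage points

def error_rates_by_group(records: list[dict]) -> dict[str, float]:
    """Fraction of notes needing correction, computed per subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += r["needed_correction"]
    return {g: errors[g] / totals[g] for g in totals}

def has_disparity(rates: dict[str, float]) -> bool:
    """True when the gap between the best- and worst-served groups
    exceeds the threshold, signaling a need for deeper human review."""
    return max(rates.values()) - min(rates.values()) > DISPARITY_THRESHOLD

# Example with made-up review outcomes (1 = clinician corrected the note).
reviews = [
    {"group": "A", "needed_correction": 0},
    {"group": "A", "needed_correction": 1},
    {"group": "B", "needed_correction": 1},
    {"group": "B", "needed_correction": 1},
]
rates = error_rates_by_group(reviews)
print(rates, "disparity:", has_disparity(rates))  # A: 0.5, B: 1.0 -> True
```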

Data completeness and structure: LLMs produce summaries based on the data they are given, which can include transcripts of clinician-patient conversations. However, if the input data is incomplete or inaccurate, the accuracy of the summary will be compromised. To address this potential deficiency, LLMs should be supplemented with independent data sets and models, and their output should be reviewed by a clinician to ensure accuracy.
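
As a simple sketch of such a safeguard, assuming a summary represented as a dictionary of named sections, the code below checks an LLM-generated note for required sections and routes incomplete drafts to clinician review; the section list and routing logic are assumptions for illustration.

```python
# Hypothetical sketch: verify that an LLM-generated visit summary covers
# the sections a complete note requires, and route incomplete drafts to a
# clinician. The required sections and note structure are assumptions.

REQUIRED_SECTIONS = ("chief complaint", "history", "assessment", "plan")

def missing_sections(summary: dict[str, str]) -> list[str]:
    """Required sections that are absent or empty in the summary."""
    return [s for s in REQUIRED_SECTIONS if not summary.get(s, "").strip()]

def route(summary: dict[str, str]) -> str:
    """Hold incomplete summaries for clinician review instead of filing."""
    gaps = missing_sections(summary)
    if gaps:
        return f"clinician review (missing: {', '.join(gaps)})"
    return "auto-file"

# Example: the plan section is empty, so the note is held for review.
draft = {"chief complaint": "cough", "history": "3 days, dry",
         "assessment": "likely viral URI", "plan": ""}
print(route(draft))  # -> clinician review (missing: plan)
```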

Each of these issues brings its own challenges that health care leaders can (and should) address. With thoughtful insights from experts, we can collectively solve these problems while preemptively considering future technological and ethical issues. This technology is evolving rapidly, and it is paramount that industry leaders stay ahead of it.

Interoperability for enhanced AI development

Interoperability, the seamless exchange of patient data between health systems, can significantly impact AI adoption. If health systems share data effectively, AI algorithms can learn from a larger, collective knowledge base. This can reduce reliance on internet data, with its inherent biases, and improve the accuracy of AI models.
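For a sense of what such exchange can look like in practice, many health systems expose data through HL7 FHIR REST APIs. The sketch below reads Observation resources from a hypothetical FHIR server; the base URL and token are placeholders, and any real use would require proper authorization, patient consent, and de-identification.

```python
# Hypothetical sketch: pull structured observations from a FHIR server so
# an AI model can learn from curated clinical data rather than internet
# text. The base URL and token are placeholders; real use requires
# authorization, consent, and de-identification.

import requests

FHIR_BASE = "https://fhir.example-health.org"  # placeholder endpoint
TOKEN = "..."  # placeholder OAuth2 bearer token

def fetch_observations(patient_id: str, code: str) -> list[dict]:
    """Read Observation resources (e.g., LOINC-coded labs) for a patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": code},
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```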

The role of an AI advisory council

Another area worth considering is the formation of AI advisory councils. These independent bodies can serve as ombudsmen and advocates for the responsible and ethical application of AI. Such councils can be composed of a diverse group of experts, including leading AI researchers, bioinformaticians, executives from health care systems, and professional societies that contribute to government and industry policies governing AI in health care. By gathering perspectives from such diverse stakeholders, we can better ensure that organizations use AI responsibly and thoughtfully, in the best interest of patients and clinicians.

As AI continues to reshape the health care landscape, its responsible adoption must be a top priority. The health care industry’s adoption of AI will be governed by trust. By incorporating transparency, accuracy, reliability, and expert oversight into their AI applications, organizations can instill trust in this technology and thereby help ensure its widespread adoption.

About Manny Krakaris

Manny Krakaris serves as the Chief Executive Officer of Augmedix, a company that delivers industry-leading ambient medical documentation and data solutions to health care systems, physician practices, hospitals, and telemedicine practitioners.
