
Health care's quest for responsible innovation
Why establishing AI safeguards to reduce risks in the prior authorization process is important
As a family physician, I have navigated the burdensome process associated with prior authorization. However, leaning too heavily on automation raises concerns when it comes to reviewing, approving, or denying prior authorization requests.
Responsible automation and mitigating bias in AI algorithms
Many health care professionals are rapidly adopting artificial intelligence (AI) and machine learning solutions to expedite the delivery of high-quality care and optimize administrative workflows. While these cutting-edge tools hold immense potential to enhance organizational efficiency, streamline operations, and revolutionize patient outcomes, it is vital to ensure that AI-driven technology is not only accurate but also adequately governed.
As physicians and health plans increasingly integrate AI into their operations, a new concern emerges regarding potential overreliance on AI for crucial decision-making, particularly in optimizing legacy prior authorization procedures.
To manage this risk, AI must operate under clinical oversight, though physicians need not duplicate all of the AI's work. Health plans must not rely solely on AI; physicians should verify decisions for accuracy, particularly denials. This combined approach preserves both efficiency and decision accuracy.
A national endeavor
Responding to an increase in improper prior authorization denials, the federal government has taken action. The need for such changes has also prompted national agencies to weigh in, especially regarding AI's role in prior authorization (PA) approvals and denials.
AI's primary function should focus on streamlining processes to expedite positive health outcomes and guide physicians toward optimal treatment options, irrespective of the prior authorization decision.
The core principles of responsible AI
The effectiveness of AI depends on the quality of input data, necessitating the recognition of its inherent biases and limitations to ensure responsible usage. Addressing these concerns and following four key considerations enables AI to help physicians deliver high-quality, value-based care and improve patient outcomes.
- Transparency: AI-driven decisions must be grounded in clinical data, and transparent practices are vital in minimizing the risk that AI models will recommend inappropriate denials.
- Privacy & Security: Safeguarding sensitive patient information necessitates clinical oversight. AI models for PA requests should exclude patient identifiers, relying solely on critical treatment data such as type, date of care, and diagnosis.
- Accountability: Developing responsible AI involves a strong partnership between clinical experts and software engineers to guarantee that AI model creation, assessment, and refinement are guided by specialized knowledge in the field.
- Inclusiveness & Equity: Patient care is influenced by social determinants, which underscores the importance of ensuring that at-risk patients impacted by such factors are not subject to automatic denial. Aligning AI models with specific health plan policies maintains consistent standards, prevents erroneous care denials, and upholds equity and expert judgment across patient populations.
A healthier future
The urgency of establishing and embracing ethical and responsible AI in health care is becoming increasingly evident. Its potential extends beyond diagnosis and treatment, promising to vastly improve patient experiences and health outcomes while upholding patient privacy and data security. By championing responsible AI alongside advanced clinical innovation and oversight, health care is charting a course toward a more patient-centric, precise, and compassionate health care system.
***
Mary Krebs, MD, FAAFP, is the medical director, Primary Care.