How do we regulate AI purposefully to ensure that it enhances rather than hinders the practice of medicine?
There is something different about the current moment in artificial intelligence. New capabilities are emerging rapidly due to advances in computing, algorithmic development, and access to vast amounts of data. The change feels real.
There is a clear appetite, from within and without, to incorporate AI into medical practice. A recent NEJM AI study using insurance claims data shows clear, albeit limited, evidence for the use of AI devices in routine health care delivery. Medical AI experts are predicting an emerging third epoch of AI in health care, distinct from the first epoch, which focused on developing expert systems that mimic human decision-making, and the second epoch, which relies on deep learning to make predictions.
Optimism for AI in health care is warranted. Machine learning systems can identify patients at risk of critical illness, deep learning approaches can match expert human imaging technicians, and AI chatbots can expand access to mental health support. However, health care differs fundamentally from consumer technology, where regulation is often viewed as an obstacle to innovation. In health care, purposeful regulation is crucial to ensuring the trustworthiness of AI models and to promoting the four pillars of incorporating AI into health care workflows: physician buy-in, patient acceptance, provider investment, and payer support.
As we look forward with anticipation to an AI-powered health care future, how do we regulate AI purposefully to ensure that it enhances rather than hinders the practice of medicine? How do we foster innovation while mitigating risk and identifying bias? What must be done to ensure that AI systems do not automate and exacerbate today's health care disparities?
These are not trivial questions. Patient welfare and ethical considerations are paramount. Get it right, and AI could propel health care to new heights of efficiency, insight, and patient-centered care. Get it wrong, and we risk unintended consequences ranging from exacerbating health care disparities to outright patient harm.
The biggest challenge in regulating AI is that AI itself is evolving at a breakneck pace. How do we create regulatory frameworks that are agile enough to keep pace with technological change, yet rigorous enough to ensure patient safety and ethical standards?
Early missteps offer a cautionary tale. Medicare Advantage plans used algorithmic tools to automate coverage denials, making real the algorithmic bias concerns researchers had previously raised and prompting a new policy from CMS. Without proper guidance, this will not be the last time we see algorithms behaving in ways that are inconsistent with patient-centered health care.
The first step in regulation requires establishing dedicated regulatory processes to ensure patient safety while providing guidance to innovators. Recognizing that emergent AI technologies did not fit well within existing regulatory frameworks, the FDA introduced the Software as a Medical Device pathway. However, the pathway is limited to software that either augments or replaces the work of physicians and excludes software that, among other things, provides administrative support to health care entities, promotes or maintains a healthy lifestyle, or functions as an electronic health record (including patient-provided information). It is unclear whether the pathway applies to the many ongoing generative AI applications that health care providers across the country are exploring.
As a next step, regulators should consider adopting a risk-based regulatory framework. This approach would differentiate between lower-risk tools, such as the automation of administrative tasks, and higher-risk ones, such as treatment decisions in time-sensitive clinical scenarios. Such a framework allows regulators to focus limited resources on the applications that require the most scrutiny. A discussion should also be initiated on payment models for AI tools in health care. Should payers reimburse AI methods that improve workflows? What about algorithms that demonstrate better patient outcomes? And what about those that demonstrate improved outcomes across diverse populations?
A final consideration is the ongoing oversight of AI systems in health care. Dataset shift, where an AI system does not perform as expected because the data it encounters in deployment differs from the data it was developed on, can result in systems behaving in completely unexpected ways. Public-private partnerships, in the form of health AI assurance labs, may be one possible solution.
We believe that patients, clinicians, and administrators, particularly those on the frontline, should take a leading role in shaping the guidelines for health AI. In particular, nurses and physicians, with their frontline experience, can provide invaluable insight into how AI tools would perform in real-world clinical settings. They can help ensure that regulations are based on practical medical knowledge, not just theory. Furthermore, clinicians have a moral responsibility to advocate for oversight that prioritizes the well-being of patients above all else. Health care AI should be guided by its impact on patient outcomes and quality of care, rather than profits or technological novelty for its own sake.
The path ahead may be challenging and will require collaboration among health care stakeholders, regulators, and technologists. However, with purposeful regulation, we can realize the immense potential of AI in medicine. We have the opportunity to shape technological progress in a way that is consistent with health care's highest calling: to provide the best possible care for all patients. With wisdom and vigilance, AI can advance our shared mission without compromising it. The well-being of millions of people hangs in the balance.
Tej D. Azad (@tdazad) is a senior neurosurgery resident at Johns Hopkins Hospital, an AAAS Science & Technology Policy fellow, and an affiliate scholar at the Hopkins Business of Health Initiative. Tinglong Dai is the Bernard T. Ferrari Professor at the Johns Hopkins Carey Business School, co-chair of the Johns Hopkins Workgroup on AI and Healthcare, which is part of the Hopkins Business of Health Initiative, and Vice President of Marketing, Communication, and Outreach at INFORMS.