
Generative AI in health care: How far do we have to go and what can we do to accelerate progress?

Author(s): Richard Mackey

Generative AI has a large number of health care use cases, but the road to maturity is a long one. Here is an analysis of AI’s near-term potential in chronic condition management, with thoughts on bias and regulation.


Health care organizations of all types are increasingly signaling that they are “all in” on generative artificial intelligence (GenAI), the type of AI that can rapidly classify data, summarize information, and create new audio, visual, and written content. While there are hundreds of exciting use cases for generative AI across the payer, provider, and patient communities, there is still much to be done before stakeholders have broad access to safe, trustworthy, effective AI tools at scale.

As we work through the challenges of this critical problem-solving phase of AI’s evolution, leaders in the space must work closely with one another and with regulatory agencies to create appropriate guardrails, set expectations, and catalyze innovation.

As a chief technology officer, I strongly believe that a collaborative approach and an eye toward patient-centered outcomes are essential for supporting more effective care powered by AI, especially for services related to chronic diseases. Recently, I had the opportunity to reflect on how generative AI is already making an impact, and on what the industry must do to keep moving toward the promises of an AI-driven health care ecosystem.

How mature is AI when it comes to supporting care decisions?

There’s no question that AI for care decision support is getting more sophisticated every day, especially in the realm of generative AI. We’re learning more and more about how AI tools can identify patterns and help us use those insights to guide decisions in everything from radiology reports to sepsis detection to chronic disease management.

However, none of these algorithms is yet mature or reliable enough to be used completely on its own. These tools augment clinicians’ decision-making; they do not replace it. Keeping humans in the loop remains a mandate as we work through AI’s growing pains, including the possibility of bias, hallucinations, and inaccuracies.

I’m optimistic about how fast we’ll solve these problems, but at the moment we’re still at the very beginning of the maturity curve. Right now, humans must still be the ones making the ultimate decisions about care so we can ensure that we’re delivering the best possible services to the people we’re responsible for treating.

Where do you see the most near-term potential for AI to upend current approaches to chronic care management?

I’m most excited about AI’s potential to support long-term adherence to therapies, particularly adherence to continuous glucose monitors (CGMs) for people with diabetes. AI has already shown its strength in predicting nonadherence based on clinical and administrative data. But when we add novel datasets, such as patient behavior data and socioeconomic data, we can start to truly understand how to personalize interventions based on risk classifications and deliver patient-specific education and support at precise moments in their self-care journeys, up to several months earlier than we have been able to in the past.
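
As a rough sketch of the kind of risk classification described here, and not a description of CCS’s actual models, the toy example below trains a classifier on hypothetical clinical, behavioral, and socioeconomic features to flag patients at risk of falling off CGM therapy. Every feature name, coefficient, and threshold is an assumption for illustration only.

```python
# Illustrative sketch only: a toy nonadherence-risk classifier combining
# hypothetical clinical, behavioral, and socioeconomic features.
# Feature names and synthetic labels are assumptions, not a production model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2_000

# Synthetic cohort: each row represents one patient on CGM therapy.
df = pd.DataFrame({
    "a1c_last": rng.normal(7.8, 1.2, n),                # clinical
    "days_since_last_refill": rng.integers(0, 120, n),  # administrative
    "cgm_scans_per_day": rng.poisson(6, n),              # behavior data
    "missed_appointments_12m": rng.poisson(1, n),        # behavior data
    "transport_barrier": rng.integers(0, 2, n),          # socioeconomic
})

# Synthetic label: likelihood of falling off therapy in the next 90 days.
logit = (
    0.02 * df["days_since_last_refill"]
    - 0.25 * df["cgm_scans_per_day"]
    + 0.60 * df["missed_appointments_12m"]
    + 0.80 * df["transport_barrier"]
    - 1.0
)
df["nonadherent_90d"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = df.drop(columns="nonadherent_90d")
y = df["nonadherent_90d"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# Rank the highest-risk patients for early, personalized outreach.
risk = model.predict_proba(X_test)[:, 1]
outreach_list = X_test.assign(risk=risk).nlargest(10, "risk")
print(outreach_list[["days_since_last_refill", "cgm_scans_per_day", "risk"]])
```

In practice, the labels would come from observed refill and device-usage histories rather than a synthetic formula, and the ranked list would feed a human-in-the-loop outreach workflow, not an automated clinical action.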

This is crucial for adherence to CGM therapy, which requires people living with diabetes to analyze their own data multiple times a day and use that information to make decisions about diet, exercise, and other lifestyle factors.

If someone is showing signals that they’re about to fall off their therapy, and we can use AI to intervene proactively and keep them on track, that fundamentally changes the legacy model of chronic disease management, which is often extremely reactive rather than proactive. We know we can improve outcomes and experiences this way, and we’re already proving we can save money, too: up to $2,200 per patient per year through improved CGM adherence and better glycemic control.

How do you think patients living with diabetes and physicians serving them will be most positively impacted by AI?

AI will help us accelerate the shift from “sick care” to more proactive, personalized, and preventive care. AI will help physicians “skate ahead of the puck” by providing predictive capabilities built on datasets that are far too large and complex for a human brain to handle on its own. When physicians know more and know it sooner, they can really start pushing toward the long-standing Quadruple Aim goals of better experiences (for doctors and patients), lower costs, and better outcomes, which have proven highly challenging to reach in the diabetes space. The key to success will be creating a seamless, interoperable, and reliable data ecosystem to inform our AI tools, and equitably distributing access to high-quality results across populations, so that every patient living with diabetes has access to a personalized, preventive, and holistic chronic care management experience.
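
As one hedged illustration of what that interoperable data layer might look like in code (the endpoint, patient ID, and LOINC code below are placeholders, not a description of any specific vendor’s feed), here is a minimal sketch of pulling glucose observations from a FHIR R4 server so a downstream model works from standards-based data.

```python
# Illustrative sketch only: retrieving glucose observations from a FHIR R4
# server so AI tools consume a shared, interoperable data layer.
# The server URL and patient ID are placeholders; LOINC 2339-0 (glucose in
# blood) is used as an example code and may differ from how a CGM feed is coded.
import requests

FHIR_BASE = "https://fhir.example-health.org/r4"   # placeholder endpoint
PATIENT_ID = "example-patient-123"                 # placeholder ID

params = {
    "patient": PATIENT_ID,
    "code": "http://loinc.org|2339-0",   # example LOINC code for blood glucose
    "_sort": "-date",
    "_count": 50,
}
resp = requests.get(f"{FHIR_BASE}/Observation", params=params, timeout=30)
resp.raise_for_status()
bundle = resp.json()

readings = [
    {
        "time": entry["resource"].get("effectiveDateTime"),
        "value": entry["resource"].get("valueQuantity", {}).get("value"),
        "unit": entry["resource"].get("valueQuantity", {}).get("unit"),
    }
    for entry in bundle.get("entry", [])
    if entry["resource"]["resourceType"] == "Observation"
]
print(f"Retrieved {len(readings)} glucose readings for model input.")
```

The point is less the specific API call than the principle: models should be fed from the same standards-based data that every stakeholder can inspect and verify.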

How do you envision AI positively impacting specialists serving the population of people living with diabetes?

Provider shortages are hitting the diabetes world particularly hard, with not enough endocrinologists to offer specialty support and nowhere near enough primary care providers to close those gaps for the 38.4 million people living with diabetes in the United States, let alone the 97.6 million with prediabetes. It’s going to become essential to use AI to augment and extend the capacity of our human providers so they can practice with confidence and meet the needs of this growing population.

Specialists and physicians of all kinds win when patients win, so there is an inherent mutual benefit in better outcomes and lower costs. More specifically, we have the opportunity to use AI as a workflow enhancer and smart assistant to help overburdened providers identify potential issues earlier and more frequently, before they become full-blown crisis events.

What concerns do you have as a CTO when it comes to health care AI bias?

We need to recognize that our systems and datasets will carry some bias based on how the data is collected and maintained. The important thing is to develop a test-and-learn approach as an organization so that the effects of bias can be identified early and factored into the recommendations coming from the model.
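
To make the test-and-learn idea concrete, here is a minimal, hypothetical sketch of one such check: comparing false-negative and false-positive rates across patient subgroups so that systematic gaps surface early. The subgroup labels and toy predictions are assumptions for illustration, not any organization’s actual audit.

```python
# Illustrative sketch only: a simple "test and learn" bias check comparing
# error rates across patient subgroups. Group names and predictions are toy
# placeholders for whatever populations and classifier an organization uses.
import numpy as np
import pandas as pd

def subgroup_report(y_true, y_pred, groups):
    """False-negative and false-positive rates per subgroup."""
    df = pd.DataFrame({"y": y_true, "pred": y_pred, "group": groups})
    rows = []
    for name, g in df.groupby("group"):
        pos = g[g["y"] == 1]
        neg = g[g["y"] == 0]
        rows.append({
            "group": name,
            "n": len(g),
            "false_negative_rate": 1 - pos["pred"].mean() if len(pos) else np.nan,
            "false_positive_rate": neg["pred"].mean() if len(neg) else np.nan,
        })
    return pd.DataFrame(rows)

# Toy example: a model that under-flags one subgroup.
rng = np.random.default_rng(7)
y_true = rng.binomial(1, 0.3, 1_000)
groups = rng.choice(["group_a", "group_b"], 1_000)
y_pred = np.where(
    (groups == "group_b") & (y_true == 1),
    rng.binomial(1, 0.5, 1_000),   # misses roughly half of true positives in group_b
    y_true,                        # otherwise perfect, purely for illustration
)
print(subgroup_report(y_true, y_pred, groups))
```

If one subgroup’s false-negative rate runs well above the others’, that finding would feed back into data collection, re-weighting, or how the model’s recommendations are acted on, with clinicians still making the final call.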

Regulations specific to health care AI are emerging from regulators, nonprofits, and industry consortiums. Are there any areas where you feel like these groups are not seeing the “forest for the trees”?

The health care AI field is evolving rapidly. As such, it is crucial for regulators to offer a comprehensive perspective on how AI can and should be used in health care safely and effectively. It is vital that regulations advance beyond their current point, particularly as they relate to privacy, security, bias, and transparency.

AI regulation in the United States has proceeded at a more measured pace, with a focus on executive orders and frameworks meant to shape future rules. The European Union (EU), by contrast, has been swifter in putting regulatory guardrails in place, some of which may prove difficult or burdensome to implement.

Safeguarding personal information is paramount in health care, and it will become important to deploy generative tools in ways that maintain privacy. The key is to introduce basic, pragmatic guidelines that indicate where and how generative AI is being deployed for the benefit of the patient, along with requirements for transparency and bias mitigation when AI tools are used. Given how quickly some of these tools reach deployment, it is important that regulations in the United States accelerate for the benefit of the patient, while not necessarily mirroring the EU’s approach, which holds the potential to stifle rather than encourage AI innovation.

If we can remain aligned on regulatory matters while ensuring that developers and users can work together effectively, we can create an environment that balances safety, security, and accessibility so that everyone has the opportunity to benefit from what AI has to offer.

Richard Mackey is chief technology officer at CCS, a company that transforms chronic care management by combining medical devices and supplies with comprehensive patient education and coaching, all in one platform.
