
Senate Finance Committee discusses AI promise, pitfalls in health care

Experts weigh in on regulations needed as health leaders integrate artificial intelligence into patient care.

Witnesses are seated at the table ahead of testimony Feb. 8, 2024, in the U.S. Senate Finance Committee hearing, “Artificial Intelligence and Health Care: Promise and Pitfalls.”
© U.S. Senate Finance Committee

Federal lawmakers said they have plenty of questions as they consider regulations for artificial intelligence (AI) and how physicians and other clinicians develop it for patient care.

On Feb. 8, 2024, the Senate Finance Committee convened for the hearing, “Artificial Intelligence and Health Care: Promise and Pitfalls.” Five experts testified about the current uses of AI, regulations needed, and what is to come.

Committee Chair Sen. Ron Wyden (D-Oregon) touched on the promise and pitfalls in his opening statement.

“There are a lot of reasons to be optimistic about the potential of AI to improve health care,” Wyden said. “Today, the industry faces a host of challenges, all made worse by the strain of the COVID pandemic on our health system. There's a workforce shortage. Existing providers are facing high rates of burnout, and health care costs are rising faster than wages. And there is an ever-growing gap between the care that's needed and the care that's being delivered. Already, AI tools are being deployed to reduce some of these pressures and ease the strain on the industry and providers.”

Wyden also cited the research of one of the hearing witnesses, emergency physician Ziad Obermeyer, MD, associate professor and Blue Cross of California Distinguished Professor at the University of California, Berkeley. Obermeyer was a coauthor of a 2019 study that detected racial bias in an algorithm used to analyze patient health risks.

“Dr. Obermeyer found that the tool on average required Black patients to present with worse symptoms than white patients in order to qualify for the same level of care. Folks, that is not a close call. It's just not,” Wyden said. The witness panel took questions and shared insights with at least 11 other senators over the two-hour session.

Reimbursement needed

Siemens Healthineers has been applying AI to health care for more than 20 years, and it is the only company capable of providing end-to-end cancer care, said Peter Shen, the company’s head of digital and automation for North America. AI will be used to detect changes in brain volume over time, a potential predictor of neurodegenerative diseases such as Alzheimer disease, Shen said.

The U.S. Centers for Medicare & Medicaid Services (CMS) has recognized the value and the complex nature of algorithm-based health care services, but its payment policies have not consistently reimbursed physicians and other clinicians for those services, he said.

“This inconsistent, unpredictable approach stifles adoption by providers, especially in rural and underserved areas, and therefore restricts patients’ access to new and innovative diagnostic tests and treatments,” Shen said.

“Guaranteeing a consistent reimbursement process would empower hospitals and providers to invest in AI confidently, ensuring their services are appropriately reimbursed,” he said. “Without this financial support, these providers will face difficulties in embracing and integrating AI technologies, ultimately potentially denying revolutionary services to patients.”

Just getting started

Federal “guardrails” will help the few organizations already on the AI adoption highway, said Mark Sendak, MD, MPP, co-lead for the Health AI Partnership at Duke University.

But other health care organizations need AI roads, on-ramps and bridges, Sendak said.

“Most health care organizations in the U.S. need an on-ramp to the AI adoption highway,” he said. “They are struggling with clinician burnout; they face razor-thin or negative margins. They are entirely dependent on external EHR (electronic health record) vendors for technology, expertise and assistance. Simply put, they do not have the resources, personnel or technical infrastructure to embrace guardrails for the AI adoption highway.”

Some 15 years ago, Congress funded the procurement of EHRs and established 62 regional extension centers to support their adoption in low-resource settings. EHRs are far from perfect, but those federal programs enabled broad adoption of the technology, and health care organizations need similar infrastructure to integrate and monitor AI tools, Sendak said.

Creating standards

Hospital leaders recognize the need to vet AI tools, but most do not yet have robust review processes, and they need help, said Michelle M. Mello, JD, PhD, professor of health policy and of law at Stanford University.

Effective governance should not focus only on AI algorithms; it must also encompass how AI is incorporated into clinical workflows – how physicians, nurses and other staff interact with each other, with the AI tool and with patients, Mello said.

“Oversight must go beyond the algorithm to reach how adopting organizations will use it,” Mello said. “To take a simple analogy: if we want to prevent motor vehicle accidents, it's not enough to just set design standards for cars. Road safety features, driver's licensing requirements and rules of the road also help keep people safe.”

The field is moving too quickly to enshrine exact standards in law, Mello said. Instead, she suggested that government leaders “foster a consensus-building process that brings experts together to create standards and processes for evaluating proposed uses of health care AI.”

Shaping the market

Obermeyer cited a number of health care issues and patient conditions where AI could help, ranging from predicting heart arrhythmias to discovering new antibiotics to detecting signs of violence in X-rays of patient injuries.

Incidents of bias highlight the need for oversight, Obermeyer said, and he prescribed specificity about what AI tools are built to predict.

“We should know exactly what an algorithm is predicting. If it's predicting cost, the developer shouldn't be able to say that it's predicting risk or needs or something else,” he said.

Health care leaders must insist on accountability, measuring performance with new, independent data sets that reflect the majority of the American population. Government programs also should pay for AI that generates value, which is key to shaping how AI tools are developed, Obermeyer said.

“We don't need to settle for the often poor quality products that are put in front of us by developers today,” he said. “We can shape that market thanks to the purchasing power of those programs.”

Long-term care

AI can be part of reforming care delivery system-wide, said Katherine Baicker, PhD, provost of the University of Chicago. The current system under-incentivizes investment in care that would improve patient well-being over decades, which is particularly problematic for Medicare, she said.

Private insurers have incentives to invest in care that patients can appreciate in the near term, but long-term benefits are harder to discern while patients are still healthy, before they become sick, Baicker said.

“Just as risk adjustment now provides a mechanism to deter insurers from selectively enrolling only healthy people, having that kind of population-level, long-run risk adjustment can help provide the resources needed to invest in people's long-term health, which is, of course, first and foremost to their benefit, but also to the benefit of Medicare, from whom those patients will eventually be receiving care,” Baicker said.

Updated legislation

Wyden also touted the updated Algorithmic Accountability Act, which he is cosponsoring to “pull back the curtain” on AI systems and protect people affected by them against bias and discrimination. Wyden, Sen. Cory Booker (D-New Jersey) and Rep. Yvette Clarke (D-New York) have sponsored similar legislation in the past, and the current bill has support from at least 10 Democrats in the Senate.
