How America can seize its rare opportunity to nail AI regulation for health care

What does successful regulation of health care-specific AI models look like?

Michal Tzuchman-Katz, MD, CEO and Co-Founder, Kahun Medical: ©Kahun Medical

President Biden's first executive order on responsible and trustworthy AI development shows that the White House is serious about keeping AI growth in check. Its emphasis on safety and security guardrails, consumer and labor market protections, and equitable advancement marks a strong step forward, following Biden's meeting earlier this summer with seven leading U.S.-based AI companies to press for voluntary restraints.

And there’s a rare opportunity for Congress to take it a step further. Majorities of voters from both parties prefer federal regulation of AI to self-regulation by tech companies, according to the Artificial Intelligence Policy Institute, and 82% said they don’t trust tech executives to regulate AI.

This perspective is especially critical for AI companies building health care-specific models meant to assist health care professionals. Overly rigid regulations can stall development, and we've seen that happen across a multitude of industries. At the same time, however, regulators can't afford to be lenient for the sake of progress or potential profits.

With such a broad consensus, it would be a massive missed opportunity if Congress didn’t seize on this sentiment to build an effective bipartisan framework for AI regulation beyond an executive order. But what does success here look like for health care-specific AI models?

Regulation as a catalyst for AI development

Too often, sweeping regulation in a particular industry comes either as a response to a massive crisis, such as the Dodd-Frank reforms after the 2008 financial crisis, or as an unnecessary hurdle to innovation. But successful AI regulation would be the opposite: rather than stifling innovation, it would guide companies to develop models that are more accurate and reliable, and therefore more powerful and capable of improving human lives.

For health care-specific AI to flourish, the professionals using it have to be able to trust its output, full stop. Integrating AI that's not up to par, and potentially adding hurdles to industries already struggling with efficiency, is the antithesis of that. We know that AI offers tremendous near-term potential in many industries beyond simple automation: it could help architects design buildings, lawyers build cases, and doctors handle clinical work. But if we don't regulate AI properly, there's no way to ensure responsible adoption in any of these fields, especially health care.

The main gap here is that large language models don't reason like a doctor. They produce their output by predicting the most statistically probable next word, with no mechanism for understanding the context of those words. The result is an unreliable model that gets things right much of the time but sometimes gets them wrong or hallucinates information outright.
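For readers who want to see that mechanism concretely, here is a minimal, hypothetical sketch in Python. The word table and probabilities are invented for illustration; real LLMs use neural networks over vast vocabularies, but the selection principle is the same: the model picks whatever continuation is statistically likely, with nothing checking whether it is clinically true.

```python
# A toy "language model" that, like an LLM, always emits the most
# statistically probable next word -- with no notion of clinical context.
# The probabilities below are invented purely for illustration.

NEXT_WORD_PROBS = {
    ("chest", "pain"): {"radiating": 0.40, "relieved": 0.35, "caused": 0.25},
    ("pain", "radiating"): {"to": 0.90, "from": 0.10},
    ("radiating", "to"): {"the": 0.95, "a": 0.05},
    ("to", "the"): {"left": 0.50, "right": 0.30, "jaw": 0.20},
}

def most_probable_continuation(prompt: list[str], max_words: int = 4) -> list[str]:
    """Greedily append the most probable next word at each step."""
    words = list(prompt)
    for _ in range(max_words):
        context = tuple(words[-2:])        # condition on the last two words only
        candidates = NEXT_WORD_PROBS.get(context)
        if not candidates:
            break                          # no statistics for this context -> stop
        # Chosen by probability alone; nothing verifies whether this is
        # clinically correct for the patient in question.
        words.append(max(candidates, key=candidates.get))
    return words

print(" ".join(most_probable_continuation(["chest", "pain"])))
# -> "chest pain radiating to the left"
```

The output reads fluently and is usually plausible, which is exactly why it can mislead: a clinician has no way to tell, from fluency alone, whether the model filled in the blank from sound evidence or just from word statistics.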

Because doctors and other health care professionals can’t trust such a tool, they don’t even come close to harnessing AI’s true potential in their industries. This is why a firm regulatory backbone is fundamental to catalyzing more sustainable and credible development.

Proactive regulation must include health care leaders

These flaws of generative AI models should be fully understood and taken into account when crafting legislation to regulate them. That's in addition to the many other issues that need to be addressed, such as privacy protections for the personal user data that goes into training these models, labor market impacts, and the way AI-produced content shakes up longstanding intellectual property law.

Biden's executive order kicked things off in the right direction. But it should be viewed as just a starting point, one from which regulators can take a proactive approach that involves consulting with the industry itself. That includes the AI giants, such as OpenAI and Google. But it must also include the smaller startups in the health care space.

Since the AI giants aren't prioritizing professional-level expertise in every field their models could be used in, relying on them to dictate regulation will likely produce a "one-size-fits-all" approach that ignores the nuances of health care-specific AI and unwittingly stalls industry-wide adoption.

In health care, for example, proper regulation can establish a baseline of elements that health care systems can look for when deciding whether an AI tool fits their own specific needs. That will encourage health care-specific AI companies to build models that smartly mitigate risk and foster transparency by ensuring their output is clinically validated and trustworthy.

But if regulators don't consult with AI companies already working in the health care space, there's no guarantee that this baseline of elements will accurately reflect what the industry needs or is even capable of delivering. This is why working with smaller, industry-specific companies is so critical: they can provide the ground-level understanding and expertise that large-scale AI companies simply cannot. By going in blind, regulators do themselves a disservice, creating rules that miss key knowledge and will inevitably require working backward to close those gaps.

In this scenario, regulators play two roles: they still set the rules, but they also help forge industry-spanning business relationships backed by regulatory clarity. With this sort of "regulatory safety net" in place, one that takes the industry's nuances and requirements into account, physicians and other health care professionals will have an easier time trusting AI models and incorporating them into their workflows. Knowing that an external force is keeping AI companies in line, and making sure their output is clinically verifiable and understandable, will only expedite adoption by physicians and health care systems.

That requires giving the health care AI builders already blazing a trail in the space a seat at the table preemptively, rather than only after things go awry. In doing so, regulatory bodies can catalyze the industry instead of slowing it down.

Michal Tzuchman-Katz, MD, is CEO and Co-Founder at Kahun Medical, a company that built an evidence-based clinical reasoning tool for physicians. Before co-founding Kahun in 2018, she worked as a pediatrician at Ichilov Sourasky Medical Center, where she also completed her residency. She continues to practice pediatric medicine at Clalit Health Services, Israel's largest HMO. Additionally, she has a background in software engineering and led a tech development team at LivePerson.
