Clinicians need to acquire and employ AI not as the latest “me too” thing that everyone else is using, but rather with clearly defined purposes and careful planning.
[Image: AI will not solve all problems in medicine. ©Lalaka - stock.adobe.com]
At my most recent health visit, my doctor surprised me by saying he’s thinking about retiring early. He’s in his mid-50s. He has a successful practice. His patients love him. Why, I asked.
For much of his career, he explained, he spent most of his day engaged with his patients, helping them with his advice. But like other physicians, he has found himself spending ever more time away from patients, dealing with administrative functions. As I know from my work, such activities now consume up to 50% of clinicians' time. Here was yet more proof that administrative burdens are sapping the joy of practicing medicine while also saddling practices with excess administrative costs that undermine their financial viability.
I would love for artificial intelligence to be the cure-all that lifts the administrative burdens from health care providers’ shoulders and keeps my doctor in practice. And indeed, AI is showing significant potential to be transformative for reducing burnout and sky-high administrative costs, which are among the biggest challenges facing the U.S. health care system. So, there are good reasons why physicians’ use of AI for certain tasks has nearly doubled in the past year and why the American Medical Association says “AI” can stand for “augmented intelligence” for its ability to enhance, not replace, physicians’ knowledge and skills.
But there's a reason that a panacea, the all-healing remedy of Greek mythology, is seldom seen in reality. AI may increase efficiency in administrative areas such as coding, clinical notes, and the provider side of the preauthorization process. But the "garbage in, garbage out" principle definitely applies to AI: these tools must be carefully built and trained on high-quality data and parameters to produce accurate outcomes. How have the AI tools that you're using, or might be considering, actually been trained?
So, I'll offer another concept of what AI can stand for: act intentionally. This is my central tenet for all uses of new technology in health care. Acting intentionally is especially important with generative AI. Clinicians need to acquire and employ AI not as the latest "me too" thing that everyone else is using, but rather with clearly defined purposes and careful planning. After all, we have already seen the unintended consequences when practices rushed to adopt the new technology of EHRs a decade ago.
While EHRs have freed clinics from giant cabinets full of paper records and now support health research as well as care, they still create needless headaches and inefficiencies because they were designed mostly to support billing rather than care and were built without adequate clinician input. One study reported that providers now spend about 45% of their time on the EHR.
So, to act intentionally when adopting AI, I offer a few additional guidelines.
Learn from the experience of others. AI can provide a big assist for tasks such as medical coding and the documentation of patient exams. Successful applications such as these are already evident, so it’s important to stay on top of what’s happening with AI through health care conferences, professional connections, and online sources, such as the American Medical Association Education Hub. And don’t gloss over implementation challenges that you read or hear about. These must be carefully addressed before full adoption.
Don't neglect fundamentals. Automation guided by AI may well increase the efficiency of processes in your practice, but it's also critical to clean up errors and inefficiencies in other components of your processes, particularly your practice's revenue cycle management (RCM). Those issues won't go away just because you use AI for some pieces of your RCM. For example, performing charge capture audits enables practices to identify and address weaknesses in documenting patient care and assigning medical codes.
Respect patients and protect sensitive information. Public opinion surveys have shown mixed results, indicating both interest in and uncertainty or distrust about use of AI in health care. So, it behooves clinicians to be transparent with their patients about when and how they’re using AI and to take data security seriously. The reality that AI works with large data sets of health information that include patient health data does not preclude its use, but it does call for maintaining a vigilant focus on data security and privacy practices.
Remember: AI augments rather than replaces you. "AI hallucinations" can generate inaccurate or nonsensical outputs. There's also the challenge of "drift," the accumulation of changes in the underlying data that can degrade an AI tool's predictive power over time. So, while we can appreciate the time savings of AI-generated medical codes, exam notes, and patient communications, experts still need to review AI-generated coding for errors, and clinicians should review their AI-generated examination notes and other outputs for accuracy.
By acting intentionally and seeing AI as only one part of the whole solution, I'm confident we can make it an effective prescription for the administrative burdens and costs ailing our health care system. And with the right approach, it can enable practitioners like my doctor to restore their focus on the reasons they committed to a career in medicine in the first place.
Allen Fredrickson is a pioneer in reducing administrative burdens and costs in the US health care industry and the founder and CEO of Signature Performance, a full-service health care administrative business.