Warning Labels for AI Use in Healthcare

March 19, 2025

Can artificial intelligence be a prescription for what ails U.S. healthcare? Yes: AI has tremendous potential for relieving the healthcare system’s administrative costs and burdens. But AI should also come with a warning label for hospital and healthcare system administrators: if you apply AI before ensuring that sound processes and automation protocols are in place, you may get unwanted outcomes. It could even cause more harm than good.

The promise and perils of AI

Before I explain my five guidelines for using AI correctly to derive the benefits and avoid potential harm, I want to make one point clear: I’m all for discovering the advantages of AI in healthcare. This technology could be transformative for reducing clinician burnout and sky-high administrative costs, which are among the biggest challenges facing the U.S. healthcare system.

My own company, Signature Performance, uses and develops AI tools for healthcare services and administration. For example, we’re making customer call centers more efficient and effective by using sentiment-analysis AI to immediately escalate callers who are expressing suicidal thoughts. We are also seeing positive outcomes from using AI in the claims submission, adjudication, and processing functions we perform.
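Signature Performance’s production system isn’t public, so what follows is only a minimal sketch of the escalation pattern described above. It assumes a speech-to-text transcript of each caller utterance, and it substitutes a hypothetical phrase list and threshold for the trained sentiment model a real deployment would use:

```python
from dataclasses import dataclass

# Hypothetical crisis phrases. A production system would use a trained
# sentiment/risk classifier, not a hand-written word list.
CRISIS_PHRASES = ("want to die", "end my life", "kill myself", "no reason to live")


@dataclass
class CallTurn:
    caller_id: str
    transcript: str  # speech-to-text output for one caller utterance


def risk_score(text: str) -> float:
    """Stand-in for a sentiment model: score 1.0 on any crisis-phrase match."""
    lowered = text.lower()
    return 1.0 if any(phrase in lowered for phrase in CRISIS_PHRASES) else 0.0


def route_call(turn: CallTurn, threshold: float = 0.8) -> str:
    """Escalate immediately when the risk score crosses the threshold;
    otherwise leave the call in the normal handling queue."""
    if risk_score(turn.transcript) >= threshold:
        return f"ESCALATE {turn.caller_id}: transfer to crisis-trained specialist"
    return f"CONTINUE {turn.caller_id}: normal queue handling"


if __name__ == "__main__":
    print(route_call(CallTurn("c-101", "Some days I feel like I want to die.")))
    print(route_call(CallTurn("c-102", "I'm calling about my claim status.")))
```

Note the design point the sketch encodes: the model only flags risk, while the routing decision and the hand-off to a trained human stay deterministic and auditable.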

Additionally, more than half of physicians say reducing administrative burdens through automation is a key area of opportunity for AI. There’s a good reason why physicians’ use of AI for certain tasks has nearly doubled in the past year.

But rushing into the use of AI invites risk. The adoption of electronic health record (EHR) systems should serve as a cautionary tale.  

EHRs were once the shiny new tool that was going to make healthcare more efficient and effective. Although they have, in some ways, greatly improved care in our decentralized healthcare system, EHRs were often rushed (and sometimes forced) into widespread use with little input or involvement from the clinicians who have to use this technology every day. Because of the way they were implemented, these tools are still creating needless headaches and inefficiencies for administrators, clinicians, and patients. For instance, one study reported that physicians are spending about 45% of their time on the EHR.

Five guidelines for using AI the smart way

I don’t want AI applications to get to the point where they — like EHRs — need major surgery (see my solutions for EHR challenges). That’s why I encourage every healthcare leader to follow my five guidelines on where, when, and how to use AI. Think of the guidelines as preventive care for potential harms:

  1. Listen to your clinicians. When assessing where AI could help the most, start by asking your physicians, nurses, other clinicians, and support staff to identify their biggest pain points in providing care and serving patients. Then take each pain point to your technology experts and ask, “Is there a way to apply AI to solve this issue?”
  2. Learn from others’ use of AI in healthcare. AI can provide a big assist for tasks such as medical coding and the documentation of patient exams. Promising use cases such as these are already evident, so it’s important to stay on top of what’s happening with AI in healthcare through conferences, professional connections, and online sources, such as the American Medical Association Education Hub. But don’t gloss over the implementation challenges you read or hear about. These need to be addressed before full adoption.
  3. Don’t discard the human element. AI isn’t perfect, and you shouldn’t view it as a replacement for human involvement. For example, internal experts should review AI-generated coding for errors, and clinicians should review their AI-generated examination notes. As the AMA says in explaining its preference for the term “augmented” intelligence (over “artificial” intelligence), AI’s design “enhances human intelligence rather than replaces it.” With this in mind, reassure staff that you’re not introducing AI to replace their jobs but, instead, to free them up for more rewarding tasks.
  4. Don’t treat AI as a panacea. AI may increase efficiency in areas such as coding, clinical notes, and the provider side of the preauthorization process. But the “garbage in, garbage out” principle still applies: you must provide AI with high-quality data and parameters to get accurate outcomes. It’s also critical to clean up errors and inefficiencies in the other components of your revenue cycle management (RCM). Those issues won’t go away just because you use AI for some of the RCM pieces.
  5. Adopt safeguards to protect sensitive information. AI works with large data sets, often including patient information. That reality doesn’t preclude the use of AI, but it does call for maintaining a focus on data security and privacy practices (a minimal de-identification sketch follows this list).
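What those safeguards look like will vary by organization. As one illustration of the de-identification idea in guideline 5, here is a minimal sketch; the regex patterns and placeholder tokens are hypothetical stand-ins, and a real deployment would use validated de-identification tooling covering the full set of HIPAA identifiers:

```python
import re

# Illustrative patterns and placeholder tokens only. Production de-identification
# relies on validated tooling and covers all HIPAA identifiers, not just these.
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[MRN]": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}


def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholders before the text
    is logged or handed to an external AI service."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text


note = "Patient (MRN: 00482913, phone 402-555-0188) reports improved sleep."
print(redact(note))  # Patient ([MRN], phone [PHONE]) reports improved sleep.
```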

These five guidelines align with my central tenet for all uses of new technology in healthcare: Act intentionally. It is a happy — and fitting — coincidence that “AI” could also be an acronym for that phrase, because acting intentionally is especially important with this technology. Healthcare administrators must acquire and use AI, not as a shiny new toy, but with purpose and planning.

I’m confident that, by acting intentionally and following these five guidelines, we can make AI an effective prescription for the administration and delivery of healthcare.
