AI Isn’t Replacing Healthcare. It’s Exposing Where We Broke It.

The recent wave of headlines about OpenAI and healthcare hasn’t surprised me.

ChatGPT Health for consumers. OpenAI tools for hospitals and health systems. Patients ordering their own labs, scans, and genetic tests and showing up to appointments with more data than anyone knows what to do with.

To some, this feels chaotic. Even dangerous.

To me, it looks like a system finally revealing its stress fractures.

Consumers Didn’t Suddenly Get “Too Curious”

There’s a narrative floating around that patients are overreaching. That they’re bringing unnecessary data into exam rooms and creating confusion.

I don’t buy that.

People are ordering tests, using AI, and searching for answers because they feel undersupported. Appointments are short. Access is limited. Follow-up is inconsistent. And many patients leave visits feeling rushed or dismissed.

So they fill the gaps themselves.

That behavior didn’t start with AI. AI just made it visible.

Providers Are Overloaded, Not Unwilling

On the provider side, the frustration is real too.

Clinicians are being handed massive data sets without the time, reimbursement, or workflow support to interpret them. Explaining results eats into already limited face time. Administrative burden keeps growing. Burnout is everywhere.

This isn’t a failure of care. It’s a failure of infrastructure.

And it’s exactly what I wrote about in RCM 2030.

AI Is Sitting in the Middle of a Broken Conversation

What’s happening now isn’t about replacing clinicians or bypassing medical judgment.

It’s about translation.

Patients need help understanding what information means.
Providers need help managing volume and complexity.
Revenue cycle teams are caught downstream when confusion turns into duplicate tests, unnecessary visits, denials, and bad debt.

AI is stepping into the middle because no one else was assigned that job.

That’s the opportunity most organizations are missing.

This Is a Workforce Problem Disguised as a Technology Problem

I’ve said this before and I’ll keep saying it.

AI will not fix healthcare on its own.
But it will punish organizations that refuse to evolve.

The real work is not buying tools. It’s hiring differently.

We need roles that didn’t exist before:

  • People who can translate clinical data into plain language

  • Teams that understand both financial impact and patient experience

  • Operators who see communication breakdowns as revenue risk

  • Leaders who treat trust as infrastructure, not a soft skill

That’s why the Workforce Modernization Companion Guide focuses so heavily on hybrid skills, data literacy, and human-centered design.

The future revenue cycle depends on them.

Stop Panicking. Start Designing.

Every major shift in healthcare creates fear. AI is no different.

But panic is the least productive response.

Consumers are telling us they want clarity.
Providers are telling us they need support.
The system is telling us it can’t keep operating the way it has.

AI didn’t create that reality. It surfaced it.

The organizations that win the next decade will be the ones that stop asking, “How do we control this?” and start asking, “How do we design for it?”

That’s what RCM 2030 was about then.
And it’s what this moment is about now.
