The AI ROI Gap: Why Health Systems Are Finally Getting It Right (And Where They're Still Getting It Wrong)

A few weeks ago, a friend of mine at the market research firm Eliciting Insights released the results of their second annual AI adoption survey, fielded across 120 health systems in February 2026. I've been thinking about the data ever since, because it tells two very different stories depending on which number you look at first.

Here's the one that jumps off the screen: 75% of U.S. health systems are now using at least one AI solution, up from 59% last year. That's a 16-percentage-point jump, a 27% relative increase, in twelve months. And 50% of health systems are running three or more AI applications simultaneously.

That's the good news. Here's the number that deserves more attention: among the organizations that have actually implemented AI and tried to quantify the return, more than half report at least a 2x ROI.

More than half. But not all.

Which means a meaningful chunk of health systems have deployed AI, are running it, and either can't measure what it's doing for them, or they've measured it and the news isn't great. That gap between the organizations capturing real returns and the ones collecting expensive shelfware is what I want to talk about today.

Why ROI in RCM AI Is Actually the Easy Part

I want to start here because there's a tendency in healthcare to treat AI ROI measurement as inherently complicated. It doesn't have to be, at least not in revenue cycle.

One of the CIOs quoted in a Becker's piece this week said it plainly: when it comes to revenue cycle AI, the ROI is "pretty cut and dry." You either prevented a denial that would have cost you money to work, or you didn't. You either reduced the time it took to post payments, or you didn't. You either collected a patient balance that would have aged into bad debt, or you didn't.

The math is available. Days in A/R, first-pass acceptance rates, cost-to-collect, denial rate by category, self-pay yield. These are metrics most revenue cycle teams already track. What AI does is move them. If you're not measuring the movement, that's a choice, not a constraint.
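To make "the math is available" concrete, here is a minimal sketch of the kind of back-of-the-envelope ROI calculation a revenue cycle team can run. All of the numbers below (claim volume, denial rates, rework cost, tool cost) are hypothetical placeholders, not figures from the survey; substitute your own.

```python
def denial_prevention_roi(claims_per_month: int,
                          baseline_denial_rate: float,
                          post_ai_denial_rate: float,
                          avg_rework_cost: float,
                          monthly_tool_cost: float) -> float:
    """Estimate monthly ROI as (rework dollars avoided) / (tool cost)."""
    prevented_denials = claims_per_month * (baseline_denial_rate - post_ai_denial_rate)
    savings = prevented_denials * avg_rework_cost
    return savings / monthly_tool_cost

# Illustrative only: 20,000 claims/month, denial rate drops from 10% to 8%,
# $118 average cost to rework a denied claim, $15,000/month tool cost.
roi = denial_prevention_roi(20_000, 0.10, 0.08, 118.0, 15_000.0)
# 400 prevented denials * $118 = $47,200 in avoided rework vs. $15,000 in cost
```

The point isn't the specific function; it's that every input is a metric your team already tracks, so the output is auditable rather than taken on faith.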

What makes this relevant right now is that the Eliciting Insights data shows exactly which AI tools health systems are deploying and which are growing fastest, and the pattern is telling. The highest-growth categories are AI-prepopulated technical appeals (up 50% year over year), draft replies to patient texts (up 80%), and AI-based CDI (up 59%). All three are areas where you can directly connect the tool to a financial outcome. Appeals sent. Texts responded to. Codes improved. Claims paid.

That's not a coincidence. Organizations are gravitating toward AI where they can see what it's doing. And that's actually a smart instinct.

The Measurement Problem Hiding in Plain Sight

Here's where it gets interesting, though. Knowing that ROI is theoretically measurable in RCM and actually measuring it consistently are two different things.

In my work with health systems over the years, I've seen a pattern that the Eliciting Insights data quietly confirms: adoption is outpacing accountability. You can have 75% of health systems using AI and still have a significant portion of them unable to tell you with confidence what their AI investments are actually returning.

Why? A few reasons that show up again and again:

  • The tool gets purchased by IT, but the outcome is owned by nobody. When AI projects originate in innovation labs or vendor demos and get handed off to operations teams that weren't part of the design conversation, measurement is an afterthought. The revenue cycle director finds out about the deployment at go-live, not at the business case stage. That's a setup for shelfware.

  • Baseline data wasn't captured before implementation. This sounds basic, but it's remarkably common. If you didn't document your denial rate, your appeals acceptance rate, or your manual touch rate before you turned the tool on, you have nothing to compare against. You can tell me the tool is working. You can't tell me by how much.

  • The ROI model was built on vendor projections, not your data. Vendor-supplied ROI calculators are useful for building a business case. They are not a substitute for measuring actual performance in your environment, with your payer mix, your workflow, and your team. A denial prediction tool that shows a 20% reduction in coding-related denials at Summit Health may perform differently at your critical access hospital in rural Iowa. Measure it yourself.
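The baseline problem above has a simple practical fix: snapshot your key metrics, on written definitions, before go-live, then recompute them on the same definitions afterward. A minimal sketch, with entirely hypothetical metric values:

```python
from datetime import date

# Hypothetical baseline snapshot: capture these BEFORE the tool goes live,
# and record the definitions (numerator/denominator) alongside the values.
BASELINE = {
    "denial_rate": 0.102,             # denied claims / submitted claims
    "appeals_acceptance_rate": 0.610, # overturned appeals / appeals filed
    "manual_touch_rate": 0.340,       # claims touched by a human / total claims
    "captured_on": date(2026, 1, 15),
}

def movement(baseline: dict, current: dict) -> dict:
    """Percentage-point change for each numeric metric, post vs. baseline."""
    return {k: round(current[k] - baseline[k], 4)
            for k in baseline if isinstance(baseline[k], float)}

# Hypothetical post-deployment readings, computed on identical definitions.
post = {"denial_rate": 0.083,
        "appeals_acceptance_rate": 0.660,
        "manual_touch_rate": 0.210}
deltas = movement(BASELINE, post)  # e.g. denial_rate moved by -0.019
```

Nothing here is sophisticated, and that's the point: without the first dictionary, the second one proves nothing.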

Where the Real Gap Is Opening Up

The Eliciting Insights data shows something that I think most healthcare finance leaders haven't fully internalized yet: payer-side AI adoption is accelerating at the same time provider-side adoption is just finding its footing.

I've written about this in RCM 2030, and I'll say it again here because the data keeps making the case: 43% of large health insurers already use or plan to use AI for claim adjudication. They are reviewing your submissions in seconds with algorithms trained on years of claims data. They know what documentation patterns lead to denials before you do.

When you set the Eliciting Insights numbers (36% adoption for AI coding solutions, 25% for denial prediction) against that backdrop, the asymmetry becomes obvious. More than half of health systems still don't have AI-powered denial prediction running before claims leave their system. That means the payer's algorithm is reviewing the claim before your team has applied the same kind of analytical rigor to it.

That is a structural disadvantage. And it compounds over time, because payer algorithms keep learning while manual processes stay static.

The organizations that are reporting 2x+ ROI on their AI investments are the ones that have closed or are actively closing this gap. They're deploying upstream, catching problems before submission rather than managing them in the appeals queue. The ones that haven't made that shift are spending more on rework every quarter.

What Getting It Right Actually Looks Like

I don't want this to be purely diagnostic, so let me get specific about what the high-ROI organizations have in common, based on both the survey data and what I've seen working with health systems directly.

They measure denial prevention, not just denial rates. There's a difference between tracking your overall denial rate and tracking how many denials you prevented before the claim left the building. The latter requires upstream tools. It also requires someone who owns that metric and reports on it to finance leadership. If your CDI and coding AI isn't connected to a denial prevention KPI, you're measuring the wrong thing.

They treat patient financial engagement as a collection strategy, not a service nicety. The 80% growth in AI-drafted replies to patient texts isn't happening because health systems suddenly care more about communication. It's happening because they've connected patient engagement touchpoints to self-pay yield. When a patient gets a clear, fast answer about their bill — at 9 PM on a Tuesday, without calling a billing office — the probability of payment goes up. That's not soft ROI. That's dollars.

They've built governance before they scaled. The organizations that are successfully running three, four, five AI solutions simultaneously aren't doing it through chaos. They have a defined framework for evaluating tools, connecting them to operational outcomes, and monitoring performance after go-live. Sutter Health's Chief AI Officer put it well this week: organizations with mature AI governance are more than twice as likely to successfully scale AI and realize its value. That's not anecdotal. That's a capability multiplier.

They've stopped treating every AI tool as a separate decision. One of the clearest signals from CIOs this week was the move away from point solutions and toward platform alignment. When you're running a dozen AI tools that don't talk to each other, you're not building a revenue cycle. You're building a Frankenstein tech stack that creates integration debt, cybersecurity surface area, and support overhead. The question for 2026 isn't just "does this tool work?" It's "does this tool fit a platform strategy I can sustain through 2030?"

The Number That Should Be Bothering You

I want to come back to something I mentioned at the top, because I think it's the most important takeaway from the Eliciting Insights data for revenue cycle leaders specifically.

If more than half of implementers report 2x+ ROI, that means a meaningful portion either can't quantify their returns, or their returns aren't there. In an environment where the median hospital operating margin closed out 2025 at 1.3%, deploying AI that you can't measure is a luxury you can't afford.

The good news is that this is a solvable problem. The tools that drive real returns in revenue cycle are known. Denial prediction upstream of submission. AI-supported coding and CDI. Automated payment posting. Patient financial segmentation. Cash forecasting tied to payer behavior analytics. These aren't emerging capabilities. They're available today, deployed at scale in high-performing organizations, and producing measurable results.

The question is whether your organization is measuring them honestly.

Want to Know Where You Stand?

I built a free RCM AI Readiness Scorecard for exactly this conversation: 24 diagnostic questions across the five areas where AI either protects your margin or fails to. Each question tells you not just what to check, but what your answer reveals about your financial risk exposure.

Download the RCM AI Readiness Scorecard → aprilwilson.net

It takes about 10 minutes and shows you which stage your organization is in, with a clear sense of where to focus next.

And if you want this kind of analysis every week (what all the buzz means and what you should do), I'd love to have you as a subscriber to my LinkedIn newsletter, RCM 2030 Insider. Subscribe here 
