The AI Lawsuits Are Here. What Should RCM Leaders Do Before the Next One Names Their Hospital?
The short answer is this: if your revenue cycle uses AI tools to make consequential decisions about patients (prior authorization, denial routing, charity care screening, propensity-to-pay scoring), and you cannot document who reviews those decisions before they are acted on, you have the same governance gap that is currently being litigated against major payers and technology companies. The time to build that governance infrastructure is before the discovery process starts, not after.
The lawyers have arrived.
Pennsylvania just became the first state in the country to sue an AI company for impersonating a licensed medical professional. Its target was Character.AI, whose chatbot "Emilie" told a state investigator it was a licensed psychiatrist, offered to schedule a mental health assessment, said it could prescribe medication, and provided a fake Pennsylvania medical license number. Governor Josh Shapiro said it plainly: "We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional."
That is an AI chatbot lawsuit. But do not get comfortable thinking it has nothing to do with revenue cycle.
Meanwhile, the class action lawsuit against UnitedHealthcare over its "nH Predict" AI algorithm is working its way through the courts. Plaintiffs allege the algorithm has a 90% error rate and was used to override physician assessments and deny post-acute care coverage without adequate human review. UnitedHealthcare has faced congressional investigations, class action suits, and a level of public scrutiny that contributed to the environment in which its CEO was murdered in December. The company responded by announcing it will eliminate prior authorization requirements for an additional 30% of services by the end of 2026.
That is not a goodwill gesture. That is a company responding to legal pressure.
The AI lawsuits are here. And the RCM community is not ready for what comes next.
Why do AI lawsuits against payers and technology companies matter for hospital revenue cycle leaders?
The instinct when you read about the Character.AI lawsuit or the UnitedHealthcare prior auth litigation is to think: that is a payer problem, or that is a consumer tech problem. Not my department.
That instinct is wrong.
Every argument being made in these lawsuits maps directly onto practices that already exist, or are being rapidly deployed, inside hospital revenue cycle operations.
The UnitedHealthcare case is built on one central claim: an AI algorithm was making consequential decisions about patient care without adequate human oversight, with an error rate so high that it constituted systematic wrongdoing rather than occasional mistakes. Plaintiffs did not argue that AI should not be used. They argued that the AI was used irresponsibly, at scale, without the governance structures to catch and correct its failures.
Now ask yourself these questions about your own revenue cycle. When your AI-assisted prior authorization tool denies a request, who reviews that decision before it goes out? When your coding AI flags a claim, who validates the output before submission? When your denial prediction model routes a claim to a low-priority queue, who audits whether that routing logic is working correctly?
If the answer to any of those questions is "nobody" or "we check it periodically" or "the vendor manages that," you have the same governance gap that UnitedHealthcare is currently defending in court. The only difference is nobody has sued you yet.
What are the three legal theories that will shape AI liability in healthcare?
Understanding where the lawsuits are going requires understanding which legal arguments are gaining traction. There are three worth watching closely.
Unlicensed practice of medicine. The Pennsylvania case against Character.AI is built on this theory. An AI system that holds itself out as a licensed professional, provides clinical assessments, or makes treatment recommendations without appropriate supervision may be engaged in the unlicensed practice of medicine. The RCM implication is narrower but real: an AI tool that makes determinations about medical necessity, clinical appropriateness, or level of care is touching clinical territory. If that tool operates without physician oversight and its outputs are used to deny, reduce, or delay care, the argument writes itself.
Algorithmic negligence. The UnitedHealthcare litigation is the clearest example of this theory in motion. The argument is not that AI should be banned from utilization management. The argument is that deploying an algorithm you know has a high error rate, at scale, without human review, constitutes negligence when patients are harmed as a result. For RCM leaders, this theory applies to any AI tool making consequential decisions about patient financial obligations: charity care screening, payment plan eligibility, propensity-to-pay scoring. If those models have systematic errors and your organization acts on their outputs without verification, the negligence argument is available to a plaintiff's attorney.
Black box liability. HTI-2 already opened the door on this one by requiring transparency around decision support tools and algorithms. Project 2025 doubles down on fraud, waste, and abuse, which means opaque AI is a regulatory target. I wrote in the RCM 2030 Companion Guide that you should expect a formal rule by the end of the decade requiring vendors to prove how their models make decisions, not just what outcomes they produce. The lawsuits are accelerating that timeline. A vendor that cannot explain why its algorithm produced a specific output is a vendor that creates liability for the hospital that deployed it.
What do hospital RCM vendor contracts fail to protect you from when AI is involved?
I have reviewed a lot of RCM vendor contracts in my career. Almost none of them adequately address what happens when the vendor's AI produces a bad outcome.
Most vendor agreements contain some version of this: the vendor is not responsible for outcomes resulting from the client's use of the product. In plain English: if our AI denies a claim incorrectly and a patient is harmed, that is your problem, not ours.
That language has not yet been tested at scale in court in the healthcare AI context. It is about to be.
The argument that will emerge on the plaintiff's side is straightforward: the hospital deployed the tool, the hospital profited from the efficiency it provided, and the hospital is responsible for ensuring the tool operates safely. The vendor's contractual disclaimer does not relieve the hospital of its duty to the patient. Every Medicare condition of participation, every state licensing requirement, every accreditation standard flows to the hospital, not the vendor.
I said in RCM 2030 that by 2030, business associate agreements would demand live compliance feeds, not annual letters. That contracts would include penalties for missed deadlines. That vendors who cannot meet transparency requirements would lose business. What I did not say loudly enough is that the enforcement mechanism for all of that is not just regulatory. It is increasingly judicial.
What should RCM leaders and CFOs do right now to reduce AI liability exposure?
Audit every AI tool for human oversight. For each AI or algorithm-assisted workflow in your revenue cycle, document who reviews its output before a consequential decision is made. Prior auth, denial routing, charity care screening, propensity-to-pay scoring, coding assistance. If the answer is "the system acts on the output automatically," that is an exposure that needs to be fixed before a plaintiff's lawyer finds it.
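To make the exercise concrete, here is a minimal sketch of what that oversight inventory could look like. Everything in it is hypothetical: the workflow names, vendor placeholders, and field names are illustrative, not a reference to any specific product or system.

```python
# Hypothetical sketch of an AI oversight inventory: one record per
# AI-assisted revenue cycle workflow, flagging any workflow whose output
# is acted on without a named, documented human reviewer.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIWorkflow:
    name: str                      # e.g. "prior authorization", "denial routing"
    vendor: str                    # placeholder vendor name
    decision: str                  # the consequential decision the output drives
    human_reviewer: Optional[str]  # role that reviews output before action; None = fully automated
    review_documented: bool        # is the review step written down and auditable?

def oversight_gaps(inventory: list[AIWorkflow]) -> list[AIWorkflow]:
    """Workflows where AI output is acted on with no documented human review."""
    return [w for w in inventory if w.human_reviewer is None or not w.review_documented]

inventory = [
    AIWorkflow("prior authorization", "Vendor A", "approve or deny request", "UM nurse", True),
    AIWorkflow("denial routing", "Vendor B", "assign work queue priority", None, False),
    AIWorkflow("charity care screening", "Vendor C", "presumptive eligibility", "financial counselor", False),
]

for gap in oversight_gaps(inventory):
    print(f"EXPOSURE: {gap.name} ({gap.vendor}) acts on AI output without documented human review")
```

The point is not the code. The point is that the inventory exists, stays current, and produces an unambiguous list of exposures that someone owns.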
Demand explainability from your vendors. If your vendor cannot tell you why their model produced a specific output for a specific claim, you cannot defend that output to a regulator or a court. Start requiring explainability in writing. Add it to your next contract renewal as a non-negotiable SLA requirement. Vendors who refuse to provide it are telling you something important about their liability posture.
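As a sketch of what "explainability in writing" could translate to at the per-decision level, the record below shows the minimum fields a vendor might be contractually required to produce for any specific output. The field names are assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical minimum explainability record a vendor could be required to
# produce for any specific algorithmic decision. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionExplanation:
    claim_id: str
    model_version: str
    decision: str                                          # e.g. "route to low-priority queue"
    decided_at: datetime
    top_factors: list[str] = field(default_factory=list)   # ranked, human-readable reasons
    confidence: float = 0.0                                 # the model's own score for this decision

def is_defensible(expl: DecisionExplanation) -> bool:
    """An output with no stated reasons or no model version cannot be defended later."""
    return bool(expl.top_factors) and bool(expl.model_version) and expl.confidence > 0.0

example = DecisionExplanation(
    claim_id="CLM-0001",
    model_version="denial-router-2.3",
    decision="route to low-priority queue",
    decided_at=datetime.now(timezone.utc),
    top_factors=["payer historically overturns on appeal", "authorization number missing"],
    confidence=0.71,
)
print(is_defensible(example))
```

If a vendor cannot populate a record like this for an individual claim, they are telling you how a deposition about that claim would go.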
Update your AI governance documentation. The board needs to know what AI tools are making consequential decisions in the revenue cycle, what the error rates are, how errors are caught and corrected, and what human oversight exists. If you cannot answer those questions in a board presentation, you cannot answer them in a deposition.
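One way to give the board real numbers rather than assurances is a periodic audit sample: pull a sample of AI decisions, have a human re-review them, and report the disagreement rate per workflow. A minimal sketch follows, assuming you can export paired AI and human determinations; the workflow names and data shape are illustrative.

```python
# Minimal sketch of an audit-sample error rate per workflow: compare the
# AI's decision against a human re-review on a sampled set of cases.
from collections import defaultdict

def error_rates(samples: list[dict]) -> dict[str, float]:
    """samples: records with 'workflow', 'ai_decision', and 'human_decision' keys."""
    totals: dict[str, int] = defaultdict(int)
    disagreements: dict[str, int] = defaultdict(int)
    for s in samples:
        totals[s["workflow"]] += 1
        if s["ai_decision"] != s["human_decision"]:
            disagreements[s["workflow"]] += 1
    return {w: disagreements[w] / totals[w] for w in totals}

audit_sample = [
    {"workflow": "charity care screening", "ai_decision": "ineligible", "human_decision": "eligible"},
    {"workflow": "charity care screening", "ai_decision": "eligible", "human_decision": "eligible"},
    {"workflow": "denial routing", "ai_decision": "low priority", "human_decision": "high priority"},
    {"workflow": "denial routing", "ai_decision": "low priority", "human_decision": "low priority"},
]

for workflow, rate in error_rates(audit_sample).items():
    print(f"{workflow}: {rate:.0%} disagreement with human review in this sample")
```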
Review your vendor contracts for liability allocation. Pull your current RCM vendor agreements and find the indemnification language. Understand what you are accepting as your responsibility versus what the vendor has disclaimed. If the contract allocates all AI-related liability to the hospital, negotiate. If the vendor refuses to accept any liability for their AI's outputs, that is a business decision you should make with your eyes open.
Train your team on what AI can and cannot do. The Character.AI case exists partly because users did not understand they were talking to a fictional character, not a licensed professional. The UnitedHealthcare case exists partly because physicians and patients did not know an algorithm was making the decisions they thought a human was making. Transparency with your own staff about what AI is doing in your workflow is not just good ethics. It is a defense.
What does the future of AI liability in healthcare look like by 2030?
I said in RCM 2030 that Project 2025 frames deregulation as shifting risk, not removing it. The government will not micromanage how you deploy AI. But when something goes wrong, the liability lands on you, not on Washington and not on your vendor.
The Pennsylvania lawsuit and the UnitedHealthcare litigation are the opening act. What comes next will be provider-side. A hospital whose AI-assisted charity care screening systematically under-identifies eligible patients. A health system whose denial prediction model has been routing legitimate claims to low-priority queues for two years. A revenue cycle vendor whose proprietary algorithm has been making coding decisions that inflate certain DRGs and nobody noticed.
The arguments are already written. The legal theories are already proven in adjacent cases. All that is missing is a plaintiff, a discovery process, and a jury that has been watching the news about AI for three years.
The time to build your governance infrastructure is before that discovery process starts.
Frequently Asked Questions: AI Liability and RCM
What is the UnitedHealthcare AI lawsuit about? The class action lawsuit against UnitedHealthcare alleges that its AI algorithm "nH Predict" was used to deny post-acute care coverage with a 90% error rate, overriding physician assessments without adequate human review. Plaintiffs argue this constitutes systematic negligence rather than occasional mistakes. The case has drawn congressional scrutiny and contributed to UnitedHealthcare's announcement that it will eliminate prior authorization requirements for an additional 30% of services by the end of 2026.
What was the Pennsylvania Character.AI lawsuit about? Pennsylvania sued Character Technologies, the company behind Character.AI, alleging that a chatbot called "Emilie" unlawfully presented itself as a licensed psychiatrist, offered to schedule mental health assessments, claimed it could prescribe medication, and provided a fake Pennsylvania medical license number. The lawsuit was filed under Pennsylvania's Medical Practice Act and is described as the first enforcement action of its kind by a U.S. governor.
Can a hospital be sued for how its RCM vendor's AI performs? Yes. The hospital deploys the tool, benefits from its use, and retains the duty of care to patients under Medicare conditions of participation and state licensing requirements. Vendor contracts that disclaim liability for AI outputs do not transfer the hospital's underlying obligations to patients. If an AI tool makes systematic errors in charity care screening, denial routing, or prior authorization and patients are harmed, the hospital is a viable defendant regardless of what the vendor contract says.
What is algorithmic negligence in healthcare? Algorithmic negligence is a legal theory holding that deploying an AI algorithm with a known or reasonably discoverable high error rate, at scale, without adequate human oversight, constitutes negligence when patients suffer harm as a result. The UnitedHealthcare litigation is the most prominent current example. The theory does not require the AI to be intentionally harmful; it requires that the organization deploying it failed to exercise reasonable care in its governance and oversight.
What does AI governance for revenue cycle actually require? At minimum: documentation of which AI tools are making consequential decisions in the revenue cycle; defined human review processes before those decisions are acted on; vendor contracts that require explainability and audit trails; board-level reporting on AI error rates and oversight structures; and regular audits of whether the AI is producing the outcomes it claims. I cover the full framework in the RCM 2030 Companion Guide series.
What is black box liability in the context of healthcare AI? Black box liability refers to the legal and regulatory exposure created when an AI system cannot explain how it reached a specific decision. HTI-2 already requires transparency around decision support tools. If a vendor's algorithm cannot produce an auditable explanation for a specific output, that vendor creates liability for the hospital that deployed it, because neither the hospital nor its patients can challenge decisions they cannot understand.
What should hospitals require in AI vendor contracts to reduce liability? At minimum: explainability requirements specifying that the vendor must be able to produce a documented explanation for any specific algorithmic decision; audit trail obligations requiring time-stamped records of all AI-generated outputs; indemnification language that does not place all AI-related liability on the hospital; error rate disclosure requirements; human oversight SLAs specifying response times for flagged outputs; and exit provisions that allow the hospital to terminate if the vendor fails compliance or certification requirements.
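As a sketch of what the audit trail obligation could mean in practice: every AI-generated output gets an append-only, time-stamped record the hospital can retrieve independently of the vendor. The format below (JSON Lines) and the field names are assumptions for illustration, not a standard.

```python
# Hypothetical append-only, time-stamped audit trail of AI-generated outputs,
# written as JSON Lines so each record is independently retrievable.
import json
from datetime import datetime, timezone

def log_ai_output(path: str, workflow: str, record_id: str, model_version: str, output: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,            # e.g. "prior authorization"
        "record_id": record_id,          # claim or account identifier
        "model_version": model_version,  # ties the output to a specific model release
        "output": output,                # the decision or recommendation produced
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_output("ai_audit_trail.jsonl", "denial routing", "CLM-0002", "denial-router-2.3", "low priority queue")
```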
Is hospital use of AI in prior authorization subject to the same scrutiny as payer use? Yes. While most current litigation targets payers, the legal theories being established apply equally to any organization using AI to make decisions that affect patient access to care or patient financial obligations. A hospital using AI to screen charity care eligibility, score propensity to pay, or route denials is making consequential decisions with the same legal exposure profile as a payer using AI for utilization management. The direction of travel is clear; hospitals are next.
Please note that I’m not a lawyer and none of this should be taken as legal advice.

