Federal Court Orders Broad Discovery Against UHC in AI Coverage Denial Lawsuit

In a recent ruling from the District of Minnesota, a federal magistrate judge directed UnitedHealthcare (UHC) to turn over an expansive set of documents in Estate of Lokken v. UnitedHealth Group, Inc., a class action alleging that the health insurer used an artificial intelligence (AI) algorithm to improperly withhold post-acute care coverage from Medicare Advantage enrollees.

The March 9 order granted broad discovery into the insurer’s AI-driven claims processes. This ruling is an important indicator that litigation challenging AI-based health insurer coverage denials is gaining traction and that the inner workings of these algorithmic tools may be subject to court-ordered disclosure.

Background

Family members of two deceased Medicare Advantage beneficiaries initiated the action in November 2023. At its core, the lawsuit — which we have previously written about — challenges UHC's use of nH Predict, a predictive AI tool developed by its Optum subsidiary naviHealth (now Home & Community Care). The plaintiffs allege that UHC deployed nH Predict to supplant physicians' clinical judgments and prematurely deny coverage for medically necessary care for Medicare Advantage beneficiaries. Optum has disputed these allegations, maintaining that nH Predict is a care-support tool, not a claims adjudication tool, and that medical necessity determinations are made by qualified physicians adhering to the Centers for Medicare & Medicaid Services' guidance.

In an earlier ruling, the court granted in part and denied in part UHC's motion to dismiss, allowing the plaintiffs' claims for breach of contract and breach of the implied covenant of good faith and fair dealing to proceed (other claims were dismissed as preempted under the Medicare Advantage statutory scheme). The court found that the surviving claims arose from UHC's coverage terms and turned on whether UHC honored its policies' promises that coverage decisions would be made by clinical staff and physicians, notwithstanding its alleged use of AI. Following that ruling, UHC sought to bifurcate discovery, proposing to limit the initial phase to whether nH Predict, rather than physicians, was actually used to decide the plaintiffs' claims. The court denied that request, concluding that full class-wide discovery should proceed from the outset given the "enmeshed factual and legal issues" at stake.

The Discovery Order

After the court denied UHC's bifurcation request, the plaintiffs moved to compel UHC to produce the requested documents and information. The court largely sided with the plaintiffs. Under the order, UHC must produce documents covering a wide array of subjects, with some categories reaching as far back as January 2017. These include the company's policies and procedures on post-acute care claims and employee training; documents analyzing or discussing nH Predict; business records concerning the naviHealth acquisition as it relates to post-acute care claims and projected cost savings for those claims; and documents related to government investigations into UHC's use of AI in claims assessment and adjudication. UHC must also produce employee performance reviews and compensation data for care coordinators and medical directors working on post-acute claims; materials pertaining to its internal AI review board, including board member identities; and identifying and contact information for the medical directors and care coordinators who participated in denying coverage to 300 putative class members.

Notably, the court rejected UHC’s attempts to narrow these requests — efforts the court observed raised “[m]any of the same issues” it had already resolved in denying UHC’s prior request to bifurcate discovery. Among other things, UHC contended that records from before July 2019, the date nH Predict first went into use, lacked relevance. The court disagreed, finding that the earlier records could serve as circumstantial evidence of breach of contract.

The court pointed to the US Senate Permanent Subcommittee on Investigations’ October 2024 report, Refusal of Recovery: How Medicare Advantage Insurers Have Denied Patients Access to Post-Acute Care, which found that UHC’s denial rate for post-acute care claims more than doubled after it started using naviHealth and nH Predict in 2019. That finding, the court reasoned, made earlier records relevant to establishing a baseline, including how UHC’s policies, practices, employee incentives, and performance evaluations changed before and after the AI tool’s deployment.

Key Takeaways

This ruling carries several important implications for health care providers.

First, the scope of the court’s order underscores the types of information that may become relevant in litigation over AI-driven coverage denials — information that has largely remained opaque to providers on the receiving end of these decisions. Although discovery materials will not be publicly available, the litigation itself may yield insights into how tools like nH Predict interact with or displace the clinical judgments of treating physicians when insurers make coverage determinations. For providers who have seen an uptick in denials for skilled nursing and other post-acute services, this case may offer a framework for understanding whether and how AI algorithms have contributed to that trend.

Second, the court’s refusal to limit discovery to the period after nH Predict’s deployment reinforces that the full arc of an insurer’s shift toward AI-driven decision making is fair game in litigation. Providers should take note: The court found it relevant to examine how denial rates, employee incentive structures, and internal policies changed before and after AI adoption. This type of before-and-after analysis could prove valuable for providers building their own records when challenging coverage denials on appeal or in litigation.

Third, this ruling is part of a broader wave of legal, regulatory, and legislative activity aimed at curbing AI-driven coverage denials — from state laws like California’s Physicians Make Decisions Act to the Senate investigation cited by the court. Providers should be aware that the landscape is shifting when it comes to demanding transparency about how insurers use AI in coverage decisions and should consider whether denials they receive may be the product of algorithmic tools rather than individualized clinical review.

Finally, this case involves AI technology that predates the agentic AI boom and does not involve the next-generation AI platforms, such as ChatGPT and Claude, that are common today. This litigation may serve as the "canary in the coal mine" for the wave of disputes on the horizon as health insurers incorporate agentic AI into their business processes, coverage decisions, and payment practices.

The ArentFox Schiff Managed Care, Payer Disputes & Reimbursement team will continue to track this case and the evolving intersection of AI and health coverage. If you have any questions, please contact the authors of this alert. 
