
Feature

The High-Stakes Healthcare AI Battles To Watch In 2026

By Hannah Albarazi

(January 2, 2026, 12:03 PM EST) -- Courts across the country are set to hear a wave of litigation in the coming year that will begin to draw the legal boundaries around artificial intelligence in healthcare and the life sciences.

From insurance companies using cost-saving algorithms to help make coverage decisions to allegations that AI chatbots encouraged suicides, some of the cases on this year's dockets will attempt to determine who is responsible when AI tools cause life-altering harm.

As lawsuits mount, novel questions are surfacing: Who's liable for harm caused by a generative AI tool's output? Does an AI-powered diagnostic tool count as a patentable invention? What laws govern a doctor's use of ambient AI note-taking tools during an exam?

Here, Law360 spoke with legal experts about the high-stakes AI litigation set to unfold in 2026.

A "Bellwether" for AI Chatbots

Some of the most emotionally charged cases of 2026 will focus on wrongful death claims against tech companies and their generative AI developers.

In August 2025, Matthew and Maria Raine sued OpenAI alleging that the company's ChatGPT tool encouraged their 16-year-old son, Adam Raine, to take his own life. The suit claims the bot isolated him from his family, encouraged self-harm and provided detailed suicide instructions.

OpenAI has denied responsibility for the death, citing its terms of service, which prohibit using the tool for self-harm.

The case — which is expected to reach trial in 2026 — could provide greater clarity on whether AI models are subject to strict liability laws.

Character Technologies Inc. — the creator of the generative AI chat platform Character.AI, which has ties to Google — has likewise been accused in multiple lawsuits of failing to put up guardrails for children, leading to serious harm.

While OpenAI subsequently launched ChatGPT parental controls and Character.AI placed a restriction on minors' access to its chat platform, Eric Fish, a partner at Hooper Lundy & Bookman PC, told Law360 that the high-profile nature of these cases may accelerate legislation.

These cases "may serve as a bellwether to test whether products liability provides a conceptual framework capable of responding to the learning and iterative aspects of these technologies," he said.

The cases are Matthew Raine et al. v. OpenAI et al., case number CGC25628528, in the Superior Court of the State of California, County of San Francisco; E.S. et al. v. Character Technologies Inc. et al. and Montoya et al. v. Character Technologies Inc. et al., case numbers 1:25-cv-02906 and 1:25-cv-02907, in the U.S. District Court for the District of Colorado; and P.J. v. Character Technologies Inc. et al., case number 1:25-cv-01295, in the U.S. District Court for the Northern District of New York.

Algorithmic Denial Battles

Health insurance giants like UnitedHealth, Humana and Cigna are facing a battery of putative class actions alleging that they relied on algorithmic tools to improperly deny coverage and cut costs, harming patients in the process.

Plaintiffs claim UnitedHealth and Humana deployed the nH Predict AI model to override doctors' recommendations, cutting off Medicare Advantage patients despite a known high rate of error.

In 2025, federal judges in Minnesota and Kentucky allowed plaintiffs' contract-based claims to proceed, rejecting the insurers' attempts to dismiss them on the grounds of federal preemption.

Lauren Carboni and Jason Mehta, co-chairs of Foley & Lardner LLP's healthcare litigation team, said these decisions "carved out a path for contract-based challenges to AI-driven coverage decisions," opening the door for similar suits against other Medicare Advantage plans.

Beyond that, Carboni and Mehta think these cases "will likely shape insurer policies and contracting language regarding use of AI in claims processing" and may impact federal regulatory guidance concerning deployment of AI in utilization management.

Cigna, meanwhile, is facing allegations from beneficiaries of employer-sponsored plans that it relied on an automated tool — known as the PxDx system — to process claims without clinical personnel reviewing the member's clinical information. That led to the wrongful denial of coverage for medically necessary care, according to the complaint.

Plaintiffs assert claims under the Employee Retirement Income Security Act of 1974 and California state law. Cigna denies that any of its benefits determinations for plaintiffs' claims violated ERISA and maintains that it had discretionary authority in interpreting the plan to use the PxDx tool.

A California federal judge, however, ruled that Cigna's use of an algorithm to automate medical necessity decisions — with a medical director merely pushing the button — conflicted with the plan's plain language and constituted an abuse of discretion.

Crucially, the court permitted state-level Unfair Competition Law claims to proceed, ruling they were not preempted by ERISA. The case remains in discovery.

Looking ahead, courts will have an opportunity to apply the specific facts of a case to the existing legal frameworks, such as laws requiring individual physician review for coverage denials, Briana L. Black, a partner at Hogan Lovells, told Law360.

These cases are "potentially important early indicators of how courts will evaluate the development and operation of algorithms used in various aspects of healthcare delivery and financing," she said.

The cases are Kisting-Leung et al. v. Cigna Corp. et al., case number 2:23-cv-01477, in the U.S. District Court for the Eastern District of California; Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al., case number 0:23-cv-03514, in the U.S. District Court for the District of Minnesota; and Barrows et al. v. Humana Inc., case number 3:23-cv-00654, in the U.S. District Court for the Western District of Kentucky.

The AI Patent Wars

In the fast-growing field of AI-enabled "precision medicine," the question isn't liability but ownership.

On Jan. 13, 2026, a California federal court is scheduled to hear a pivotal motion in a patent dispute between AI-powered diagnostics company Tempus AI Inc. and medical test maker Guardant Health Inc.

Tempus AI claims Guardant infringed its "groundbreaking" AI-enabled diagnostic patents for technology that enables access to patient records, such as lab test results, that allow medical professionals to better treat cancer.

Guardant maintains that the patents are directed to patent-ineligible abstract ideas.

"This case is especially significant because it is among the first in which a court is being asked to decide whether patents claiming AI-enabled diagnostic technologies are directed to patent-eligible subject matter," Stuart Knight, an IP attorney at Foley Hoag LLP, told Law360.

This case will be "an early indicator" of what is required for AI-enabled diagnostic technologies to meet the requirements for patent eligibility, Knight said. If the court sides with Guardant, it could chill investment in AI-enabled "precision medicine."

Knight noted it is also a timely case in light of November guidance from the U.S. Patent and Trademark Office that treats AI systems as tools analogous to lab equipment, rather than joint inventors.

The case is Tempus AI Inc. v. Guardant Health Inc., case number 3:25-cv-06622, in the U.S. District Court for the Northern District of California.

Ambient AI and Consent

One of the most common ways that patients are likely to encounter AI in 2026 is through ambient note-taking tools, which listen to doctor-patient conversations to automatically generate clinical notes.

The healthcare industry has swiftly adopted the tools, which proponents say conveniently allow providers to focus on patients while making vital clinical records. However, the use of these tools has also birthed a new swell of privacy litigation.

In California, nonprofit hospital group Sharp Healthcare is facing a proposed class action over its alleged use of an ambient AI tool without first obtaining explicit patient consent. The hospital group declined to comment on the pending litigation.

A similar suit in Illinois accuses Heartland Dental LLC, a major dental support organization, and the internet-based communication company RingCentral of using an ambient AI note-taking tool to record and analyze patient calls without disclosure.

Heartland and RingCentral have urged the court to dismiss the claims, arguing that the plaintiffs have not plausibly alleged a Federal Wiretap Act violation or identified a concrete harm.

Jennifer Kreick, co-chair of Haynes Boone's healthcare and life sciences practice group, told Law360 that the outcome of this litigation could inform best practices for obtaining and documenting patient consent around ambient AI usage and provide clarity on potential liability.

It could also lead to greater obligations imposed on AI vendors related to consent and opt-outs, as well as new laws around the retention and deletion of data.

"[T]he requirements for how to obtain appropriate patient consent [for] use of these tools are not well-defined," she said.

The cases are Saucedo v. Sharp Healthcare, case number 25CU063632C, in California Superior Court, San Diego County; and Lisota v. Heartland Dental LLC, case number 1:25-cv-07518, in the U.S. District Court for the Northern District of Illinois.

--Editing by Amy French.


Case Information

Case Title: Kisting-Leung et al. v. Cigna Corp. et al.
Case Number: 2:23-cv-01477
Court: California Eastern
Nature of Suit: Contract: Other
Judge: Dale A. Drozd
Date Filed: July 24, 2023

Case Title: Estate of Gene B. Lokken et al. v. UnitedHealth Group Inc. et al.
Case Number: 0:23-cv-03514
Court: Minnesota
Nature of Suit: Insurance
Judge: John R. Tunheim
Date Filed: November 14, 2023

Case Title: Barrows et al. v. Humana Inc.
Case Number: 3:23-cv-00654
Court: Kentucky Western
Nature of Suit: Insurance
Judge: Rebecca Grady Jennings
Date Filed: December 12, 2023

Case Title: Lisota v. Heartland Dental LLC et al.
Case Number: 1:25-cv-07518
Court: Illinois Northern
Nature of Suit: Other Statutory Actions
Judge: Lindsay C. Jenkins
Date Filed: July 3, 2025

Case Title: Tempus AI Inc. v. Guardant Health Inc.
Case Number: 3:25-cv-06622
Court: California Northern
Nature of Suit: Patent
Judge: Trina L. Thompson
Date Filed: August 6, 2025

Case Title: E.S. et al. v. Character Technologies Inc. et al.
Case Number: 1:25-cv-02906
Court: Colorado
Nature of Suit: Personal Inj. Prod. Liability
Judge: S. Kato Crews
Date Filed: September 15, 2025

Case Title: Montoya et al. v. Character Technologies Inc. et al.
Case Number: 1:25-cv-02907
Court: Colorado
Nature of Suit: Personal Inj. Prod. Liability
Judge: Gordon P. Gallagher
Date Filed: September 15, 2025

Case Title: P.J. v. Character Technologies Inc. et al.
Case Number: 1:25-cv-01295
Court: New York Northern
Nature of Suit: Personal Inj. Prod. Liability
Judge: Mae A. D'Agostino
Date Filed: September 16, 2025
