
Today’s guest post is by Reed Smith’s Jamie Lanphear. She has long been interested in tech issues, and particularly in how they might intersect with product liability. This post examines product liability implications of using artificial intelligence (“AI”) for medical purposes. It’s a fascinating subject, and as always our guest posters deserve 100% of the credit (and any blame) for their work.

**********

Artificial intelligence is rapidly moving into domains traditionally occupied by physicians, medical devices, and clinical software. That shift raises a familiar question for product liability lawyers: when new technology begins performing functions that look like medical judgment, how will existing tort doctrines respond?

Earlier this year, a major AI company introduced a new AI health feature designed to review users’ medical records and provide personalized health guidance. Users can upload electronic medical records, sync data from other wellness platforms, and ask health-related questions about symptoms, medications, and lab results. The company describes its new feature as informational and emphasizes that it is “not intended for diagnosis or treatment.” The tool is currently available only to a limited group of early users while the company continues to refine the feature’s safety and reliability.

Even with those limitations, the legal implications are easy to see. If a user experiencing symptoms of a serious condition asks the system whether emergency care is necessary—and receives guidance that later proves incomplete or inaccurate—disclaimers alone may not resolve the liability analysis.

For defense lawyers tracking AI litigation trends, this new use of AI presents an emerging test case. The issue is not whether plaintiffs will attempt to frame AI-generated health advice as a product liability problem. They will. That’s what they do. The real question is how courts will apply existing tort doctrines to technology that blurs the line between software tools and medical decision-making.

First Things First

The threshold issue in any product liability claim is whether the thing that allegedly caused harm is a “product” at all. As the Blog has discussed here, here, here, and here, the legal treatment of software and AI is shifting. Traditionally, courts were reluctant to classify software as a product for purposes of strict liability, reasoning that software is intangible, often licensed or available for free rather than sold, and its defects resemble service failures more than manufacturing flaws. But that view is eroding.

Recent decisions reveal three emerging approaches to when software should be deemed a product.

Under the defect-specific approach, courts examine particular functions alleged to be defective and evaluate whether each can be analogized to tangible personal property. In In re: Social Media Adolescent Addiction, for example, the Northern District of California analyzed specific features like parental controls and algorithmic content delivery, concluding that each had a tangible analogue sufficient to allow design defect claims to proceed. In re: Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F. Supp. 3d 809 (N.D. Cal. 2023).

Under the platform-as-a-whole approach, courts ask whether an app or platform in its entirety is analogous to a product, emphasizing the policy rationale for strict liability. In T.V. v. Grindr, the Middle District of Florida held that a dating app was a product because, as with most products and many services (including lawyers), it was “designed, mass-marketed, placed into the global stream of commerce and generated a profit.” T.V. v. Grindr LLC, No. 3:22-cv-864-MMH-PDB, 2024 U.S. Dist. LEXIS 143777 (M.D. Fla. Aug. 13, 2024). Of course, so are books and newspapers.

And under the content-versus-medium distinction, courts permit product liability claims based on design or functionality defects while dismissing those based on expressive content. Cf. Defense Distributed v. Att’y Gen. of New Jersey, 167 F.4th 65, 83-84 (3d Cir. 2026) (“whether code enjoys First Amendment protection requires a fact-based and context-specific analysis”).

These approaches have direct implications for AI health tools. AI operators have asserted in pending litigation that generative AI is a service and/or not a product for purposes of product liability law. That is how amusement parks and taxi dispatchers were treated before they morphed into social media and ride sharing. But if courts continue following the trend seen in the social media and chatbot cases, that defense may not hold.

Meanwhile, legislators are pushing in the same direction. The AI LEAD Act, recently introduced by Senators Durbin and Hawley (though unlikely to be enacted), would define AI systems as “products” and establish a federal product liability framework. And in the EU, the new Product Liability Directive explicitly treats software and AI as products—even when delivered as a service. The global momentum is unmistakable: the product line is moving.

Potential Theories of Liability

If AI health tools are products, what theories of liability might plaintiffs bring when a tool’s advice is alleged to have caused harm? The ongoing wave of litigation over AI chatbots offers a preview of the causes of action plaintiffs are deploying—and the factual predicates they are developing.

Strict liability for design defect. Under the consumer expectations test adopted by many jurisdictions, a product is defectively designed when it fails to perform as safely as an ordinary consumer would expect. Under the risk-utility test, a product is defective when the risk of danger inherent in the design outweighs its benefits. Plaintiffs in the existing chatbot cases allege that these bots cultivate emotional dependency, fail to terminate dangerous conversations, and provide harmful guidance during mental health crises. Generative AI with medical triage functionality could become a focus of similar claims.

A recent study published in Nature Medicine examined a newly introduced AI health tool’s performance in simulated emergency triage scenarios and identified cases where the system recommended routine care for conditions the researchers classified as emergencies. AI triage tools continue to evolve, but plaintiffs will undoubtedly cite early research of this kind to argue that triage logic constitutes a design defect when users delay care in reliance on a system’s recommendations.

Strict liability for failure to warn. Manufacturers have a duty to warn about dangers known or knowable at the time of distribution. A disclaimer that an AI health tool is “not intended for diagnosis or treatment” is a warning, but plaintiffs will inevitably argue it is inadequate because the product’s design invites that use, and therefore disregard of the warning should not be considered a superseding cause. The existing complaints allege that ordinary consumers “could not have foreseen” that generative AI chatbots would cultivate dependency and provide dangerous guidance, “especially given that it was marketed as a product with built-in safeguards.” The same logic could apply to AI health tools: if the product is designed to ingest medical records, analyze symptoms, and suggest whether to seek care, a disclaimer alone may not suffice.

Negligent design. Plaintiffs also bring negligence claims alleging that the defendant failed to exercise reasonable care in designing its AI offering. Negligence is not limited to products. The evidence from the chatbot cases is instructive: moderation technology capable of detecting harmful content and terminating conversations exists and is deployed to protect copyrighted material—plaintiffs argue that the failure to deploy similar safeguards for user safety evidences a breach of duty. For AI health tools, inconsistent crisis safeguards may also become a focus of litigation. Early research on AI health tools has noted variability in how crisis-detection safeguards perform across different scenarios. These are the types of findings plaintiffs will undoubtedly cite to support arguments that safety features were not adequately calibrated before they were deployed.

Unlicensed practice claims. Plaintiffs in the existing chatbot cases have brought claims alleging that providing therapeutic services without adequate licensure violates state licensing statutes. An AI health tool that acts as a medical adviser—capable of interpreting lab results, flagging drug interactions, and recommending urgency levels for care—could invite similar theories in the medical context, particularly if plaintiffs argue that the service constitutes the unlicensed practice of medicine.

The Regulatory Layer

Plaintiffs in medical device cases routinely use regulatory evidence to bolster their claims—arguing that defendants withheld information from FDA, failed to report adverse events, or should have sought premarket clearance when they did not. AI health tools present a novel twist on this theme.

Under the Federal Food, Drug, and Cosmetic Act, a “device” includes software intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. FDA guidance clarifies that certain software functions may be regulated as medical devices, while others fall outside the agency’s oversight. For example, clinical decision support software may fall outside FDA regulation when it supports recommendations for a healthcare professional rather than providing guidance directly to patients. Similarly, FDA’s “general wellness” framework applies to low-risk products that do not reference diagnosis, treatment, or specific diseases.

Whether a particular AI health tool falls within or outside these exemptions is highly fact-specific—and often genuinely unclear. But that ambiguity may become a litigation narrative.  Plaintiffs might argue that an AI health tool should have been submitted to FDA for review and that the developer avoided regulatory scrutiny before launching the product to anyone on the Internet. Whether or not that argument ultimately succeeds, it could resonate with juries.

For defense lawyers, the takeaway is that companies deploying AI health tools should carefully evaluate whether their product’s functionality brings it within FDA’s jurisdiction, and if there is genuine ambiguity, consider engaging with the agency proactively rather than waiting for plaintiffs to make the argument first. And remember Buckman – private enforcement of claimed FDCA violations, such as failure to submit a medical device to the FDA, is not allowed.

Building the Defense

What defenses are available to developers of AI health tools? The pending chatbot litigation offers a preview of the arguments defendants are raising—and how courts may receive them.

Service, not product. A threshold defense is that AI health tools are more analogous to the provision of informational or advisory services than to the sale of a tangible product. Courts have long distinguished between products—subject to strict liability—and services, which typically sound in negligence. An AI system that reviews medical records, analyzes symptoms, and offers guidance about whether to seek care arguably performs a function closer to a triage or health advisory service, or even a healthcare provider, than to a manufactured device placed into the stream of commerce. Framed that way, the alleged defect is not a flaw in a product but an alleged failure in the provision of information or judgment, making strict liability an awkward fit at best.

Disclaimer and contract. Terms of use for AI services typically state that users should not rely on outputs as a substitute for professional advice and may include limitation-of-liability and “as-is” warranty disclaimers. As the Blog recently pointed out, the FDA’s newly revamped online adverse event database requires users to execute a signed disclaimer before using it. Similar requirements, including arbitration agreements, could be used by medical AI providers, if they are not seen as too much of a deterrent to their prospective audience. Such provisions will be central to any defense, but their effectiveness may depend on how the AI responds when users ask directly whether it is providing medical advice—if in-session responses are inconsistent with the terms, plaintiffs may argue the disclaimer was effectively disclaimed. Plaintiffs have also attacked the enforceability of such terms by alleging that sign-up processes use “dark patterns” that prevent meaningful consent. In the U.S., litigation over clickwrap and browsewrap agreements can offer some guidance. In Europe, the EU’s new Product Liability Directive expressly bars contractual waivers of product liability.

Section 230. Section 230 of the Communications Decency Act shields providers from liability for third-party content. But the statute is designed to protect platforms from being treated as the publisher of user-generated speech; it has historically not shielded claims based on a platform’s own design choices or information. AI-generated outputs are produced by the model itself rather than supplied by third parties, so the defense may have limited application in the AI health context.

State of the art. Defendants may assert that their methods, standards, and techniques complied with the generally recognized state of the art—pointing to safety processes, red-teaming, and expert advisory councils. Compliance with industry standards is evidence of non-defectiveness, even if not dispositive.

Lack of causation. Causation may be the strongest defense available. Defendants can point to users’ pre-existing conditions, their use of other information sources, and their failure to seek professional care. In the health context, causation will be fact intensive. Additionally, because AI outputs depend heavily on user inputs—prompts and contextual information—defendants may also argue that user conduct contributed to the outcome.

Misuse and user conduct. Usage policies typically prohibit certain uses and warn users not to rely on AI as a substitute for professional advice. But courts have been skeptical of misuse defenses where the product’s design is viewed as inviting the reliance users are warned against.

Where the Law May Be Headed

AI health tools raise many of the same liability questions already emerging in chatbot litigation, while adding the further complexity of medical decision-making. Recent AI health tools offer a useful case study, but the questions they raise will recur across the industry as more companies develop AI-powered symptom checkers, triage tools, and wellness assistants.

Are these products? Are they medical devices? Are they services? What duties do developers owe to users who rely on them?

The answers are still being written. Courts are increasingly willing to treat software and AI as products for purposes of strict liability. Legislators—in the EU and now in Congress—are pushing in the same direction. And FDA’s regulatory perimeter may not remain static as consumer-facing AI health tools proliferate.

For defense lawyers, the existing chatbot litigation is worth watching closely. The theories plaintiffs are deploying will translate readily to medical contexts. The defenses being raised will be tested. And the outcomes will shape the landscape for years to come.