Not quite three years ago, we co-authored a chapter in a Digital Health guide put out by International Comparative Legal Guides. It bore the pithy title “Predicting Risk and Examining the Intersection of Traditional Principles of Product Liability Laws with Digital Health.” We continue to tinker with the principles of product liability law and monitor how they intersect with digital health, software, artificial intelligence, and other aspects of our increasingly abstract existence. Our then-colleague Gerry Stegmaier, with whom we co-authored the chapter, still lives in the world of companies that develop software and AI platforms, so we thought it made sense to get his views again. Perhaps because January invites retrospection (and because high-tech years have at least a canine multiplier), we decided to see how we did with our predictions. Being lawyers, our prior predictions were not overtly identified as such, but it is still possible to pull out the forward-looking statements we made with a probabilistic bent.
Our declared idealistic “aim to aid software developers in digital health and those that advise them in anticipating, preparing for, and responding to this potentially rapid changing liability landscape” was offered in the context of four general developments over the few years prior to early 2023. First, many digital health products, medical devices and otherwise, were being developed, often by companies recently created to tackle digital health needs. Second, a number of decisions had come out that tested the presumptive treatment of software as a licensed service or intangible, not a product. See here, here, and here. Relatedly, the creation of the high-profile Social Media MDL (In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation) meant that these issues were going to get increased attention. Third, FDA had been in catch-up mode, issuing a number of guidance documents about software and medical applications that were arguably long overdue. (The history of FDA regulation in this area is included in the second section here.) Fourth, although not the focus of our chapter, the writing was on the wall that the EU was going to issue a directive treating software as a product for purposes of liability, which it ultimately did in October 2024. While the U.S. does not tend to follow what the EU does on product liability, companies that hope to sell their products or license their technologies around the world do have to pay attention to such things.
Taking our predictions in the order in which they appeared in the chapter, the first contains an implicit prediction: “As healthcare continues to become more digital, the prevalence of devices reliant on software continues to grow.” This was low-hanging fruit, so to speak, but both trends have continued. Public lists of FDA approvals and clearances since early 2023 include many software-driven devices. In the “Device Innovation” section of its 2024 Annual Report, FDA’s Center for Devices and Radiological Health identified eight examples of novel devices, of which four were software and two others utilized software. We find that telling.
Next, we identified “a number of areas where the application of traditional U.S. product liability principles could differ with patient-facing digital health and software-driven medical devices.” (Our apologies in advance for any confusion over the British spellings that carry over from the chapter in International Comparative Legal Guides.)
- We noted the possibility for liability based on a post-sale duty to warn for digital health and software-driven devices “where the relationship with the end-user may continue post-sale and the ability to update software may go hand-in-hand with the ability to notify an end-user of a post-sale issue.”
- We noted that software update abilities with prescription medical devices could mean that “the learned intermediary doctrine may not apply in some cases where the level of direct and/or continuing interaction between the manufacturer and patient undercuts the rationale for the doctrine.”
- We postulated a poor preemption result for a PMA device when, “[i]n the case of a device with software that will be updated over time or where the device utilises AI or machine learning, some courts may doubt that the device at the time of the alleged injury was the same as what FDA had approved.”
- “Lawsuits over injuries allegedly due to a failure of software in a medical device might also name entities that contracted with the device manufacturer to develop or update that software. A parallel may be seen in the history of suing manufacturers.”
- We suggested that federal legislation could try to impose “limitations on liability” if civil litigation impeded innovation.
- We also envisioned courts creating easier routes to liability without proof of defect or negligence in the situation where “software fails to perform as expected and there is no ability for it to be altered by anyone other than the manufacturer or its agents.”
- We anticipated suits alleging product liability in the absence of “tangible physical injuries,” including under increased risk or medical monitoring theories.
- “Given that product liability law has generally not applied to software, any imposition of product liability would entail making new state law.”
Addressing each of these in turn would make this post far too detailed and long to keep any reader’s attention, so we will summarize them and focus on a few. The summary is that positions taken in litigation, decisions issued by courts, and who is getting sued over digital health, including devices driven by AI and software, are in flux. We have been tracking them here and in other specific posts. Litigation in this space has increased, but the relatively few decisions since early 2023 have not yet established any consensus.
On our point above about “new state law,” our chapter specifically looked to the Social Media MDL, finding it “highly likely that the issue of whether strict product liability applies to social media platforms, including the software that runs them, will be decided directly. Those decisions, potentially modified on appellate review, will inevitably influence the legal playing field for potential claims relating to digital health and software-driven medical devices.” We do have decisions from that court, including In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, 702 F. Supp. 3d 809 (N.D. Cal. 2023), discussed here, but we do not yet have a clear articulation of an adoption or rejection of new law for specific states. Without speaking to the law of specific states consistent with Erie, that court denied a motion to dismiss asserted product liability claims because “plaintiffs adequately plead[ed] the existence of product components as to each alleged defect analyzed” under a novel approach focused on “functionality” instead of “tangibility” or other traditional measures of what constitutes a “product” for purposes of product liability. We cannot say that this novel way of defining what is a product will have any legs. Later, in the same MDL, in the context of plaintiffs pushing public nuisance under what is fundamentally a product liability theory, the plaintiffs denied that they were asserting product liability claims at all and the court characterized the issue of whether the social media platforms were products as disputed. Clearly, this is going to require more litigation, especially on appeal, to get some clarity.
As for our prediction on federal legislation, the pending AI LEAD Act is certainly broader than a software version of the Biomaterials Access Assurance Act. If enacted, it would create a federal cause of action, preempt some state laws, and definitely treat software as a product. Until it is passed in some form, though, it is unclear how this will change the playing field. The history of product liability litigation over PMA devices and other medical products with applicable express preemption provisions teaches that plaintiff lawyers will make a concerted and prolonged effort to erase any statutory limits on liability and damages. We cannot help but see the chilling effect that the plaintiff bar’s ongoing efforts to expand liability, both in terms of novel claims and new targets, can have on investment in developing new technologies in digital health.
Speaking of prolonged efforts, we also predicted that FDA would be issuing more guidance on software as devices and devices driven by software, even though it viewed its authority in this area as limited. In its September 2022 Policy for Device Software Functions and Mobile Medical Applications, FDA promised to “continue to evaluate the potential impact these technologies might have on improving health care, reducing potential medical mistakes, and protecting patients.” Earlier this month, FDA made good on its promise with two new guidances, which replaced guidances from September 2022 and September 2019, respectively, and a new draft guidance. Briefly—because each could have a deep dive on its content and potential liability implications—the guidance on Clinical Decision Support Software reexamined “FDA’s oversight of clinical decision support software intended for health care professionals . . . as devices,” which is undoubtedly one of the hot development areas. A big part of the guidance on General Wellness: Policy for Low Risk Devices is to help define which software is a device and which is merely an application to help people stay or become healthier without treating a specific disease. FDA does not regulate the latter, so certainty on what is and is not a device is important but perhaps ephemeral. The draft guidance Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations builds on a number of FDA statements over the last 19 months. See, e.g., here, here, and here. Draft guidances are subject to public comment, and FDA responses to those can be informative. Sometimes, FDA’s draft guidances have stayed that way for years, taking on the same functional authority as final guidances, which are still considered “nonbinding.” So, the short- and long-term impacts of this draft guidance remain to be seen.
More generally, given that these three (draft) guidances were issued within four weeks of an Executive Order about setting a national policy on AI, it seems inevitable that the regulation of software as devices and devices driven by software will continue to be an area of change for the foreseeable future.
Our last two predictions were, frankly, pretty obvious in hindsight. We said:
[S]oftware-development lifecycle best practices are likely to evolve further and familiarity with FDA’s risk classification schemes may be a useful starting point for many developers, regardless of whether their software is or may be a medical device.
***
In any event, medical device companies, software developers who work with them, and those who assist each with managing and responding to liability risks will benefit from greater understanding of and monitoring this emerging area of the law.
We continue to think these are sound predictions, even if made in early 2026. In this space, risk minimization cannot be driven solely by traditional views about software—i.e., that contractual limits govern—or doctrinaire views about product liability. We expect that digital health will continue to be an area where legislators, regulators, litigators, and judges all play a role in setting the ground rules that will emerge over time. As that happens, companies in this space have to take steps to minimize risks even while ambiguity and uncertainty remain about what those ground rules will be. It would be a shame if the uncertainty hampers the development, deployment, and adoption of AI-powered products and services that will advance public health and the delivery of healthcare.