Photo of Eric Alexander

We typed the following question into a simple AI prompt:  “What is the difference between admonish and deter?”  The response started with “The primary difference between admonish and deter lies in their intent and timing:  admonishing is a form of active, often verbal correction or warning regarding past or present behavior, while deterring is an act of preventing a future action from happening in the first place.”  We did this, not because of existential dread about training our new Newfie puppy, but because we read an otherwise routine denial of a motion to vacate a conditional transfer order.  For people who do not play in the MDL sandbox, once an MDL is established by the Judicial Panel on Multidistrict Litigation (“JPML”), a new case filed in (or removed to) any federal district court can be tagged by any party, the JPML, or the MDL, which results in the entry of a Conditional Transfer Order that, absent a motion to vacate, loses its conditional status after a period of time.  The JPML decides those motions to vacate and almost always denies them.  Using the rubric of a product liability MDL, cases involving the product or products at issue in the MDL and generally the same factual issues as led the JPML to create an MDL in the first place are going to get transferred as long as the MDL is still accepting cases for pretrial centralization.  In In re Philips Recalled CPAP Bi-Level PAP & Mechanical Ventilator Prods. Liab. Litig., MDL No. 3014, 2026 WL 926016 (J.P.M.L. Apr. 6, 2026) (“Gravelyn”), that is what happened, notwithstanding plaintiff’s arguments about the seriousness of his injuries, the Northern District of California being more convenient, and the path to trial being longer with transfer.  As expected, those arguments were all rejected.  That is not why we are musing on the decision, though.

This is yet another case where a party submitted made-up citations and holdings.  In a recent post, we offered our own modest proposal (that did not involve cannibalism) for deterring such nonsense, but we also linked to a website that does a really good job of tracking “hallucinations” in legal briefs from around the world.  More on those later.  In Gravelyn, the plaintiff lawyer started things off by misfiling her brief with phony citations to two purported JPML decisions allegedly supporting vacatur.  The misfiled brief was replaced with one containing two different bogus citations—the only JPML decisions it purported to cite.  One cite was wrong, and both decisions held the opposite of what plaintiff claimed in his brief.  Minus footnotes, this is what the panel said about the plaintiff’s gobbledygook:

Before concluding, we must raise an additional issue, given our serious concerns about the integrity of the record in this matter. Plaintiff’s brief cites only two Panel decisions, but these two citations are inaccurate and misrepresent the holdings in the underlying cases. The nature of the misrepresentations suggests that counsel may have used generative artificial intelligence to draft plaintiff’s brief without checking the accuracy of the information produced, though it is possible counsel used some other unreliable source. Regardless, plaintiff improperly submitted a brief with false legal representations. We admonish plaintiff and his counsel for fabricating and misrepresenting legal authorities. This is an abuse of the judicial process, and one which we do not take lightly. Parties have a duty to ensure that citations are, in fact, real. See In re Snowflake, Inc., Data Sec. Breach Litig., MDL No. 3126, ___ F. Supp. 3d ___, 2025 WL 4007421, at *2 (J.P.M.L. Aug. 7, 2025). Any further non-compliant submissions from plaintiff may be stricken or result in additional appropriate corrective action.

2026 WL 926016, at *2.  We do not find this admonishment enough in light of the “serious concerns,” “false legal representations,” “abuse of the judicial process,” and the vow not to take it lightly.  The threat of striking future “non-compliant submissions from plaintiff” also rings hollow.

Part of the problem in Gravelyn is the nature of JPML practice.  The plaintiff and his counsel were not before the JPML for more than a few weeks.  The plaintiff is unlikely ever to be back.  His counsel might be back on some other case, although this firm seems to be a local Texas immigration firm operating out of its depth.  Even if the sanction for further citation of hallucinated cases were that a future motion to vacate by this firm is stricken for similar false citations, the result would simply be the transfer of the tagged case to an active MDL, which is what happens in the vast majority of cases where transfer is opposed anyway.  Overall, we see little chance that this admonishment will deter this plaintiff, his lawyer, or other plaintiff lawyers who do not file with the JPML regularly from continuing to save time and money by letting A.I. write their briefs without human oversight.

There are tools available beyond the proverbial finger wag, though.  When a lawyer hears that a brief was submitted with “false legal representations,” the first thought is usually Fed. R. Civ. P. 11.  It clearly covers this situation, regardless of whether A.I. was utilized (poorly):

By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:

(1) it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation;

(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;

Fed. R. Civ. P. 11(b)(1)-(2).  There are a few drawbacks to using Rule 11 as the primary tool of deterrence, though.  First, making the defendants police this recurring plaintiff behavior is unfair absent cost-shifting.  Second, the procedure under Rule 11 means that the motion for sanctions cannot be filed until 21 days after it is first served on the plaintiff, to afford time to withdraw or correct the filing.  Even if that timing worked in the context of a motion to vacate a CTO, there would be no consequence if the plaintiff simply filed a new brief without the fake stuff.  Third, the court’s authority to order the plaintiff to show cause under Rule 11(c)(3) is unlikely to be utilized by the JPML unless it catches the problem early and/or holds onto the case long enough to sanction the plaintiff.  Fourth, monetary sanctions may not be awarded against a represented party—but can be against a pro se party—for violating Rule 11(b)(2), which means the court would need to find that the “improper purpose” language applied.  Fed. R. Civ. P. 11(c)(5).  The purpose of any sanction is to “deter repetition of the conduct or comparable conduct by others similarly situated.”  Fed. R. Civ. P. 11(c)(4).  Taking monetary sanctions off the table certainly limits the deterrence value.

When Rule 11 is not enough, courts can look to their inherent authority and various rules of professional conduct as empowering them to act to deter bad conduct.  In Gravelyn, the court noted its uncertainty about whether “counsel may have used generative artificial intelligence to draft plaintiff’s brief without checking the accuracy of the information produced” or “counsel used some other unreliable source.”  The JPML surely could have directed plaintiff’s counsel to explain what happened.  We have seen sanctions hearings where the plaintiff’s counsel dug a deeper hole.  The plaintiff lawyer in an auto accident case we wrote about last year got plenty of chances to avoid sanctions for contradictory jurisdictional allegations.  In the end, the $7500 fine was not as impactful as the formal reprimand for “[f]ailing to comply with the duties of an attorney by filing pleadings containing false representations and legally unsupportable contentions” that would have to be reported in connection with any application for admission.  His firm caught shrapnel with the judge’s finding that, “based on the many [of the firm’s] cases that have come before me and my colleagues, I am persuaded that the facially obvious errors found in this case reflect a cultural norm at [the firm] to prioritize volume at the expense of accuracy.”  (This is the same court where sanctions-related proceedings are still going on a decade after the plaintiff lawyers’ non-A.I. hallucinations and other misconduct discussed here.)

We keep saying “plaintiff” and “plaintiff’s counsel” because that is who seems to be engaging in the sanctionable conduct when it comes to made-up citations and made-up facts.  The website we mentioned earlier shows that hallucinations in U.S. cases overwhelmingly come from pro se litigants and plaintiff lawyers.  The occasional criminal defense lawyer shows up too, but hallucinations very rarely come from lawyers representing a tort defendant.  Because the list of cases addressing A.I. hallucinations grows longer just about every day, it is apparent that the bad actors are not being deterred by what most courts have been doing.  Some courts are taking a more aggressive approach to call out and deter the behavior, though.  Focusing on sanctions against lawyers for citing bogus cases, two very recent decisions stood out to us in terms of the courts doing something more to deter continuing or similar behavior.  (By the way, we read the decisions and did not just rely on a website’s summary.)  In a case called Heimkes v. Fairhope Motorcoach Resort Condominium Owners Association, Inc., the Southern District of Alabama ordered the plaintiff lawyer to pay $55,597 to the defense counsel “for their time spent addressing [plaintiff lawyer’s] misstatements of law.”  In addition, the plaintiff lawyer was required to file the court’s order on the open docket in each of his pending cases and any new ones over the next year.  Within three days, he had to certify that he sent the order to wherever he was licensed.  The court sent the order to the state bar with a recommendation that the plaintiff’s lawyer “be found incompetent to practice law.”  For good measure, all the judges in the district and the chief judges of the three federal districts in the state would get a copy, and it would be submitted for publication in the Federal Supplement.  That should get someone’s attention.

In an unpublished decision from the Sixth Circuit in a case called U.S. v. Farris, the lawyer appointed to represent an indigent criminal defendant on appeal got hammered for his hallucinations.  This decision is also pertinent to our musings about Gravelyn because the current JPML chair is a judge from the Eastern District of Kentucky, within the Sixth Circuit and where Farris was tried.  The Sixth Circuit kicked the lawyer off the case and took away any fees to which he would have been entitled under the Criminal Justice Act.  He was referred for disciplinary proceedings with the Sixth Circuit and his state bar.  The opinion was also sent to the Chief Judge and Clerk of the Eastern District of Kentucky.  While perhaps not quite the tar-and-feathering of Heimkes, this decision should be a cautionary tale for anyone who thinks less effort can go into an appointed representation.

By contrast, we have seen any number of MDL decisions where the plaintiff lawyers have not been sanctioned for false representations in filings and other misconduct.  This seems to be the case even when the lawyers are before the court on multiple cases.  Such as here.

That brings us back to our prior proposal.  We suggested the following:

After due prior publicity – bar associations should punish attorneys caught with their hand in the AI cookie jar with suspensions from the practice of law.  We think that a first offense should warrant a suspension of one week for each fraudulent citation, quotation, or misrepresentation of a judicial opinion.  Second offenses should be punished at the rate of a month apiece.  If that’s not a sufficient deterrent, then the licensed miscreant is probably not fit to practice law.

The policing entity in this Bexian world is a bar association and the initial stick is a suspension.  Maybe this would work, but it seems like courts need to play a more active role.  We are not sure that suspension alone will do the job, especially if law firm partners can continue to share profits while on these mini-sabbaticals (and bar associations probably do not have a say on that).  Fee shifting directly from the miscreant to the other side also makes sense, and courts have plenty of authority to do it.  If sunlight makes the best disinfectant, then circulating decisions to people and entities that should care, including recipients of pro hac applications, should help.  We are sensitive to the suggestion that an accusation is not proof, as well as the reality that some mistakes really are innocent.  After all, our day job involves representing defendants who never do anything wrong.  However, just like punitive damages are intended to deter bad conduct, not just admonish it, there needs to be consistent action from courts to deter bad conduct from hallucinating lawyers.  Features of the rule of law, such as notice, an opportunity to be heard, and appellate review, also have a place where severe sanctions are concerned, but there has to at least be the possibility of severe sanctions to begin to prevent patterns of prevarication.  Perhaps.

Photo of Stephen McConnell

Emil Bove’s nomination to the Third Circuit was controversial. We do not know enough about that controversy to offer an opinion, but we know it was about politics, and there is little reason for you to care about our political opinions. As we reflect back over the years, we calculate that our political opinions have been wrong at least 30% of the time.

We’d like to think our batting average when it comes to legal analysis has been better than that. United States v. Anderson, No. 4:21-cr-00204-001 (3d Cir. March 26, 2026), is the first opinion written by Judge Bove that we have read. Here is the beginning of his Anderson opinion:  “If you dislike jargon, buckle up. The focus of this appeal is the reliability of probabilistic genotyping software in forensic DNA identification under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) and Rule 702 of the Federal Rules of Evidence.” Anderson is a criminal case, not a drug or device tort case, but any analysis of Rule 702 by the Third Circuit (our home circuit) will be of more than a little interest to us. 

Rule 702 is the same for criminal and civil cases, but we cannot help but think that the criminal vs. civil context matters. How does it matter? We are thinking of Sarah Isgur’s rule of Bad Man Stays in Jail.  With most judges, close calls usually do not benefit criminal defendants who seem like especially bad people. The defendant in Anderson was charged with being a convicted felon in possession of a gun, a violation of 18 U.S.C. section 922(g)(1).  While executing a search warrant, Pennsylvania State Police seized a gun from a bag that also contained Anderson’s identification and two loaded magazines.  Anderson was in the same bedroom as the bag at the time of the search. He was on parole for a state-law offense at the time.  On that evidence alone, we would bet on a conviction. But the prosecution had more evidence.  The police swabbed DNA evidence from the gun. They compared it to known DNA evidence taken from the defendant.  The Pennsylvania State Police laboratory found several sources of DNA on the gun, but could not state within a degree of reasonable scientific certainty whether there was a match with Anderson. Law enforcement then sent the DNA evidence to Pittsburgh-based Cybergenetics Corp., which used TrueAllele software and advanced computing power to perform probabilistic genotyping. Comparison of the DNA profiles via a sophisticated algorithm led to a conclusion that a match between some of the DNA from the gun and Anderson’s DNA was “11.5 trillion times more probable than a coincidental match” between the evidentiary sample and a random person’s DNA. The defendant challenged the admissibility of that evidence under Rule 702 and lost.  Anderson then pleaded guilty, but preserved his ability to appeal the admissibility of the DNA evidence.

The Third Circuit reviewed the admissibility of the evidence for abuse of discretion. From the first two sentences of Judge Bove’s opinion, which we recited above, we had two immediate responses: (1) Judge Bove is a good, clear writer; and (2) he clearly does not follow our rule of don’t say Daubert. As most of you readers know, this blog prefers to refer to Rule 702 – as amended – and to leave the Daubert label behind, because too many courts shirked their gatekeeping duties and watered Daubert down with all sorts of weight-not-admissibility nonsense.  The result is that the very name Daubert now comes freighted with a lot of sloppy law, which the recent 702 amendment was intended to correct. To our eyes, the deployment of the Daubert case name amplifies the possibility of error (and trauma to defense hacks).

So were we worried a bit at the outset of Anderson? Yes we were. Even more worrisome, when the Anderson opinion goes through the usual throat-clearing announcement of Rule 702’s standards, it discusses cases that predate the 2023 amendment to Rule 702, which makes clear that the party proffering the expert witness bears the burden of proof, and that the expert’s opinion must be a reliable application of principles and methods to the facts of the case. The Anderson opinion itself never mentions the Rule 702 amendment. Ugh.  Nevertheless, the Anderson opinion does point out that “the proponent of the evidence must establish admissibility by a preponderance under Rule 104(a).” Maybe Judge Bove falls into the camp of those who consider the 2023 amendment as a mere clarification of what was always required.

We should mention that also on the Anderson panel was Judge Bibas, who is one of the smartest judges anywhere, particularly when it comes to criminal law and criminal procedure. Let’s face it: it is harder for a judge to screw things up when that judge is on a three-judge panel. By contrast, a crazy trial judge can perpetrate enormous mischief. That is not to say that Judge Bove is crazy or that he screws anything up in the Anderson case. But, as you will see, we do harbor at least a little doubt about the outcome.

The Anderson opinion correctly states that “[r]eliability is the issue here.” Some of the factors “bearing on reliability are testability, peer review, error rates, existence of standards, and general acceptance of the method.” So far so good. And then that fine point is at least slightly ruined by this: “These are just guideposts. The list is not exhaustive. Determining reliability is not a check-the-box exercise, and we give trial courts significant autonomy to do the necessary work.” That little chestnut is all fine and good unless that “significant autonomy” translates into the court acting as a matador, waving the bull by. Some trial judges do not want to do the work — the whole point of going to law school was to avoid math and science — and will seize upon any basis to send sketchy science to the jury to sort out however they see fit. Again, we are mostly fretting about things that could happen but did not happen.  In Anderson, Judge Bove delves fairly deeply into the science behind the TrueAllele evidence. He covers the method’s testability, support in peer-reviewed literature, error rates, existence of standards, and general acceptance. We daresay that most of the plaintiff expert opinions we see in drug, device, toxic tort, or asbestos litigation would not satisfy the type of analysis that Judge Bove applies in Anderson. If you ask most plaintiff experts what their error rate is, they will indignantly reject the question, exclaiming that their error rate is precisely zero. What that really means is that their science is not science at all, but a litigation-driven article of faith. Also, good luck finding peer-reviewed support for the plaintiff expert’s methodology and conclusion. At best, you’ll hear about bogus extrapolations or ginned-up articles replete with fraud.

For the most part, the Anderson opinion is sound. The court upheld the trial court’s admission of the probabilistic genotyping. TrueAllele does look more like science than voodoo. Bad man will, indeed, stay in jail. The one issue that gives us pause was the defendant’s argument “on appeal that he should have been granted access to TrueAllele’s source code so that he could test that too.” The lower and appellate courts concluded that because “the issue was whether TrueAllele is capable of being tested based on objective criteria, Cybergenetics was not required to let the defense under TrueAllele’s hood by disclosing the source code.” The source code was the programmer syntax that implements the algorithm, and the defense had access to the actual algorithm. For the court, that access was enough. “Daubert is not a criminal discovery device.” We suppose there might be some proprietary concerns for Cybergenetics, though a confidentiality/protective order could address those concerns. Moreover, the defense could run its own tests and “try to show that TrueAllele does not function in the manner that the government’s expert described.” But showing a different test result is not necessarily the best or only way of undermining the inculpatory test result.   The Anderson court talks about not authorizing “a fishing expedition through TrueAllele’s source code under the auspices of Daubert” and how the defendant was not entitled to go “under the hood.” But then what is left is a black box supporting the government’s expert testimony. Maybe, in context, that black box in Anderson was enough to pass Rule 702 muster. Still, we can think of plenty of cases where the expert’s entire opinion is a giant black box wrapped in ipse dixit.

Anderson also challenged the TrueAllele evidence because it involved mere likelihoods and was subject to “potential errors by software operators and problems lurking in TrueAllele’s source code.” That argument went nowhere fast. There are other areas of expert testimony where discretion and judgment play a not insubstantial role. Fingerprint identification methodology, for example, involves “an unspecified, subjective, sliding scale” and human judgment calls relating to the quality and level of detail in a fingerprint. Handwriting comparisons are even more subjective, yet judges every day admit such evidence.  (When we worked in the U.S. Attorney’s office in Los Angeles a long time ago, there was one rogue judge who excluded fingerprint and handwriting experts, reasoning that jurors were capable of making the comparisons themselves. That view was not shared by any colleagues and was not smiled upon by the Ninth Circuit.)

Anderson is worth reading if you are looking at potential Rule 702 admissibility issues in the Third Circuit.  It applies rigorous analysis that should weed out the worst of the worst junk science on offer. But it might also contain enough squishiness to let some frail expert testimony through.

The opinion also rejected the defendant’s argument that section 922(g) violated the Second Amendment. That is a fairly obvious and uninteresting result, unless you want to dive into a political debate — which we do not.       

Photo of Michelle Yeary

Artificial intelligence isn’t going anywhere. Experts use it. Opposing counsel use it. Clients use it – and want their lawyers to use it too. It is becoming an increasingly standard legal research, drafting, and case strategy tool. But as a couple of our recent posts (here and here) have pointed out—AI is far from perfect. And modest fines and court-imposed sanctions are proving wholly insufficient to combat the accelerating frequency of AI hallucinations in litigation filings. We’ve suggested some harsher sanctions (if anyone is listening). But it also made us curious to see if anything else was being done. Turns out, courts across the country are moving beyond reactive sanctions imposed after problematic filings, toward a proactive enforcement posture.  Some courts now impose disclosure obligations and verification standards as threshold filing requirements.

The standing orders vary in scope and consequence, and we have not undertaken a 50-state or all jurisdiction review. So, this should not be considered comprehensive, but more of an FYI and a heads up to check your local rules and judge-specific standing orders.

For example, in the Southern District of New York, Judge Broderick’s Civil Rules provide that any party must disclose the use of GenAI when the GenAI tool is used to prepare any filings with the Court.  Rule 4(J); see Bonsignore v. N.Y. Dep’t. Tax’n & Fin., No. 25-CV-6324, 2025 WL 3468041 (S.D.N.Y. Dec. 3, 2025); see also Brian NG v. Amguard Ins. Co., 25 CIV. 806 (VSB) (GS), 2025 WL 3754555, at *7 (S.D.N.Y. Dec. 29, 2025) (warning a party against submitting fake citations and noting that Judge Broderick’s certification rules apply to all parties, regardless of pro se status). More importantly, the party must certify that it has independently reviewed and verified the accuracy of any portion of the filing generated by GenAI, and that the filing complies with Rule 11 obligations or else face penalties. In the Southern District of Texas, Judge Olvera’s Local Rules similarly require all parties at the outset of a case to file a certificate attesting

either that no portion of any filing will be drafted by [GenAI] or that any language drafted by [GenAI] will be checked for accuracy, using print reporters or traditional legal databases, by a person.

Rule 8(C)(1) (emphasis added).

It’s a little scary that courts need to have “check your work or else” orders. But as Judge Matthew J. Kacsmaryk (N.D. Tex.) explains in his Mandatory Certification Regarding Generative Artificial Intelligence:

[GenAI] platforms are incredibly powerful and have many uses in the law . . . But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, [GenAI] is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.

In other words, the use of GenAI in court filings raises real ethical and practical concerns. And judges should not accept “sorry, ChatGPT threw that cite in there” any more than they accept “I’m just here covering for another attorney.”

Here are additional orders we found requiring certification that a human being has reviewed AI generated material:  IAS Rules (Judge Peter Weinmann, NY Sup. Ct., Erie Cnty.) (requiring certification that the filing has been verified and reviewed by a human being); Use of Generative AI (Magistrate Judge Phillip Caraballo, M.D. Pa.) (requiring parties to certify that they have “checked the accuracy of any portion of the document generated by AI, including all citations and legal authority”); Standing Order, Rule 5(c) (Judge Blumenfeld, C.D. Cal.) (requiring certification that “filer has reviewed the source material and verified the [AI] generated content is accurate”); Standing Order, Rule E(5) (Judge Hwang, C.D. Cal.) (same); Standing Order, Rule 10 (Magistrate Judge Susan van Keulen, N.D. Cal.) (imputing any GenAI related hallucinations and corresponding sanctions to the signature of the attorney or party on the filing, because the signature indicates that counsel has personally confirmed the accuracy of the content generated by GenAI tools); Individual Rules and Practices, Rule 2(E) (Judge Cronan, S.D.N.Y.) (requiring certification that “litigant personally reviewed the filing for accuracy of cited legal authorities and factual assertions and . . . describing in detail the steps taken to verify the accuracy of all legal authorities and factual assertions generated by the AI tool” or else face sanctions).

One of the most comprehensive standing orders we found on AI use comes from the Northern District of California’s Magistrate Judge Kang. His Standing Order issues guidelines in three broad categories: filings with the Court, evidence, and confidentiality. Rule VII(C). For filings with the Court, any party must identify the GenAI tools used for drafting the text in its title, caption, a table preceding the body text, or by a separate contemporaneous notice and must also maintain records of these portions of text should the Court request it. Any GenAI-generated evidentiary material must be identified in discovery by a notice and declaration verifying authenticity and may not contain “uncorroboratable” or “fictitious” statements of fact or evidence. Regarding confidentiality, counsel must comply with all protective order obligations while interacting with such tools and must maintain records of all prompts or inquiries submitted to these tools to establish compliance with this standing order. Further, Judge Kang repeatedly admonishes that parties should only use AI tools “with competent training, knowledge, and understanding of the limitations and risks of such automated tools.” While the order stops short of requiring certification of human proofreading, it makes clear that the court expects as much in accordance with all ethical and professional standards.

While the above are all judge-specific orders, the Northern District of Texas has adopted a Local Civil Rule mandating that a brief must disclose GenAI use on “the first page under the heading ‘Use of Generative Artificial Intelligence’ [and] [i]f the presiding judge so directs, the party filing the brief must disclose the specific parts prepared using [GenAI].”  Rule 7.2(f)(1).

Finally, some courts outright prohibit the use of artificial intelligence in court filings.  The Western District of North Carolina, for instance, based on a “concern regarding the reliability and accuracy of filings” using AI, has issued an Order requiring that all court filings be accompanied by a certification that stipulates that no artificial intelligence was used in research for the preparation of the document, except for AI embedded in traditional legal research tools, and verifies that every statement and citation has been checked by an attorney or paralegal.  Judge Newman’s Standing Order, in the Southern District of Ohio, prohibits the use of any AI in the preparation of any filing to the court, with exceptions for information gathered from legal search engines, internet search engines, or Microsoft suite products. Rule VI. A violation of the AI ban may result in “sanctions including, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.”  See also Standing Order (Judge Boyko, N.D. Ohio) (same). In the Northern District of Illinois, Judge Coleman’s Case Procedures prohibit the use of AI to draft memoranda or as authority to support a party’s motion.

Paralleling the growth of AI and the uncertainty it generates, the proliferation of judge-specific standing orders rather than uniform district- or circuit-wide rules has created a patchwork that is increasingly difficult to navigate. But one thing is clear—courts are not waiting to deal with AI problems after the fact. They are shifting the burden to counsel at the outset. Disclose it, verify it, stand behind it.

And that shift has consequences. These disclosure, certification, and outright ban orders are not just compliance hurdles—they are litigation tools. They provide a roadmap to challenge opposing counsel’s filings, probe the reliability of their submissions, and tee up Rule 11 motions. So yes, check your local rules. But also read your opponent’s certifications carefully. Because in a court where AI disclosure is mandatory, what is said about how a filing was created may matter just as much as what the filing says. 

Much thanks to Dechert law clerk, Nimisha Noronha, for digging in on this research.

Photo of Bexis

As we have discussed more times than we like, the plaintiffs’ class action cabal, in conjunction with their running-dog Valisure “if it doesn’t have it, we’ll cook it until it does” “testing” laboratory, has targeted various products supposedly containing benzene contaminants.  The result has been a plethora of no-injury class actions by plaintiffs who used these products without incident, but purportedly want their money back.

As is evident from our prior posts, most of them have been dismissed.

Here’s another one:  Navarro v. Walgreens Boots Alliance, Inc., 2025 WL 1411406 (Mag. E.D. Cal. May 15, 2025), adopted, 2025 WL 3485004 (E.D. Cal. Dec. 4, 2025).  The targets of opportunity in Navarro were several “acne treatment drug products.”  Id. at *1.  After keeping some of these products at ridiculously high temperatures (between 99 and 158°F) for extended periods of time (18 days), Valisure was able to detect “high levels of benzene” in some of the samples.  Id. at *2.  Citing those tests, a couple of plaintiffs (from California and Massachusetts) sued, bringing consumer protection, warranty, and unjust enrichment claims under the laws of various states – including states in which they neither lived nor purchased any products.  Id. at *5.

The magistrate judge threw out the entire kit and caboodle, holding that while the plaintiffs had standing (at least in their states of residence) the claims were either preempted or failed to state a claim.  Unfortunately, Ninth Circuit standing precedents are woefully lax, allowing plaintiffs to sue whenever they allege that they “paid more for [the product] than [they] otherwise would have paid, or bought it when [they] otherwise would not have done so,” but for a defendant’s actionable statements or omissions.  Id. at *8 (citation and quotation marks omitted).  So the plaintiffs got a pass on standing.

These anti-acne products are OTC drugs, and as we have discussed many times, the OTC section of the FDCA has a strong preemption clause – albeit with an exception for “product liability” that does not protect economic loss claims from preemption. That clause prohibits enforcement of state-law requirements “different from or in addition to, or . . . otherwise not identical with” FDA requirements.  Id. at *10 (quoting 21 U.S.C. §379r(a)).  Plaintiffs, of course, claimed that the various states’ laws required benzene-related warnings that the FDA had not required in its “monograph” governing these products.  Since the federal labeling requirements for these OTC products did not require any reference to benzene, those claims “are expressly preempted.”  Id. at *11.  Nor could plaintiffs evade preemption by claiming “misbranding.”  “The upshot of the regulations and the anti-acne monograph, described above, is that a drug that complies with all monograph requirements is not misbranded.”  Id. (citation omitted).  An applicable monograph “cannot be used as the basis to force a company to go above and beyond what that monograph prescribes because that is expressly preempted.”  Id.

Nor was benzene either an “inactive ingredient” or a “component” of these products.

There are no allegations that [defendant] intended to directly combine benzene into [these] products; rather, the allegations are that [they] can and do[] degrade into benzene.  Based on the forgoing, the Court concludes that benzene is not an inactive ingredient, and [defendant] was therefore not required to list it on its . . . labels. Plaintiffs have not pointed the Court to any authority to the contrary.

Id. at *13.

Navarro did not, however, extend preemption to purported “parallel” claims involving allegations of violations of FDA CGMPs – at least not yet.  Disturbingly, the decision imported the rationale in Davidson v. Sprout Foods, Inc., 106 F.4th 842 (9th Cir. 2024), a food case, into the OTC drug context, without even acknowledging the difference between the two.  Navarro, 2025 WL 1411406, at *13-14.  Even though that claim was an afterthought, with CGMPs never mentioned in the complaint, Navarro found these purported “parallel” claims sufficiently pleaded.  Id.

But parallel claims didn’t get these plaintiffs very far, because the state-law claims they brought all sounded in fraud, and plaintiffs “failed to allege how they were deceived.”  Id. at *14.

Plaintiffs have argued throughout their opposition (and alleged in their complaint) that [defendant] has potentially violated regulations or cGMPs.  Even with this reasonable inference, the Court finds that Plaintiffs have not alleged facts that either establish that [defendant] violated cGMPs, or even if [it] did violate the cGMPs, how those violations ultimately deceived Plaintiffs.

Id. at *15.  Vague allegations of “omissions” were not enough.  Id.  Therefore, the entire action was dismissed.

Plaintiffs predictably objected to the magistrate’s findings.  They lost again.  Navarro v. Walgreens Boots Alliance, Inc., 2025 WL 3485004 (E.D. Cal. Dec. 4, 2025), did more than just adopt the magistrate’s decision.  First, the district judge agreed that “claims based on a theory of failure to warn/disclose BPO degradation risk are categorically preempted by the [FDCA].”  Id. at *1.  On the adequacy of the pleadings, however, the judge went further.  Plaintiffs failed even to allege any purchase of purportedly benzene-contaminated products:

Plaintiffs assert parallel state duties and third-party free speech rights that assume benzene adulteration.  However, Plaintiffs have not made a colorable showing the product they purchased from Defendant contained benzene due to a failure to follow cGMP or otherwise.

Id.  Thus, this Valisure molehill will not become a class action mountain.

Good riddance.

Photo of Michelle Yeary

On April 28, Dechert will host its 3rd Annual Life Sciences Day, a half-day program for in-house counsel, executives, and investors. Featuring speakers from leading pharmaceutical and biotech companies, the program will deliver sharp insights on the legal, regulatory, and business challenges shaping the industry. Hear directly from practitioners on how to manage risk and stay ahead in a rapidly evolving landscape.

The topics will include:

  • Health Apps in the Cross Hairs: Winning the First Big Tech Privacy Class Action and What It Means for Your Business
  • Active and Engaged: What Government Enforcers Want Life Sciences Companies to Know
  • Antitrust R&D: What’s in the Pipeline?
  • The Life Sciences Executive: The New Rules for Leadership

Click here to register.

Detailed agenda and additional speaker announcements will be shared soon. We hope to see you there!

Event Details:

Location: Dechert LLP, Cira Centre, 2929 Arch Street, Philadelphia, PA 19104

Date: Tuesday, April 28, 2026

1:30 p.m. | Registration

2:00 p.m. | Program

5:30 p.m. | Networking Reception

CLE Information:

CLE credit for this program is pending for California, Connecticut, Illinois, New Jersey, New York, North Carolina, Pennsylvania and Texas.

Photo of Eric Hudson

It’s hard to think of any recent litigation where plaintiffs didn’t seek overblown discovery about adverse event reports and then have their experts rely on those reports in an effort to establish causation.  But as we’ve blogged about repeatedly, reports from the FDA’s Adverse Event Reporting System (“FAERS”) do not establish causation (and, for good measure, they don’t constitute newly acquired information). Today’s decision, Taylor v. Dixon, 2026 WL 865183 (M.D. Fla. Mar. 30, 2026), is a little different since it involves a federal habeas petition.  But we couldn’t resist blogging about it given the court’s comprehensive take-down of the attempted use of an adverse event report to show causation. 

Photo of Lisa Baird

The latest medical device express preemption decision, Wieder v. Advanced Bionics LLC, 2026 U.S. Dist. LEXIS 70645, 2026 WL 880370 (S.D.N.Y. Mar. 31, 2026), comes out of the Southern District of New York and involves a Class III, PMA‑approved cochlear implant. 

Fluid allegedly worked its way into the device and caused a short‑circuit and device failure.  Then, a replacement surgery only partially succeeded because of scarring attributed to the original device, allegedly leading to permanent impairment of the infant patient’s ability to hear, speak, and learn.

In this opinion, the District Court adopted almost all of a solid Report and Recommendation by Magistrate Judge Gorenstein, and the result is that all the classic product‑liability theories—manufacturing defect, design defect, implied warranty, consumer protection, fraud, and failure to warn—were dismissed, largely on preemption grounds, while three negligence‑based counts and derivative loss‑of‑services claims live on.

The court followed the familiar Riegel v. Medtronic, Inc. framework:  For PMA‑approved Class III devices, state‑law claims are expressly preempted if they would impose requirements “different from, or in addition to” the federal requirements.  To escape preemption, plaintiffs must plead a “parallel” claim: a traditional state‑law duty that mirrors a federal requirement applicable to the device without imposing anything more or different.  (We realize the parallel terminology is de rigueur, but would not mind if more courts adopt the “mirror and ceiling” framework instead.)

We heartily approve of some themes that run through the opinion:

  • It is plaintiffs’ burden to allege a specific deviation from applicable FDA requirements.
  • General references to Current Good Manufacturing Practice regulations (or “CGMPs”) do not supply the needed specificity.
  • Voluntary recalls, alleged high failure rates, and alleged under‑reporting to FDA are not, without more, enough either.

As to strict liability manufacturing defect, the plaintiffs did not merely regurgitate the elements in a conclusory way.  They at least added some alleged facts to the mix—that the manufacturing defect resulted from hand-application of a silicone sealant around the implant’s electrode, which allegedly was not subject to proper quality control, as the CGMPs allegedly require and as a voluntary recall and high failure-rate study allegedly established.

While that perhaps would be enough to state an ordinary manufacturing defect claim for a regular product, it was not enough to plead an adequate, non-preempted one for a PMA medical device.  The complaint did not allege any specific CGMP violation, it was not clear that hand application was itself a CGMP violation, and asserting “no quality control” is too conclusory to withstand our old friend Twiqbal.

In objecting to the Magistrate Judge’s Report and Recommendation, Plaintiffs did raise a specific alleged CGMP violation for the first time (21 C.F.R. § 820.70(a) regarding process validation and reproducibility), but the District Court refused to consider arguments not presented to the magistrate judge. 

Nevertheless, we will raise another CGMP argument of our own here for the first time in response.  PMA applications “are required to include” descriptions of their compliance with Current Good Manufacturing Practices (“CGMP”) requirements, which are promulgated in the Quality System Regulation (“QSR”), and “approval of a PMA application for a device can be denied if a manufacturer does not conform to the QS regulation requirements.”   See FDA’s Compliance Program Guidance Manual related to Medical Device PMA Preapproval and PMA Postmarket Inspections, at Part I, p. 1-2.

Putting it another way, the FDA reviewed and approved the hand-painted silicone step and the quality control procedures as part of the PMA process.  Therefore, a plaintiff claiming that the quality control process didn’t measure up, or that the manufacturer’s systems did not satisfy 21 C.F.R. § 820.70(a)’s process validation and reproducibility requirements, necessarily is using a state law claim to second-guess the FDA’s grant of premarket approval and inherent determination that the manufacturer’s processes were appropriate.  Preemption should result regardless.

Anyway, next up was the design‑defect claim, and that was quickly dispatched.  State‑law design‑defect claims challenging the safety or efficacy of the PMA‑approved design are categorically preempted because they would impose different design requirements than those the FDA approved.  Even though a plaintiff lacks access to confidential PMA documents at the motion to dismiss phase, those practical difficulties do not excuse plaintiffs from meeting their burden of alleging a specific deviation from PMA requirements. 

The New York implied warranty of merchantability claim went the same way.  Such claims also necessarily second‑guess the FDA’s safety and effectiveness determination by asserting the product was not “merchantable” (i.e., not fit for its ordinary purpose) despite being approved.

Plaintiffs’ New York consumer protection and fraud claims, and part of their failure to warn claim, targeted the device’s labeling, marketing, or promotional materials as deceptive and misleading.  But those materials were FDA‑approved, so these theories were preempted.  (Plaintiffs also failed to identify the allegedly fraudulent statements, which itself is always a problem.)

Wieder also addressed a failure to warn claim premised on an alleged failure to report adverse events to the FDA.  The Magistrate Judge, quite correctly, recommended dismissal of this claim because New York law does not recognize a state law duty to report adverse events to FDA.

The District Court unfortunately disagreed on this point, concluding that New York law might allow this theory in some circumstances.  In particular, because the learned intermediary doctrine requires manufacturers to take “adequate steps” to alert prescribing or implanting physicians to device risks, those steps may include notifying the FDA of adverse events, which the FDA may then disseminate to the universe. 

That allowance did not save plaintiffs’ claim here, however, because these plaintiffs alleged that the FDA received “multitudes of adverse event reports,” undermining the whole “there was a failure to report” idea.  

The claims that made it through were narrower negligence and loss of consortium-type claims that the manufacturer did not attack with preemption. 

The District Court’s order smartly closed the door on further amended complaints, and directed the parties to get to work on a discovery and case management plan for the case as narrowed. 

Photo of Stephen McConnell

Foster v. Nestle USA, Inc., 2026 WL 893348 (N.D. Ill. March 31, 2026), is not a drug or device case, but it is noteworthy because the court held that there was no private right of action under the Illinois Food, Drug, and Cosmetic Act.  Then again, the case is about chocolate, and chocolate has an effect on neurotransmitters as powerful as some drugs.  Chocolate can induce feelings of love. Or satisfaction. Or guilt.  It is pretty powerful stuff.  

But what exactly is chocolate? That is the question in Foster.  Answering that question is Judge Steven Seeger, whose logic is exceeded only by his prose style.  Here is the first paragraph in Judge Seeger’s Foster opinion: “Stephanie Foster has a sweet tooth, and she wanted to sink her teeth into a mouthful of chocolate. By the sound of things, Foster is a foodie. She didn’t want just any chocolate. She wanted 100% real chocolate.”  The opinion draws the reader in, like the smell of toll-house cookies.  Note the present tense, the short words, and short sentences. There is alliteration, but it is not overly done. Judge Seeger could probably write better ad copy than most Madison Avenue agencies.  

Advertising, in a sense, is what the Foster case is about.  It is yet another food false advertising case, in which a plaintiff (really the plaintiff lawyer) is playing a hyper technical game of Gotcha.  The plaintiff bought several bags of Nestle chocolate chips. (Judge Seeger mentions that it is a little strange that the plaintiff bought her bags from different retailers – as if “she was claim shopping while chocolate shopping.”) Judge Seeger tells us that the labels on the bags promised “any hungry consumer that the bag contained ‘100% real chocolate.’” 

The plaintiff cried foul upon discovering that the chocolate chips contained soy lecithin and natural flavors.  (Of course, those ingredients were plainly listed on the back of the bag.)  She claimed that the soy lecithin and natural flavors made the “100% real chocolate” statement a 100% lie.  She filed a class action complaint containing six claims.  The first three arose under consumer protection statutes; the last three were two breach of warranty claims along with one for negligent misrepresentation.

The defendant filed a 12(b)(6) motion, which the court granted, concluding that the “complaint is half-baked and is 100% dismissed.” As part of what we perceive to be a welcome and growing trend, the opinion includes pictures of the bag of chocolate chips. The court, while plainly having fun with the case, faithfully recited the plaintiff’s damages theory, which was that chocolate chips boasting 100% authenticity were priced higher than chips lacking that boast. But the court also faithfully followed the laws supporting the claims, which “require a false or misleading statement that deceives a reasonable consumer.” The court concluded that “No reasonable consumer would need protection from Nestle’s bag of chocolate chips.”

The reasonable consumer standard is objective, not subjective. No reasonable consumer thinks that chocolate contains only the byproduct of cacao beans. Maybe Forrest Gump said that life is like a box of chocolates because you never know what you’ll get, but any consumer of chocolate knows that they are getting a composite product. Sugar is just one example of something added to chocolate. There is a decent chance that, as a kid, you once made the mistake of gobbling some pure cacao powder. If you did so, you grew alarmed at the unpleasant, bitter sensation. That stuff plainly needed something else to make it luscious. Modern chocolates contain milk solids, added flavors, modifiers, and preservatives. The common, everyday understanding of what is in chocolate is confirmed by FDA regulations, which list a variety of constituents of chocolate foods, many of which have nothing to do with cacao beans. The National Confectioners Association is in accord, as are many other authorities (including public health journals and MIT).

The fact that the other ingredients are listed on the back of the bag is important, though not necessarily dispositive. There is Seventh Circuit authority that an accurate fine-print list of ingredients on the back does not foreclose as a matter of law a claim that an ambiguous front label deceives reasonable customers. But the reference to 100% chocolate on the front of the bag does not deceive reasonable customers. Judge Seeger reasoned that “Courts don’t have to treat consumers like eggshell-skull plaintiffs, wandering bewildered down the grocery aisle in the Land of Confusion. And at some point, it is not asking too much to expect a reasonable consumer to read the list of ingredients if they’re unsure.” The court refused “to check common sense at the door.” Courts should not “bend over backwards to find ambiguities.” A “true chocolate lover” would not be hoodwinked by the reference to “100% chocolate” “and a reasonable consumer wouldn’t either.” There are far too many consumer fraud lawsuits that dance on the head of a pin. The Foster opinion offers a blueprint for emptying those lawsuits into the garbage can.

We also enjoyed the Foster opinion’s sweet prose and delicious common sense. Our profession could use s’more of such clarity.

Photo of Michelle Yeary

As one court succinctly put it over a decade ago: “No federal appellate court has approved a nationwide personal injury, product liability or medical monitoring class.” Durocher v. NCAA, 2015 WL 1505675, at *10 (S.D. Ind. Mar. 31, 2015). That remains true today—and the plaintiff in Lester v. Abiomed, Inc., 2026 WL 874051 (N.D. Ohio Mar. 31, 2026), didn’t come close to changing it.

Plaintiff brought a putative nationwide personal injury class action arising from a recalled medical device. Defendant moved to dismiss several claims and to strike the class allegations. The court granted both motions—unsurprisingly, and for good reason.

Start with the pleadings. Ohio’s Products Liability Act (“OPLA”) explicitly abrogates all common law product liability claims. If you are seeking damages for injury allegedly caused by a product’s design, manufacture, warnings, or representations (warranty), your claim lives (and dies) under OPLA.

Plaintiff tried to plead around that reality with claims for fraudulent concealment, fraudulent misrepresentation, and violation of the Ohio Consumer Sales Practices Act. But labels don’t control—allegations do. And here, the allegations told a familiar story. Defendant knew about alleged defects and either concealed them or misrepresented the product’s safety. That is failure to warn. Id. at *2-3.

On the two fraud claims, the court looked past the plaintiff’s attempt to recast those allegations as breaches of a “general common law duty not to deceive,” id. at *3, and found them squarely within the heartland of product liability law. Because they were, they were abrogated by OPLA.  Id.

The consumer protection claim fared no better. It rested on the same alleged concealment and misrepresentation of product defects, making it “intrinsically intertwined” with the product liability claims. That, too, meant OPLA governed and displaced it. Id. at *4.

With the non-OPLA claims out of the way, the court turned to the class allegations. And here is where things went from unlikely to impossible.

Courts can strike class allegations at the pleading stage where it is clear from the complaint itself that Rule 23 cannot be satisfied. Id. This was one of those cases.

The problem, as always with products liability claims, is predominance. Rule 23(b)(3) requires “that the questions of law or fact predominate over any questions affecting only individual members.” Id. at *5. The Sixth Circuit frames the predominance inquiry simply: put common issues on one side of the scale and individual issues on the other. In personal injury product cases, the scale doesn’t just tip—it collapses.

Plaintiff’s own allegations made that clear. There was no single defect theory—there were three. No single causal pathway—each claim required an individualized inquiry. As the court explained, proving liability would require delving into “each class member’s individual medical history, conditions, medical procedures, and other potentially relevant contributory factors.” Id. at *6. Warnings were also individualized. They changed over time, and some putative class members received the very warnings plaintiff claims were lacking. Id.

And then there’s the nationwide class problem. Different states, different laws. Not just in nuance, but in ways that would make instructing a jury on a classwide basis “impossible.” Id.

Put it all together, and the conclusion was inevitable: individualized issues overwhelmingly predominate over common ones. The class allegations were stricken.

Which brings us back to where we started. Despite persistent efforts, the scoreboard hasn’t changed. There is still no approved nationwide personal injury products liability class action. And this court wasn’t about to be the first.

Photo of Bexis

Today’s guest post is by Reed Smith‘s Jamie Lanphear. She has long been interested in tech issues, and particularly in how they might intersect with product liability. This post examines product liability implications of using artificial intelligence (“AI”) for medical purposes. It’s a fascinating subject, and as always our guest posters deserve 100% of the credit (and any blame) for their work.

**********

Artificial intelligence is rapidly moving into domains traditionally occupied by physicians, medical devices, and clinical software. That shift raises a familiar question for product liability lawyers: when new technology begins performing functions that look like medical judgment, how will existing tort doctrines respond?

Earlier this year, a major AI company introduced a new AI health feature designed to review users’ medical records and provide personalized health guidance. Users can upload electronic medical records, sync data from other wellness platforms, and ask health-related questions about symptoms, medications, and lab results. The company describes its new feature as informational and emphasizes that it is “not intended for diagnosis or treatment.” The tool is currently available only to a limited group of early users while the company continues refining its safety and reliability.

Even with those limitations, the legal implications are easy to see. If a user experiencing symptoms of a serious condition asks the system whether emergency care is necessary—and receives guidance that later proves incomplete or inaccurate—disclaimers alone may not resolve the liability analysis.

For defense lawyers tracking AI litigation trends, this new use of AI presents an emerging test case. The issue is not whether plaintiffs will attempt to frame AI-generated health advice as a product liability problem. They will. That’s what they do. The real question is how courts will apply existing tort doctrines to technology that blurs the line between software tools and medical decision-making.

First Things First

The threshold issue in any product liability claim is whether the thing that allegedly caused harm is a “product” at all. As the Blog has discussed here, here, here, and here, the legal treatment of software and AI is shifting. Traditionally, courts were reluctant to classify software as a product for purposes of strict liability, reasoning that software is intangible, often licensed or available for free rather than sold, and its defects resemble service failures more than manufacturing flaws. But that view is eroding.

Recent decisions reveal three emerging approaches to when software should be deemed a product. Under the defect-specific approach, courts examine particular functions alleged to be defective and evaluate whether each can be analogized to tangible personal property. In In re: Social Media Adolescent Addiction, for example, the Northern District of California analyzed specific features like parental controls and algorithmic content delivery, concluding that each had a tangible analogue sufficient to allow design defect claims to proceed. In re: Social Media Adolescent Addiction/Personal Injury Prods. Liab. Litig., 702 F. Supp. 3d 809 (N.D. Cal. 2023). Under the platform-as-a-whole approach, courts ask whether an app or platform in its entirety is analogous to a product, emphasizing the policy rationale for strict liability. In T.V. v. Grindr, the Middle District of Florida held that a dating app was a product because, as with most products and many services (including lawyers), it was “designed, mass-marketed, placed into the global stream of commerce and generated a profit.” T.V. v. Grindr LLC, No. 3:22-cv-864-MMH-PDB, 2024 U.S. Dist. LEXIS 143777 (M.D. Fla. Aug. 13, 2024). Of course, so are books and newspapers. And under the content-versus-medium distinction, courts permit product liability claims based on design or functionality defects while dismissing those based on expressive content.  Cf. Defense Distributed v. Att’y Gen. of New Jersey, 167 F.4th 65, 83-84 (3d Cir. 2026) (“whether code enjoys First Amendment protection requires a fact-based and context-specific analysis”).

These approaches have direct implications for AI health tools. AI operators have asserted in pending litigation that generative AI is a service and/or not a product for purposes of product liability law. That is how amusement parks and taxi dispatchers were treated before they morphed into social media and ride sharing. But if courts continue following the trend seen in the social media and chatbot cases, that defense may not hold.

Meanwhile, legislators are pushing in the same direction. The AI LEAD Act, recently introduced by Senators Durbin and Hawley, but unlikely to be enacted, would define AI systems as “products” and establish a federal product liability framework. And in the EU, the new Product Liability Directive explicitly treats software and AI as products—even when delivered as a service. The global momentum is unmistakable: the product line is moving.

Potential Theories of Liability

If AI health tools are products, what theories of liability might plaintiffs bring when their advice is alleged to have caused harm? The ongoing wave of litigation over AI chatbots offers a preview of the causes of action plaintiffs are deploying—and the factual predicates they are developing.

Strict liability for design defect. Under the consumer expectations test adopted by many jurisdictions, a product is defectively designed when it fails to perform as safely as an ordinary consumer would expect. Under the risk-utility test, a product is defective when the risk of danger inherent in the design outweighs its benefits. Plaintiffs in the existing chatbot cases allege that these bots cultivate emotional dependency, fail to terminate dangerous conversations, and provide harmful guidance during mental health crises. Generative AI with medical triage functionality could become a focus of similar claims.

A recent study published in Nature Medicine examined a recently introduced AI health tool’s performance in simulated emergency triage scenarios and identified cases where the system recommended routine care for conditions the researchers classified as emergencies. AI triage tools continue to evolve, but plaintiffs will undoubtedly cite early research of this kind to argue that triage logic constitutes a design defect when users delay care in reliance on a system’s recommendations.

Strict liability for failure to warn. Manufacturers have a duty to warn about dangers known or knowable at the time of distribution. A disclaimer that an AI health tool is “not intended for diagnosis or treatment” is a warning, but plaintiffs will inevitably argue it is inadequate because the product’s design invites that use, and therefore disregard of the warning should not be considered superseding cause. The existing complaints allege that ordinary consumers “could not have foreseen” that generative AI chatbots would cultivate dependency and provide dangerous guidance, “especially given that it was marketed as a product with built-in safeguards.” The same logic could apply to AI health tools: if the product is designed to ingest medical records, analyze symptoms, and suggest whether to seek care, a disclaimer alone may not suffice.

Negligent design. Plaintiffs also bring negligence claims alleging that the defendant failed to exercise reasonable care in designing its AI offering. Negligence is not limited to products. The evidence from the chatbot cases is instructive: moderation technology capable of detecting harmful content and terminating conversations exists and is deployed to protect copyrighted material—plaintiffs argue that the failure to deploy similar safeguards for user safety evidences a breach of duty. For AI health tools, inconsistent crisis safeguards may also become a focus of litigation. Early research on AI health tools has noted variability in how crisis-detection safeguards perform across different scenarios. These are the types of findings plaintiffs will undoubtedly cite to support arguments that safety features were not adequately calibrated before deployment.

Unlicensed practice claims. Plaintiffs in the existing chatbot cases have brought claims alleging that providing therapeutic services without adequate licensure violates state licensing statutes. An AI health tool that acts as a medical adviser—capable of interpreting lab results, flagging drug interactions, and recommending urgency levels for care—could invite similar theories in the medical context, particularly if plaintiffs argue that the service constitutes the unlicensed practice of medicine.

The Regulatory Layer

Plaintiffs in medical device cases routinely use regulatory evidence to bolster their claims—arguing that defendants withheld information from FDA, failed to report adverse events, or should have sought premarket clearance when they did not. AI health tools present a novel twist on this theme.

Under the Federal Food, Drug, and Cosmetic Act, a “device” includes software intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. FDA guidance clarifies that certain software functions may be regulated as medical devices, while others fall outside the agency’s oversight. For example, clinical decision support software may fall outside FDA regulation when it supports recommendations for a healthcare professional rather than providing guidance directly to patients. Similarly, FDA’s “general wellness” framework applies to low-risk products that do not reference diagnosis, treatment, or specific diseases.

Whether a particular AI health tool falls within or outside these exemptions is highly fact-specific—and often genuinely unclear. But that ambiguity may become a litigation narrative.  Plaintiffs might argue that an AI health tool should have been submitted to FDA for review and that the developer avoided regulatory scrutiny before launching the product to anyone on the Internet. Whether or not that argument ultimately succeeds, it could resonate with juries.

For defense lawyers, the takeaway is that companies deploying AI health tools should carefully evaluate whether their product’s functionality brings it within FDA’s jurisdiction, and if there is genuine ambiguity, consider engaging with the agency proactively rather than waiting for plaintiffs to make the argument first. And remember Buckman: private enforcement of claimed FDCA violations, such as failure to submit a medical device to FDA, is not allowed.

Building the Defense

What defenses are available to developers of AI health tools? The pending chatbot litigation offers a preview of the arguments defendants are raising—and how courts may receive them.

Service, not product. A threshold defense is that AI health tools are more analogous to the provision of informational or advisory services than to the sale of a tangible product. Courts have long distinguished between products—subject to strict liability—and services, which typically sound in negligence. An AI system that reviews medical records, analyzes symptoms, and offers guidance about whether to seek care arguably performs a function closer to a triage or health advisory service, or even a healthcare provider, than to a manufactured device placed into the stream of commerce. Framed that way, the alleged defect is not a flaw in a product but an alleged failure in the provision of information or judgment, making strict liability an awkward fit at best.

Disclaimer and contract. Terms of use for AI services typically state that users should not rely on outputs as a substitute for professional advice and may include limitation-of-liability and “as-is” warranty disclaimers. As the Blog recently pointed out, the FDA’s newly revamped online adverse event database requires users to execute a signed disclaimer before use. Similar requirements, including arbitration agreements, could be adopted by medical AI providers, if they are not seen as too much of a deterrent to their prospective audience. Such provisions will be central to any defense, but their effectiveness may depend on how the AI responds when users ask directly whether it is providing medical advice—if in-session responses are inconsistent with the terms, plaintiffs may argue the disclaimer was effectively disclaimed. Plaintiffs have also attacked the enforceability of such terms by alleging that sign-up processes use “dark patterns” that prevent meaningful consent. In the U.S., litigation over clickwrap and browsewrap agreements can offer some guidance. In Europe, the EU’s new Product Liability Directive expressly bars contractual waivers of product liability.

Section 230. Section 230 of the Communications Decency Act shields providers from liability for third-party content. But the statute is designed to protect platforms from being treated as the publisher of user-generated speech; it has historically not shielded claims based on a platform’s own design choices or its own content. AI-generated outputs are produced by the model itself rather than supplied by third parties, so the defense may have limited application in the AI health context.

State of the art. Defendants may assert that their methods, standards, and techniques complied with the generally recognized state of the art—pointing to safety processes, red-teaming, and expert advisory councils. Compliance with industry standards is evidence of non-defectiveness, even if not dispositive.

Lack of causation. Causation may be the strongest defense available. Defendants can point to users’ pre-existing conditions, their use of other information sources, and their failure to seek professional care. In the health context, causation will be fact-intensive. Additionally, because AI outputs depend heavily on user inputs—prompts and contextual information—defendants may also argue that user conduct contributed to the outcome.

Misuse and user conduct. Usage policies typically prohibit certain uses and warn users not to rely on AI as a substitute for professional advice. But courts have been skeptical of misuse defenses where the product’s design is viewed as inviting the reliance users are warned against.

Where the Law May Be Headed

AI health tools raise many of the same liability questions already emerging in chatbot litigation, with the added complexity of medical decision-making. Recent AI health tools offer a useful case study, but the questions they raise will recur across the industry as more companies develop AI-powered symptom checkers, triage tools, and wellness assistants.

Are these products? Are they medical devices? Are they services? What duties do developers owe to users who rely on them?

The answers are still being written. Courts are increasingly willing to treat software and AI as products for purposes of strict liability. Legislators—in the EU and now in Congress—are pushing in the same direction. And FDA’s regulatory perimeter may not remain static as consumer-facing AI health tools proliferate.

For defense lawyers, the existing chatbot litigation is worth watching closely. The theories plaintiffs are deploying will translate readily to medical contexts. The defenses being raised will be tested. And the outcomes will shape the landscape for years to come.