Photo of Michelle Yeary

We litigators love a good hearing. Judges asking sharp questions, counsel delivering crisp arguments, everyone believing they’ve advanced the ball. What no one loves—especially our clients—is realizing after the hearing that confidential business information just galloped into the public record. That’s apparently what happened recently in In re Suboxone Buprenorphine/Naloxone Film Products Liability Litigation, 2026 U.S. Dist. LEXIS 2581 (N.D. Ohio Jan. 7, 2026), requiring defendants to move to seal portions of a hearing transcript.

The motion was granted in part and denied in part. The court agreed to seal certain portions of the transcript that contained “confidential business records, trade secrets, and other matters that the companies typically take efforts to protect against public disclosure.” Id. at *10. The court denied the motion where it found the transcript contained public information.  Id. But having the information sealed after it’s been disclosed is not a given. As the court points out,

The courts have long recognized . . . a strong presumption in favor of openness to court records. Overcoming this burden is a heavy one: Only the most compelling reasons can justify non-disclosure of judicial records. The greater the public interest, the greater the burden to justify seal.

Id. at *8.  

Plaintiffs obtain discovery containing a defendant company’s trade secrets or sensitive commercial information all the time—pricing models, internal processes, design specifications, customer data, you name it. Presumably, all such documents are marked “Confidential” under a stipulated protective order. Which is all well and good until plaintiffs’ counsel, eager to make their point, quotes or paraphrases the confidential material in a publicly filed document or in open court. Then, what was protected discovery material is transformed into a judicial record and the “strong presumption” of openness attaches. Unfortunately, dealing with that after the filing or the hearing can sometimes be like trying to put the toothpaste back in the tube. Or, to borrow another well-worn metaphor, you’re slamming the barn door after the horses are halfway to the next county. Only recovering horses is probably easier than trying to undo damage to a company’s competitive position.

We don’t know what the protective order in In re Suboxone looks like, but the decision got us thinking about why it is important to treat confidentiality orders as more than just paperwork to get discovery moving. For defendants—particularly companies with valuable intellectual property or sensitive business data—they are frontline defense mechanisms. A well-drafted confidentiality order should do more than label documents “Confidential.” It should control how and when those documents can be used, especially in public-facing contexts like court filings and hearings. Defendants should consider including key protections like:

  • Advance notice provisions requiring a party to notify the producing party before using confidential material in a filing or at a hearing.
  • Time to object so the producing party can seek sealing, redactions, or other protections before disclosure occurs.
  • Clear obligations on the receiving party not to publicly disclose confidential information without court approval.

Without these provisions, a confidentiality designation may offer little real-world protection when it matters most.

Negotiating advance notice or disclosure provisions, however, can be tricky. From the producing party’s perspective, advance notice is about control and damage prevention. If sensitive documents are about to make a public appearance, they want time to object, seek sealing, or at least brace for impact. From the receiving party’s perspective, however, “advance notice” can sound suspiciously like “advance preview of my litigation strategy.”

And that’s where things get awkward. No one wants to explain why they’re using the document, how they plan to use it, or what argument it supports—especially not weeks before a filing is due. A notice provision that’s too vague is meaningless; one that’s too detailed feels like forced early disclosure of work product. In practice, the sweet spot is usually a notice requirement that’s specific about timing and scope (identify the documents x days before filing/hearing), but intentionally vague about theory. That way, the producing party gets fair warning without the receiving party having to tip its hand. Everyone leaves mildly dissatisfied, which in confidentiality-order negotiations generally means success.

And don’t forget that the notice should be before any public use, not just filings. Courtrooms are public. Transcripts are public by default. And once something is said on the record, it tends to stay said. Even if a transcript is later sealed, the information may already have been accessed, quoted, or summarized elsewhere. The internet, as they say, never forgets.

From a defense perspective, the best sealing motion is the one you never have to file. Preemptive procedures—built into the confidentiality order—allow disputes to be resolved before disclosure. If plaintiffs want to rely on confidential material at a hearing, they give notice. The defendant can then move to seal, request redactions, or ask the court to conduct that portion of the hearing under seal.  You might still lose—but losing in advance is very different from losing after the fact. At least you can warn the client, line up a sealing motion, prepare talking points, and generally move from “oh no, that’s in the public record” to “we saw this coming and planned accordingly.” Advance notice turns a scramble into a strategy session, which in high-stakes confidentiality fights is about as good as it gets.

Judges generally appreciate this approach too. It avoids last-minute emergencies and preserves the court’s interest in transparency while still protecting legitimate confidentiality concerns.

No one sets out to create a “barn door after the horses are out” situation. But it happens when confidentiality orders are treated as boilerplate rather than strategic tools. Think of it this way–if you wouldn’t leave your office door unlocked overnight just because you could call the police in the morning, you shouldn’t rely on after-the-fact sealing motions to protect sensitive business information. Prevention beats damage control every time.

Confidentiality orders aren’t glamorous. They don’t win cases on their own. But when they’re done right, they quietly prevent very expensive problems—and keep those horses exactly where they belong–safely in the barn.

Photo of Bexis

Since it was published in 2011, the third edition of the Federal Judicial Center’s Reference Manual on Scientific Evidence has been the go-to guide for federal judges seeking to sort out scientific testimony, and a major source of non-precedential authority for both sides when arguing motions under Fed. R. Evid. 702.  2011, however, was fifteen years ago.  The FJC and its academic collaborators have been promising an update for several years.

It’s finally here, and you can get a free PDF copy of your very own here.

The Reference Manual on Scientific Evidence, Fourth Edition checks in at 1682 pages, so don’t expect a substantive analysis here.  But here is the table of contents:

Liesa L. Richter & Daniel J. Capra, “The Admissibility of Expert Testimony,” 1

Michael Weisberg & Anastasia Thanukos, “How Science Works,” 47

Valena E. Beety, Jane Campbell Moriarty, & Andrea L. Roth, “Reference Guide on Forensic Feature Comparison Evidence,” 113

David H. Kaye, “Reference Guide on Human DNA Identification Evidence,” 207

Thomas D. Albright & Brandon L. Garrett, “Reference Guide on Eyewitness Identification,” 361

David H. Kaye & Hal S. Stern, “Reference Guide on Statistics and Research Methods,” 463

Daniel L. Rubinfeld & David Card, “Reference Guide on Multiple Regression and Advanced Statistical Models,” 577

Shari Seidman Diamond, Matthew Kugler, & James N. Druckman, “Reference Guide on Survey Research,” 681

Mark A. Allen, Carlos Brain, & Filipe Lacerda, “Reference Guide on Estimation of Economic Damages,” 749

M. Elizabeth Marder & Joseph V. Rodricks, “Reference Guide on Exposure Science and Exposure Assessment,” 831

Steve C. Gold, Michael D. Green, Jonathan Chevrier, & Brenda Eskenazi, “Reference Guide on Epidemiology,” 897

David L. Eaton, Bernard D. Goldstein, & Mary Sue Henifin, “Reference Guide on Toxicology,” 1027

John B. Wong, Lawrence O. Gostin, & Oscar A. Cabrera, “Reference Guide on Medical Testimony,” 1105

Henry T. Greely & Nita A. Farahany, “Reference Guide on Neuroscience,” 1185

Kirk Heilbrun, David DeMatteo, & Paul S. Appelbaum, “Reference Guide on Mental Health Evidence,” 1269

Chaouki T. Abdallah, Bert Black, & Edl Schamiloglu, “Reference Guide on Engineering,” 1353

Brian N. Levine, Joanne Pasquarelli, & Clay Shields, “Reference Guide on Computer Science,” 1409

James E. Baker & Laurie N. Hobart, “Reference Guide on Artificial Intelligence,” 1481

Jessica Wentz & Radley Horton, “Reference Guide on Climate Science,” 1561

We compared this table of contents to the one for the Third Edition, and the differences are in bold.  There is considerable turnover in authorship, with only one chapter unchanged from 2011.  That’s not surprising, since authors who were considered grey-beard experts in their fields fifteen years ago have only gotten greyer (and older) since.  What’s of more import is the addition of three entirely new chapters – on computer science, artificial intelligence, and climate science.

Frankly, we’re surprised and disappointed that there wasn’t a fourth chapter on genetics and genomics, which we view as being of far greater general impact than “climate science,” which is much more of a niche area.

We have skimmed the chapter on AI, because Bexis has been involved with the Lawyers for Civil Justice’s recent submission on the proposed Fed. R. Evid. 707 that would create a possible avenue for admission of computer-generated evidence without a supporting expert.  “Computer-generated” is a broad term and encompasses a lot more than AI (one of the problems with the current draft), but as for AI, the Reference Manual’s new chapter only reinforces our previously stated belief that there is no way AI can be admissible without the proponent offering expert testimony to support it.  The Reference Manual lists multiple questions that are “essential to authenticating and validating the use of AI.”  Reference Manual (4th), at 1514.

  • What is the AI trained to identify, how has it been weighted, and how is it currently weighted?
  • Does the system have a method to transparently identify these answers?  If not, why not?
  • Are the false positive and false negative rates known, if applicable, or hallucination rates?  If so, how do these rates relate to the case at hand?
  • How has AI accuracy been validated, and is the accuracy of the AI updated on a constant basis?
  • What are the AI’s biases? (See “Probing for Bias” questions below.)
  • Is authenticity an issue?
  • How do each of these questions and answers align with how the AI application is being used by the court or proffered as evidence?

Id.  None of these questions is self-evident, nor do we think that any layperson could competently address them.  Nor does the manual:

Judges might also consider that a qualified AI expert or witness ought to be able to credibly answer these questions, or perhaps the expert or witness may not be qualified to address the application at issue.

Id.

As mentioned in connection with the above questions, the Manual also suggests still more questions specifically designed “to probe for bias.”  Id. at 1529.  This set is even more extensive – and more detailed – than the recommended general questions.  Here are only some of the questions recommended by the manual – limited to those that could be applicable to prescription medical product liability litigation:

  • Who designed the algorithm at issue?
  • What process of review was the algorithm subjected to?
  • Were stakeholders – groups likely to be affected by the AI application – consulted in its conception, design, development, operation, and maintenance?
  • What is in the underlying training, validation, and testing data?
  • How has the chosen data been cleaned, altered, or assessed for bias?
  • How have the data points been evaluated for relevancy to the task at hand?
  • Is the data temporally relevant or stale?
  • Are certain groups improperly over- or under-represented?
  • How might definitions of the data points used impact the algorithm analysis?
  • Do the data or weighted factors include real or perceived racial, ethnic, gender, or other sensitive categories of social identity descriptors, or any proxies for those categories?  If so, why?
  • Have engineers and lawyers reviewed the way these criteria are weighted in and by the algorithm as part of the design and on an ongoing basis?  In accord with what process of validation and review?
  • Is the model the state of the art?  How does it compare against any industry standard evaluation metrics or application-specific benchmarks?
  • How might the terms or phrasings in the user-generated prompts bias the systems’ outputs?  Can these prompts be phrased in a more neutral way?
  • Do any of the terms used have alternative meanings?
  • Are the algorithm’s selection criteria known?  Iterative?  Retrievable in a transparent form?  If not, why not?
  • Does the application rely on a neural network?  If so, are the parameters and weights utilized within the neural network known or retrievable?
  • Does the design allow for emerging methodologies that provide for such transparency?  If a transparent methodology is possible, has it been used?  (If not, why not?)
  • If transparency is not possible with state of the art technology, and a less transparent methodology is employed, what is the risk that the system will rely on parameters that are unintended or unknown to the designers or operators?  How high is the risk?  Is the risk demonstrated?  How is the risk mitigated?
  • Is the input query or prompt asking for a judgment, a fact, or a prediction?
  • Is the judgment, fact, or prediction subject to ambiguity in response?
  • Are there situational factors or facts in play that could, or should, alter the algorithm’s predictive accuracy?
  • Is the application one in which nuance and cultural knowledge are essential to determine its accuracy or to properly query it?
  • Are the search terms and equations objective or ambiguous?  Can they be more precise and more objective? If not, why?
  • What is the application’s false positive rate?
  • What is the false negative rate?
  • What information corroborates or disputes the determination reached by the AI application?
  • Is the application designed to allow for real-time assessment?  If not, is operational necessity the reason, or is it simply a matter of design?
  • Is there a process for such assessment that occurs after the fact?
  • Is the AI being used for the purpose for which it was designed and trained?
  • Is the AI being used to inform or corroborate a human decision?  Are humans relying on the AI to decide or to inform and augment human decisions?

Id. at 1529-30.

To the extent – if at all – that proposed Rule 707 contemplates AI being admissible without expert testimony that could answer these questions (to the extent relevant in a given case), we wonder whether the federal judiciary’s right hand (the Rules Committee) has been following what its left hand (the Committee on Science for Judges) has been doing.  We frankly don’t see any plausible avenue (other than consent of the parties) by which AI evidence could possibly be admitted without supporting – and probably extensive supporting – expert testimony that is prepared to address the questions posed in this new chapter of the Reference Manual on Scientific Evidence.

And that’s just one aspect of one chapter.  Have fun reading.

Photo of Steven Boranian

We have spilled a lot of blog ink on Federal Rule of Evidence 702 recently, so it was nice to see a case from our home state of California driving home the importance of following the rules when it comes to expert opinions.  California has a reputation for allowing expert opinions into evidence more permissively than under the Federal Rules, and that reputation is probably well deserved. 

There are, however, rules—and the plaintiff in McDonald v. Zargaryan, No. B329565, 2025 Cal. App. LEXIS 850 (Cal. Ct. App. Dec. 22, 2025), learned the hard way that there can be consequences to playing fast and loose.  In McDonald, the plaintiff claimed injuries to his hip and leg, and later to his neck and groin.  The injuries did not, however, seem to slow him down much:  He continued to snowboard and rollerblade, but nonetheless pursued litigation and disclosed 30 experts under California’s rules requiring the exchange of expert information if any party demands it.

This is where it gets weird.  Sixteen months after disclosing experts, and one week before trial, the plaintiff went to a new doctor, who recommended spine surgery.  As the California Court of Appeal would later describe it, “Until then, no one had proposed spine surgery [and] . . . [s]pine surgery had not been an issue in the case.”  Id. at *2.  The doctor, moreover, had a “professional relationship” with the plaintiff’s lawyer, and when asked whether his lawyer had referred him to the doctor, the plaintiff replied, “I don’t recall, but possibly.  Maybe.  I think so, before the trial.”  Id. at *3.  Hmm.

Despite these curious circumstances, the trial court allowed the (very) late disclosure of the doctor as an expert and denied a motion to exclude the testimony, so long as the plaintiff made the expert available for a deposition, which occurred.  The expert testified, and a jury returned a substantial verdict for the plaintiff.  Id. at *4-*5.

Allowing the testimony was an abuse of discretion, and the Court of Appeal reversed.  As the court explained,

The goal [of expert witness disclosure] is to avoid surprise at trial.  Surprise at trial is unfair.  It also is inefficient.  Surprise at trial is unfair because ambushes, while effective in warfare, are disfavored in court.  For legal disputes, California has replaced free-for-all trial by combat with rules of professionalism and fair play.  Surprise at trial is inefficient because, if both sides know exactly what evidence the trial will produce, they have a better chance of agreeing in advance on the true value of the case. 

Id. at *5-*6 (emphasis in original, citations omitted).  Here, the plaintiff bent the rules beyond the breaking point.  He did not seek the trial court’s permission to add a new expert, which was required, and he had no reasonable justification for the delay.  Trial counsel submitted his own declaration attempting to explain, but the declaration was just a legal brief and was “worthless as a piece of evidence.”  Id. at *10.

There can be valid justifications for late expert designation—sudden unavailability of an expert through death, illness, incapacitation, and “other serious and uncontrollable events.”  Id. at *10.  We can think of other possible reasons too, such as an unexpected change in the case or the plaintiff’s condition.  But none of that occurred here.  Not even close. 

Editor’s note: This post was revised on January 9, 2026, to remove a reference to this opinion being unpublished. The Court of Appeal certified the opinion for publication.

Photo of Eric Alexander

Perhaps driven by fear of retribution for saying what you really think, an indirect method of communication has gained some popularity on the social media platforms of late.  It goes like this:  1) a historical fact or spin on one is presented, such as on a past military conflict or a criminal conviction; and 2) there is a sentence at the end saying something like “This is not a post about [the subject matter discussed directly].”  Readers who think they know what the post was really about can feel special, whereas others may simply enjoy the bedlam that is the comments section on just about any social media post.  Still others may wish the author had opted for directness.  Our post today on this non-social (or asocial) media site concerns a case with claims and counterclaims that, on first blush, seem far from the Blog’s bailiwick.  First impressions, however, can be misleading.

In Eli Lilly & Co. v. Premier Weight Loss of Ind., LLC, No. 1:25-cv-00664-TWP-TAB, 2025 U.S. Dist. LEXIS 268138 (S.D. Ind. Dec. 31, 2025) (“PWL”), manufacturer A sued quasi-manufacturer B over trademark infringement and false advertising under the Lanham Act and its Indiana state law cousin.  Quasi-manufacturer B brought a counterclaim against manufacturer A for defamation based on statements the manufacturer had made to news outlets about its allegations in the lawsuit.  To resolve manufacturer A’s motion to dismiss, the PWL court had to delve into FDA issues that we find interesting and relevant to issues facing drug companies these days.  The connection, perhaps predictable from the names of the parties, is that the quasi-manufacturer was repackaging and selling the manufacturer’s prescription weight loss and diabetes medications.  (For some other posts on the intersection of the Lanham Act and the FDCA, try here, here, here, and here.)  This is not the increasingly common situation where a compounding pharmacy does its thing, because the defendant was not a licensed pharmacy, compounding or otherwise.  Instead, the defendant allegedly took the plaintiff’s “factory-sealed single-dose autoinjector pens containing 0.5 mL of tirzepatide fluid in various strengths,” repackaged the drug into “lower and/or different doses” in “third-party insulin syringes,” and sold those to patients with unapproved labeling and package inserts.  Id. at *3-4.  Before plaintiff sued over the alleged trademark infringement and false advertising, it shared the proposed complaint with a few local news outlets, at least one of which also allegedly received a statement saying, “We will continue to take action to stop these illegal actors and urgently call on regulators and law enforcement to do the same.”  Id. at *4-5.  The allegedly defamatory statements underpinning the counterclaim were:

1. Lilly stated that PWL is an “illegal actor” such that “regulators and law enforcement” should take action against it;

2. Lilly’s Complaint states that PWL is putting its patients’ lives at risk; and

3. Lilly’s Complaint states that PWL breaks apart or cracks open Lilly’s autoinjector pens.

Id. at *7 (internal parentheticals omitted).  Because truthfulness is a complete defense to defamation, by bringing this counterclaim, the quasi-manufacturer opened itself up to an early test of whether its conduct was illegal.

The PWL court first gave some background on the Indiana standards for defamation and the truthfulness defense, and then addressed the counterclaimant’s argument that it was premature to rule on a defense in the context of a motion to dismiss.  We will skip over that stuff to get to the substance of PWL.  Taking the allegedly defamatory third statement first, the court determined that its “gist” or “sting”—terms from the homey Indiana defamation caselaw—was true because the admitted repackaging of a liquid drug from an autoinjector pen into an insulin syringe was “synonymous” with saying the counterclaimant “breaks apart” or “cracks open” the pens to extract the drug.  Id. at *13.  So, the counterclaim was dismissed as to alleged defamation based on that statement.  The second statement required little analysis because the pleadings shed insufficient light on whether the repackaging activities put patients’ lives at risk, such as by compromising sterility.  Id. at *11 & 23-24.  Accordingly, the motion to dismiss was denied on that issue.

The first statement engendered more analysis.  The counterclaimant argued that the statement could not be true unless its conduct violated a criminal statute.  The court rejected this based on Seventh Circuit law interpreting “illegal conduct” to encompass non-criminal acts.  Id. at *14.  Of course, the plaintiff’s claims did allege violations of the Lanham Act.  While evaluating under the “truthfulness” standard instead of the burden of proof applicable to the claims, the court found that “[d]ispensing the contents of Lilly’s medicines into third-party syringes and changing the packaging, labeling, and dosages contained therein, plausibly violates both the language and purpose of the Lanham Act.”  Id. at *19.  The PWL court specifically noted the importance to a trademark holder of preserving the quality control standards for the medications it manufactures.  Id. at *18.  Similar considerations informed the court’s related finding that “PWL’s conduct violates the FDCA,” a finding stated without any plausibility qualifier.  Id. at *20.  The court noted that FDA approval “constitutes approval of the product’s design, testing, intended use, manufacturing methods, performance standards, and labeling [and is] specific to the product,” and held that the counterclaimant’s repackaged products were not covered by the NDA approvals for medications and, thus, needed their own.  Id. at *22-23 (citations omitted).  Those products had not been approved, so their sale violated 21 U.S.C. § 355(a).  So, the first statement was also truthful in calling the counterclaimant an “illegal actor” based on an FDCA violation.  Like we said, this post is not really about trademarks or defamation.  Of course, by filing the counterclaim, the quasi-manufacturer invited this ruling, which will make its defense of the manufacturer’s claims that much harder.  Being hacks for the companies that develop the medications and other innovative medical products in the first place, we are not too troubled by this.

The last part of the PWL decision involved the plaintiff’s argument that its statements, specifically the second one that survived the first part of the motion to dismiss, were subject to the litigation privilege applicable to “all relevant statements made in the course of a judicial proceeding, regardless of the truth or motive behind those statements.”  Id. at *25 (citation omitted).  As far as we can tell, this is a new one for the Blog.  We have had our own cases where we challenged scurrilous statements in plaintiffs’ filings and where Rule 11 sanctions for unsupported positions in those filings were at issue.  We have also written about proceedings to go after plaintiffs’ experts or journal authors for trade libel.  But not this particular animal.  In PWL, it came down to a timing issue.  Passing around a complaint before it is filed, and presumably other statements made at that point about what a not-yet-filed lawsuit will allege, does not implicate the litigation privilege because the statements are not “made in the course of a judicial proceeding.”  Id. at *26-28.  This made us think about other targets of medical product manufacturer efforts to protect against misstatements about their (trademarked) products:  plaintiff lawyer websites and ads.  Just because the lawyers bring one suit or many suits at some point does not mean they can retroactively cloak their pre-suit statements in any litigation privilege.  This part of the post is actually about the blight of plaintiff lawyer websites and ads.  We are certainly not troubled by limiting protection for those potential vehicles for false advertising.

Photo of Michelle Yeary

Today we are talking about the decision in Govea v. Medtronic, Inc., 2025 WL 3467214 (C.D. Cal. Nov. 26, 2025). Plaintiff claimed the case was about off-label promotion. But thanks to the court taking judicial notice of PMA supplements, it’s mainly a case about on-label use and a plaintiff who waited far too long to sue.

The device at issue is an implantable stimulator used to regulate incontinence. It is prescribed when patients do not tolerate more conservative treatments and medications. The process starts with a two-week trial run where only a lead is implanted and the generator is worn outside the body. After a successful trial, the device is fully implanted. Plaintiff was prescribed the device to treat urinary incontinence. Her trial lasted only one day because the lead wire stuck to her bandage. Her physician recommended she go forward with the implant anyway and she did. Plaintiff alleges that after the implantation in 2011, she received no relief, but instead her condition worsened and she experienced painful shocks. She started asking to have the device removed in 2016. It was eventually explanted in 2018, but her surgeon did not completely remove the lead. Plaintiff claims she was unaware that the lead remained until it was revealed by x-ray in January 2024. Id. at *3. Plaintiff filed suit in December 2024 bringing claims for misrepresentation, breach of express warranty, manufacturing defect, and negligence.

At the core of all of plaintiff’s claims are allegations that the manufacturer promoted the device for an off-label use. Plaintiff claimed the device was only approved for fecal incontinence, not urinary. However, the court granted defendant’s request to take judicial notice of multiple PMA supplements and approvals which showed FDA approval of the device for both forms of incontinence. Because urinary incontinence was approved by PMA, “arguing that [the device] is inappropriate for use in that context is, in effect second-guessing the PMA process.” Id. at *9. So, plaintiff’s misrepresentation claims based on that allegation are preempted.

However, the device’s labeling states it is contraindicated in patients who “have not demonstrated an appropriate response to test stimulation.” Id. Plaintiff alleged that her device failed prior to completion of the test stimulation, that she informed defendant’s representative of that fact, and that the representative recommended permanent implantation anyway. Id. at *10. Because the court found that to be a sufficient allegation of promotion for an unapproved use, the misrepresentation claims based on it were not preempted. Likewise, the same allegations were enough for the court to find plaintiff’s breach of express warranty claims not preempted. Id. at *12.

On her manufacturing defect claim, plaintiff failed to allege any violation of a specific FDA requirement for the device. She didn’t even try to rely on CGMPs. Rather, plaintiff relied on allegations that either the device had been improperly implanted or that it must be defective because the wire migrated and could not be fully removed. The former has nothing to do with the manufacture of the device and the latter is res ipsa loquitur—which the Ninth Circuit has rejected. Id. at *11. Therefore, plaintiff’s manufacturing defect claim was expressly preempted.

The court’s final preemption analysis was on the negligence claim. First, the court found that there is no state law duty to refrain from off-label promotion. The prohibition on off-label promotion is solely a function of the FDCA. So, to the extent plaintiff’s negligence claim was based on alleging such a duty, it was an impliedly preempted private attempt to enforce the FDCA. The only negligence claim that survived preemption was based on allegations of misrepresentations made during the off-label promotion. So, the claim has to be premised on false statements, not whether the statements were on or off label.   

While some of plaintiff’s claims survived preemption, for any to survive the statute of limitations challenge, plaintiff would have to significantly re-write or back off several of her current allegations.

According to the complaint, plaintiff experienced pain which she attributed to the device the entire time it was implanted. Further, she alleges she never received any relief of her symptoms from the device, but rather her incontinence worsened.  Id. at *17. She also alleges that she continued to experience pain at the incision site following explant. What the complaint fails to allege is that plaintiff took any steps to follow up with any physician following the explant surgery, or indeed “that she took any other actions that would constitute reasonable diligence.” Id. at *16. She relies exclusively on learning in 2024 that the wire was still implanted as the point in time when she “discovered” her injury.

Plaintiff’s injury didn’t just sneak up on her in 2024. According to her own account, it announced itself early and often. Yet plaintiff waited six years after explant to file suit. The court was unmoved by arguments that she didn’t connect the dots sooner. When a plaintiff alleges immediate and ongoing pain following a medical procedure, the law generally expects some curiosity. At least enough to ask, “Should I look into this?” The statute of limitations doesn’t pause indefinitely while a plaintiff hopes the answer will change. Being on notice doesn’t require knowing every legal theory or scientific detail. It requires awareness of an injury and a possible connection to its cause. Plaintiff had both—and still waited.

Preemption plus time-bar is a powerful one-two punch. Either is often enough; together, they end the fight early.

Photo of Bexis

This “just deserts” story caught our eyes earlier this year – a hot-shot expert witness, on artificial intelligence, no less, got caught with his own hand in the AI cookie jar.  As a result, his credibility was destroyed, and his testimony was excluded.  The litigation leading to Kohls v. Ellison, 2025 WL 66514 (D. Minn. Jan. 10, 2025), concerned a Minnesota anti-deepfake statute.  The plaintiffs were political operatives claiming a First Amendment right to create deepfakes of candidates they opposed.  Id. at *1.  The defendant hired a California-based professor to testify “about artificial intelligence (“AI”), deepfakes, and the dangers of deepfakes to free speech and democracy.”  Id.

The AI expert, however, used AI himself in preparing his material and “included fabricated material in his declaration.”  Id.  Specifically, the would-be expert “admitted that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article.”  Id.  AI had provided “fake citations to academic articles, which [the expert] failed to verify before including them in his declaration.”  Id.  The state sought to submit a belated amendment removing the fictitious citations, but the court was having none of it.  Id.

The AI expert’s AI-based report was excluded in its entirety. 

[T]he Court cannot accept false statements − innocent or not − in an expert’s declaration submitted under penalty of perjury.  Accordingly, given that the [expert] Declaration’s errors undermine its competence and credibility, the Court will exclude consideration of [that] expert testimony.

Id. at *5.  “The irony. . . . a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI − in a case that revolves around the dangers of AI, no less.”  Id. at *3.  Moreover, the expert had committed a cardinal, but very common, sin of a litigation-engaged expert.  He had not lived up to his usual professional standards in reaching his paid opinions:

It is particularly troubling to the Court that [the expert] typically validates citations with a reference software when he writes academic articles but did not do so when submitting the . . . Declaration. . . .  One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles.

Id.  The expert “abdicate[d his] independent judgment and critical thinking skills in favor of ready-made, AI-generated answers.”  Id. at *4.

Even though counsel that hired this AI-dependent expert professed no knowledge of the AI-generated falsehoods in the expert’s declaration, that did not excuse them.  Rule 11 “imposes a ‘personal, nondelegable responsibility’ to ‘validate the truth and legal reasonableness of the papers filed’ in an action.”  Id. (citation and quotation marks omitted).  In this context, attorneys have an obligation “to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.”  Id.

The court ultimately excluded the AI-generated report because the false citations had destroyed the expert’s credibility and required “steep” sanctions:

[The expert’s] citation to fake, AI-generated sources in his declaration . . . shatters his credibility with this Court.  At a minimum, expert testimony is supposed to be reliable.  Fed. R. Evid. 702.  More fundamentally, signing a declaration under penalty of perjury is not a mere formality. . . .  The Court should be able to trust the indicia of truthfulness that declarations made under penalty of perjury carry, but that trust was broken here.

Moreover, citing to fake sources imposes many harms, including wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system. . . .  Courts therefore do not, and should not, make allowances for a party who cites to fake, nonexistent, misleading authorities − particularly in a document submitted under penalty of perjury.  The consequences of citing fake, AI-generated sources for attorneys and litigants are steep.  Those consequences should be no different for an expert offering testimony to assist the Court under penalty of perjury.

Id. at *4-5 (citations and quotation marks omitted).

We think that the court reached the right result in Kohls, but for the wrong reason.  The problem with AI-generated expert testimony is not limited to AI hallucinations.  Instead, it’s deeper and goes to the concept of “expertise” itself.  Who is the expert?  Is it the person who signs the report and is proffered as an expert, or is it whatever AI program the expert used?  It is well established that one expert cannot simply “parrot” the opinions of another.  We’ve written several posts that make this point.  Why should it be any different when the expert blindly parrots something that a black-box AI program spits out, rather than some other expert? 

With that question in mind, we decided to look for other decisions that have addressed experts who used AI to create their submissions.  We think that asking the right questions led to the right answer in In re Celsius Network LLC, 655 B.R. 301 (Bankr. S.D.N.Y. 2023).  Celsius Network was a bankruptcy case applying Rule 702.  The report in question, however, was not written by the expert who signed it.  Rather, it was written by AI, and for that reason it was excluded.

The [expert] Report was not written by [the expert]. Although [he] directed and guided its creation, the 172-page Report, which was generated within 72 hours, was written by artificial intelligence at the instruction of [the expert].  By his own testimony, a comprehensive human-authored report would have taken over 1,000 hours to complete.  In fact, it took [the expert] longer to read [the] report than to generate it.  The Court therefore separately evaluates the [expert] Report. . . .  [T]he Court finds that the . . . Report is unreliable and fails to meet the standard for admission.

Id. at 308.  This AI-generated expert report could not be reliable.

  • “In preparing the report, [the expert] did not review the underlying source material for any sources cited, nor does he know what his team did (or did not do) to review and summarize those materials.”
  • “There were no standards controlling the operation of the artificial intelligence that generated the Report.”
  • “The Report contained numerous errors, ranging from duplicated paragraphs to mistakes in its description of [relevant parameters].”
  • “The [expert] Report was not the product of reliable or peer-reviewed principles and methods.”

Id. at 308.  Thus, Celsius Network determined “that the Report does not meet the standard set forth under Rule 702.”  Id. at 309.

In an earlier case, an expert offered analysis of data that he had fed into a set of algorithms and “click[ed] ‘Go.’”  In re Marriott International, Inc., Customer Data Security Breach Litigation, 602 F. Supp.3d 767, 787 (D. Md. 2022).  That was not enough to be admissible under Rule 702.

Algorithms are not omniscient, omnipotent, or infallible.  They are nothing more than a systematic method of performing some particular process from a beginning to an end.  If improperly programmed, if the analytical steps incorporated within them are erroneous or incomplete, or if they are not tested to confirm their output is the product of a system or process capable of producing accurate results (a condition precedent to their admissibility), then the results they generate cannot be shown to be relevant, reliable, helpful to the fact finder, or to fit the circumstances of the particular case in which they are used. . . .  [The expert’s] willingness to rely on his own untested conclusion that his model could reliably be applied to the facts of this case is insufficient to meet the requirements of Rule 702.

Id. (footnote omitted).

A similar state-law case, Matter of Weber, 220 N.Y.S.3d 620 (N.Y. Sur. 2024), is from a state trial court.  Although it’s not entirely clear that the damages opinions excluded in Weber were even those of a qualified expert, the decision treated them as such and found them “inherently unreliable.”  Id. at 633.  They had been generated by an AI program (CoPilot).  The Weber expert simply parroted whatever the AI program generated:

Despite his reliance on artificial intelligence, [the expert] could not recall what input or prompt he used to assist him. . . .  He also could not state what sources [AI] relied upon and could not explain any details about how [the AI] works or how it arrives at a given output.  There was no testimony on whether these [AI] calculations considered any fund fees or tax implications.

Id.  The would-be expert nonetheless claimed that AI use was “generally accepted” in the relevant field.  Id. at 634 (New York state courts follow Frye).

The court had “no objective understanding as to how [the AI program] works,” and thus tried it out itself.  The program gave three different answers to what should have been a simple mathematical calculation – and none of those matched the supposed expert’s number.  Id. at 633.  “[T]he fact there are variations at all calls into question the reliability and accuracy of [AI] to generate evidence to be relied upon in a court proceeding.”  Id.  Interestingly, when asked “are your calculations reliable enough for use in court,” the program responded that, standing alone, it was probably not ready for legal prime time.

[The AI] responded with “[w]hen it comes to legal matters, any calculations or data need to meet strict standards. I can provide accurate info, but it should always be verified by experts and accompanied by professional evaluations before being used in court. . . .  ”  It would seem that even [the program] itself self-checks and relies on human oversight and analysis.  It is clear from these responses that the developers of the [AI] program recognize the need for its supervision by a trained human operator to verify the accuracy of the submitted information as well as the output.

Id. at 634.  To prevent “garbage in, garbage out . . . a user of . . . artificial intelligence software must be trained or have knowledge of the appropriate inputs to ensure the most accurate results.”  Id. at 634 n.25.

Weber thus rejected the testimony, citing “due process issues” that “arise when decisions are made by a software program, rather than by, or at the direction of a [human].”  Id. at 634.

[T]he record is devoid of any evidence as to the reliability of [the AI program] in general, let alone as it relates to how it was applied here.  Without more, the Court cannot blindly accept as accurate, calculations which are performed by artificial intelligence.

Id.  Weber made several “findings” with respect to AI:

  • AI is “any technology that uses machine learning, natural language processing, or any other computational mechanism to simulate human intelligence, including . . . evidence creation or analysis, and legal research.”
  • “‘Generative A.I.’ [i]s artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query).”
  • “[P]rior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence.”
  • AI generated evidence “should properly be subject to a Frye hearing prior to its admission.”

Id. at 635.

Concord Music Group, Inc. v. Anthropic PBC, 2025 WL 1482734 (Mag. N.D. Cal. May 23, 2025), is another instance of an expert exposed by an AI hallucination – “a citation to an article that did not exist and whose purported authors had never worked together.”  Id. at *3.  The court considered the infraction “serious,” but not as “grave as it first appeared.”  Id.

[Proponent’s] counsel protests that this was “an honest citation mistake” but admits that Claude.ai was used to “properly format” at least three citations and, in doing so, generated a fictitious article name with inaccurate authors (who have never worked together) for the citation at issue.  That is a plain and simple AI hallucination.

Id. (citation omitted).  However, “the underlying article exists, was properly linked to and was located by a human being using Google search.”  Id.  For that reason, Concord did not view the situation as one where “attorneys and experts have abdicated their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers.”  Id. (indirectly quoting Kohls).  Still, the existence of the hallucination was fishy enough that the relevant paragraph from the expert report was stricken:

It is not clear how such an error − including a complete change in article title − could have escaped correction during manual cite-check by a human being. . . .  [The court’s] Civil Standing Order requires a certification “that lead trial counsel has personally verified the content’s accuracy.” Neither the certification nor verification has occurred here.

Id.  Further, as in Kohls, “this issue undermines the overall credibility of [the expert’s] written declaration, a factor in the Court’s conclusion.”  Id.  Cf. Shoraka v. Bank of Am., N.A., 2023 WL 8709700, at *3 (C.D. Cal. Dec. 1, 2023) (excluding non-AI expert report that “consist[ed] almost entirely of paragraphs . . . simply copied and pasted from online sources”).

On the other hand, we have Ferlito v. Harbor Freight Tools USA, Inc., 2025 WL 1181699 (E.D.N.Y. April 23, 2025).  The plaintiff’s expert, lacking formal credentials, claimed considerable practical experience.  Among other reasons, the defendant sought to exclude his report because “after completing the report, he entered a query into ChatGPT about the best way to secure a hammer head to a handle, which produced a response consistent with his expert opinion.”  Id. at *1.  Ferlito denied exclusion because the expert had only used AI “after he had written his report to confirm his findings” – findings initially “based on his decades of experience.”  Id. at *4.  The expert “professed to being ‘quite amazed’ that the ‘ChatGPT search confirmed what [he] had already opined’” and claimed, “that he did not rely on ChatGPT.”  Id.  Taking that testimony at face value, Ferlito allowed the expert opinions:

There is no indication that [the expert] used ChatGPT to generate a report with false authority or that his use of AI would render his testimony less reliable.  Accordingly, the Court finds no issue with [the expert’s] use of ChatGPT in this instance.

Id.  Ferlito is not necessarily inconsistent with the previous decisions because of the expert’s denial that he had used AI to generate the actual report.

Considering this precedent, while some courts addressing Rule 702 issues do seem distracted by AI’s propensity for hallucinations, most of them understand the more basic problem with expert use of AI – that the opinions are no longer those of the experts themselves.  Rather, when experts use AI to generate their reports, they have reduced themselves to “parrot” status, blindly reciting whatever the AI program generates.  As such, AI-generated expert reports should not be admissible without some means of validating the workings of the AI algorithms themselves, which we understand is not possible in most (if not all) large language models.

Photo of Bexis

As 2025 came to an end, we presented our loyal readers with our annual review of our ten worst decisions of the past year and our ten best decisions of the past year.

Now, in the new year, as we do each year, we’re pleased to announce that four (we hope) of your bloggers – Bexis, Steven Boranian, Stephen McConnell, and Lisa Baird – will be presenting a free 90-minute CLE webinar on “The Good, the Bad and the Ugly: The Best and Worst Drug/Medical Device and Vaccine Decisions of 2025” on Wednesday, January 14th at 12 p.m. EST to provide further insight and analysis on these cases.

This program is presumptively approved for 1.5 CLE credits in California, Connecticut, Illinois, New Jersey, New York, Pennsylvania, Texas and West Virginia. Applications for CLE credit will be filed in Colorado, Delaware, Florida, Georgia, Ohio, and Virginia. Attendees who are licensed in other jurisdictions will receive a uniform certificate of attendance, but Reed Smith only provides credit for the states listed. Please allow 4-6 weeks after the program to receive a certificate of attendance.

FOR VIEWERS OF RECORDED ON-DEMAND PROGRAMS: To receive CLE credit, you will need to notify Learning & Development CLE Attendance once you have viewed the recorded program on-demand.

**Please note – CLE credit for on-demand viewing is only available in California, Connecticut, Illinois, New Jersey, New York, Pennsylvania, Texas, and West Virginia. Credit availability expires two years from the date of the live program.

The program is free and open to anyone interested in tuning in, but you have to sign up in advance here.

Photo of Eric Hudson

Happy new year, and welcome to 2026. While we may still be pondering the meaning of auld lang syne or waxing philosophical about the new year, we’ll quickly move on and get to work defending our clients. That’s what we do as defense hacks, and kudos to all of you for doing it so well.

We’ve written many times about plaintiffs who try (and fail) to plead injury by alleging hypothetical risks, speculative future harm, or buyer’s remorse untethered to actual loss. Today’s dismissal of a putative class action from the Central District of California is a new year’s reminder that Article III and statutory standing remain stubbornly real requirements.  Druzgalski v. CVS Health Corp., 2025 U.S. Dist. LEXIS 265766 (C.D. Cal. Dec. 23, 2025).

Photo of Bexis

Today’s guest post is by Reed Smith’s Jamie Lanphear. Like Bexis, she follows tech issues as they apply to product liability litigation. In this post she discusses a pro-plaintiff piece of legislation recently introduced in Congress that would overturn the current majority rule that electronic data is not considered a “product” for purposes of strict liability, and impose product status on such data nationwide. As always, our guest posters deserve 100% of the credit (and any blame) for their writings.

**********

Taking a page from the EU playbook, Senators Dick Durbin and Josh Hawley recently introduced the AI LEAD Act, a bill that would define AI systems as “products” and establish a federal product liability framework for such systems.  The structure is strikingly reminiscent of the EU’s new Product Liability Directive (“PLD”), which we previously unpacked at length here, here, and here.  Unlike the EU (aside from Ireland), however, the United States has a common-law system and a history of considering product liability to be a creature of state law.  Federal legislation to create uniform, substantive product liability principles for the entire country appears unprecedented, although an attempt forty-some years ago was filibustered to death.

Like the attempt in the 1980s, this bill is unlikely to pass in its current form (read it and you’ll understand why), but its introduction still matters.  It signals a policy tilt toward treating AI as a product, an issue U.S. courts have been wrestling with.

Before diving in, a word on tone.  The bill’s findings assert that AI systems, while promising, have already caused harm, and cite tragic incidents involving teenagers who allegedly died after “being exploited” by AI chatbots.  S. 2937, 2025, p. 2.  That is likely a nod to ongoing AI chatbot litigation—where no court has yet adjudicated liability.  Building a federal framework on such “findings,” without established liability, illustrates how current sentiment could shape future law—even if this bill never becomes one.

Key Provisions of the AI LEAD Act

The bill lifts familiar product liability doctrines into an AI-specific statute, then tweaks them in ways that matter.  The devil, as always, is in those tweaks.

  • First, causes of action. The bill would create four routes to liability: negligent design, negligent failure to warn, breach of express warranty, and strict liability for a “defective condition [that is] unreasonably dangerous.”  Conspicuously absent is a standalone manufacturing defect claim.  And unlike most state court regimes that parse strict liability by defect type, the bill would package strict liability into a single “defective condition” bucket, raising the obvious question of what counts as “defect” in a software context.
  • Second, noncompliance as defect. Under the bill, noncompliance with applicable safety statutes or regulations would deem a product defective with respect to the risks those rules target.  Compliance, by contrast, would be only evidence—it would not preclude a finding of defect.  This biased provision resembles how the new EU PLD treats noncompliance with safety regulations, though the LEAD Act is more aggressive in its approach, establishing defect rather than merely presuming it in the face of noncompliance.
  • Third, nontraditional harms. “Harm” would include not only personal injury and property damage but also “financial or reputational injury” and “distortion of a person’s behavior that would be highly offensive to a reasonable person.”  S. 2937, Sec. 3(8).  That expansion raises more questions than answers.  What exactly is “behavioral distortion”?  How is it shown?  And how would reputational injury be defined within the context of AI products?
  • Fourth, circumstantial proof of defect. Courts could infer defect from circumstantial evidence where the harm is of a kind that “ordinarily” results from a product defect and is not solely due to non-defect causes.  That is a familiar evidentiary concept in product cases involving shattered glass and exploding widgets, but translating “ordinarily” to AI, where baseline failure modes, expected outputs, and user modifications are still being defined, will be harder.  What “ordinarily” happens with a large model depends on the training corpus, the guardrails, the deployment environment, and the prompt.  In other words, results can vary widely, so it’s hard to say that any particular outcome is ordinary.  That may explain why “ordinarily” would expand liability beyond the usual state-law “malfunction theory” formulation, which requires plaintiffs to exclude reasonable secondary causes. 
  • Fifth, liability for deployers. The law would extend liability to deployers, not just developers.  “Deployers”—those who use or operate AI systems for themselves or others—could be liable as developers if they substantially modify the system (i.e., make unauthorized or unanticipated changes that alter the system’s purpose, function, or intended use) or if they intentionally misuse it contrary to intended use and proximately cause harm.  Separately, if the developer is insolvent, beyond the reach of the court, or otherwise unavailable, a deployer could be held liable to the same extent as the developer, with indemnity back against the developer if feasible.  That tracks the EU trend of extending exposure across the supply chain.
  • Sixth, a federal cause of action and limited preemption. The bill would create a federal cause of action that could be brought by the U.S. Attorney General, state AGs, individuals, or classes.  It would allow injunctive relief, damages, restitution, and the recovery of reasonable attorneys’ fees and costs.  On preemption, the bill would supersede state law only where there is a conflict, while expressly allowing states to go further.  That is not a clean sweep of state law; it is a floor with room for states to set a higher ceiling.
  • Seventh, a foreign-developer registration hook. Foreign developers would need a designated U.S. agent for service of process before making AI systems available in the U.S., with a public registry maintained by DOJ and injunctive enforcement for noncompliance.

The Bigger Picture: Software is Marching Toward “Product”

The AI LEAD Act fits a global trend of treating software and AI as products subject to strict liability.  The EU’s rebooted PLD makes this explicit.  This bill points in the same direction and, in places, pushes harder.  That matters because, as the Blog discussed in a previous post, U.S. courts traditionally treated software as a service, which often kept strict liability theories off the table.  Recent decisions, however, have nudged in the other direction, allowing product liability claims to proceed against software and AI systems.  Bexis just finished a law review article on this subject.  A federal statute that codifies AI as a “product” would accelerate that shift, harmonize some rules, and upend others.

Conclusion: What’s Next

While unlikely to pass as written, the AI LEAD Act is further evidence that AI and software are entering a new phase in the world of product liability law.  The bill reflects a growing interest in regulating AI through a product liability lens.  For companies developing or deploying AI, the practical takeaway at this stage is simple: keep watching.  Whether or not the AI LEAD Act advances, the center of gravity is moving toward treating at least some AI functionality like a product.

Photo of Michelle Yeary

Plaintiffs sometimes treat an MDL like a long layover—stretch their legs, grab a coffee, and assume that once they board the flight back to their home court, the airport rules no longer apply. Surprise! The TSA of civil procedure has a long memory, and your boarding pass still has the MDL stamp on it. Procedural orders entered in an MDL simply don’t evaporate when a case is remanded or transferred for trial. They follow the case home. And if they’re ignored, trial courts can—and do—dismiss cases for failure to comply.

Of all the things plaintiffs’ counsel need to track, there is one item that really should not fall through the cracks—whether their clients are still alive. This is not gallows humor. It’s basic federal civil procedure. Rule 25 requires a motion to substitute a proper party-plaintiff within ninety days of the filing of a suggestion of death. But Rule 25 does not set a deadline for the filing of a suggestion of death. Unfortunately, in MDLs with hundreds or thousands of cases it becomes dangerously easy for individual plaintiffs to become names on a spreadsheet rather than people whose status needs monitoring. Then months (or even years) go by before counsel and the court learn that the MDL inventory is full of deceased plaintiffs and procedurally defective cases. And since that can distort settlement metrics, bellwether pools, and case valuations, some MDL courts enter additional requirements to force plaintiffs’ counsel to keep track of their clients.

Such a pretrial order (PTO) was entered in the Bair Hugger MDL. In addition to Rule 25, it required plaintiffs’ counsel to file suggestions of death within ninety days of the death of a plaintiff or risk dismissal with prejudice. The MDL court, in fact, dismissed several plaintiffs for failing to file timely suggestions of death. See post here. But the plaintiff in Robinson v. 3M Company, 2025 U.S. Dist. LEXIS 263427 (M.D. Fla. Dec. 22, 2025), apparently thought that because the MDL judge didn’t personally dismiss their case before transferring it out of the MDL, the slate must be clean. That’s a bold strategy, Cotton.

Plaintiff Robinson passed away on June 4, 2025. So, to comply with the MDL PTO, plaintiff needed to file a suggestion of death by September 2, 2025. Plaintiff’s counsel did not, despite learning of the death on August 22nd.  Instead, defendants filed a suggestion of death on September 16th upon independently learning of the death and then moved to dismiss the case. Id. at *3-4.
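
For readers who want to check the math, here is a minimal sketch of the PTO’s ninety-day clock in Python (the dates come from the opinion; the code itself is just our illustration, not anything in the court’s order):

from datetime import date, timedelta

death = date(2025, 6, 4)               # date of plaintiff's death (from the opinion)
deadline = death + timedelta(days=90)  # PTO: suggestion of death due within ninety days
print(deadline.isoformat())            # prints 2025-09-02, i.e., September 2, 2025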

Plaintiff’s main argument against dismissal was that the MDL PTO did not apply to remanded cases. However, that ignores that the Robinson case was not remanded. Because it was directly filed in the MDL, it was “transferred” to Florida for trial, not “remanded.” The PTO itself explicitly stated that it applied to all directly filed cases and the order transferring the case explicitly incorporated “Selected Orders” filed in the MDL, including the order regarding suggestions of death. Id. at *5-7.

Therefore, a suggestion of death should have been filed within 90 days of plaintiff’s death—and not, as plaintiff argued, within 90 days of plaintiff’s counsel learning of the death. “The strict deadline is intended to ensure the timely progression of litigation and to prevent delays arising from counsel’s lack of diligence or communication.” Id. at *7. Plaintiff offered no excuse for not having a procedure in place to communicate with their clients at least every ninety days, which would have enabled them to comply with the PTO. Nor was plaintiff entitled to ninety days from the filing of the suggestion of death to file a motion to substitute. Because plaintiff failed to comply with the PTO’s threshold requirement, defendant was entitled to move to dismiss.  Id. at *8-9.

But here is where the transferee court cut the tardy plaintiff a little slack. The MDL PTO called for a dismissal with prejudice, but Eleventh Circuit precedent favored dismissal without prejudice in this situation. So that is what the court ordered. Id. at *10-11.

While the distinction between remand and transfer played a role here, nobody should assume that in either scenario the case reboots to Level One. The receiving trial court inherits the case as it stands and courts routinely hold that MDL orders remain binding post-remand/transfer. Think of it less like a reset and more like a baton pass. MDL courts issue real orders with real consequences. And when your case goes home, those orders pack their bags and come with you. So don’t be shocked when a trial court enforces them. The only surprising thing would be if it didn’t.