Photo of Lisa Baird

Yesterday we did our annual best of/worst of CLE, “The Good, the Bad and the Ugly: The Best and Worst Drug/Medical Device and Vaccine Decisions of 2025”.  It was good fun for us presenters and hopefully at least mildly educational and entertaining for the audience.  (If you missed it, the video replay will be available later today at this link, and you can get CLE credit to boot.)

One thing that was striking about this year’s countdown was how many opinions involved expert exclusion and Federal Rule of Evidence 702—7 out of 20 cases.  Expert exclusion issues are a key part of our practice, of course, but these cases made their respective lists because they either recognized that the 2023 amendments to Rule 702 intended to halt overly permissive, let-it-all-in-and-let-the-jury-sort-it-out approaches to expert testimony, or they flagrantly got it wrong.

Today’s post is about another Rule 702 opinion, and in particular its Georgia state court analogue, OCGA § 24-7-702(b).  Sterigenics U. S., LLC v. Mutz, 923 S.E.2d 176 (Ga. Ct. App. 2025) is a little bit older (October 31, 2025) and involves expert testimony on general causation in a so-called “toxic tort” case involving ethylene oxide (EtO) emissions from a sterilization facility.  But it has application to our pharmaceutical and medical device area, so it is worth a closer look.

The appeals in Sterigenics arose from eight bellwether cases in which plaintiffs alleged that exposure to EtO caused various cancers and birth defects.  The trial court had granted in part and denied in part motions to exclude expert testimony and for summary judgment, prompting cross-appeals from both sides.

As you would expect, in addition to establishing exposure to the allegedly harmful agent, plaintiffs in Georgia must offer proof of general causation — that exposure to a substance is capable of causing a particular injury or disease — and proof of specific causation — that exposure to the substance under the circumstances of the case contributed to the plaintiff’s illness or disease.

The core question before the Court was whether Georgia courts should adopt the Eleventh Circuit’s framework for evaluating the reliability of expert testimony on general causation under Georgia Rule 702(b), which is materially identical to Federal Rule of Evidence 702.  

The Court held, as a matter of first impression, that Georgia courts should indeed apply the Eleventh Circuit’s approach to Rule 702 and its approach to general causation in toxic tort cases derived from McClain v. Metabolife Intl., Inc., 401 F.3d 1233 (11th Cir. 2005).

On Rule 702, Sterigenics said all the right things:  Trial courts must act as gatekeepers and assess the reliability of proposed expert testimony; they must consider whether the expert’s methodology is sufficiently reliable; and the proponent of the expert testimony bears the burden of establishing that reliability or the testimony will not be admitted.

With the Rule 702 parameters established, the Court agreed with McClain’s approach of dividing toxic tort cases into two categories:

  • Category One: Cases in which the medical community routinely and widely recognizes that the chemical at issue is both toxic and causes the type of harm alleged (e.g., asbestos causing mesothelioma).  In such cases, courts need not undertake an extensive analysis of expert testimony regarding general causation (whether that agent is capable of causing the alleged harm to humans at the exposure level established), and the focus shifts to specific causation (did the agent cause the harm in this instance).
  • Category Two: Cases where the medical community does not generally recognize the agent as both toxic and causing the alleged injury.  Here, plaintiffs must establish general causation through reliable expert testimony, with courts paying careful attention to the expert’s dose-response analysis and whether a harmful level of exposure has been identified.

The trial court had adopted a “third way,” treating EtO as falling between the two categories and admitting expert testimony that “any exposure” to EtO could cause harm without requiring a specific dose-response relationship.  The Georgia Court of Appeals rejected this approach, clarifying that, while precise quantitative dose-response evidence is not required, experts must lay a reliable groundwork for determining the dose-response relationship or otherwise establish causation through accepted scientific methodologies.

On remand, the trial court must first determine whether EtO is routinely and widely recognized by the medical community as both toxic and causative of the alleged harms. If so, the focus shifts to specific causation (and the reliability of expert testimony regarding same).  If not, the court must rigorously assess the reliability of the plaintiffs’ general causation experts, including their dose-response analysis and consideration of alternative methodologies.

We applaud Georgia to the extent it has better aligned state practice with federal standards and reinforced the importance of rigorous judicial scrutiny of expert causation testimony.

Photo of Stephen McConnell

This is a defense blog.  Are we biased? Yes, we are. We come by that bias honestly, via temperament, principle, and client loyalty. We are happy to report on defense wins. If we report at all on plaintiff wins, it will be grudgingly and typically accompanied by heaping helpings of regrets and criticisms. 

Have we occasionally said unkind things about plaintiff lawyers? Sure. As private eye Philip Marlowe said in The Big Sleep, “I don’t mind if you don’t like my manners.  They’re pretty bad.  I grieve over them during the long Winter evenings.” Look, we hate asymmetrical discovery, we hate phony moral posturing, and we hate blatant forum shopping. But we do not hate plaintiff lawyers.  In fact, we count many plaintiff lawyers as respected colleagues and even good friends.  Of course, we fight hard against each other. We do not mind the occasional knife fight versus our foes on the other side of the v, just as long as a knife does not land in our back. Professionalism and civility should prevail. The plaintiff lawyers we battle are usually the best of the best. They are smart, creative, and relentless. When we prosecuted cases for the U.S. Attorney’s office in C.D. Cal., we had to admit that the criminal defense attorneys were better at cross-examination than we were. One reason for that was that in some cases cross-examination was all the defense had. Still, skill is skill, and must get its due. On the civil side, plaintiff lawyers are superb at conjuring up moral dramas or even romance sagas (the defendant seduced my client, lied to my client, hurt my client, and now is abandoning my client). A couple of the leading lights of the plaintiff MDL bar are infuriatingly good at making their endless discovery demands sound almost reasonable.

In any event, maintaining cordial relations with the other side can redound to our clients’ benefit in terms of courtesies and compromises. Occasionally, plaintiff lawyers have given us a heads-up on upcoming litigation. We’ve attended a couple of plaintiff bar events, and they are just as substantive and far more fun than defense bar events. We know at least one drug company GC who wonders whether he should have gone the plaintiff route, if only to mix more excitement into his life.

Last week, a plaintiff lawyer sent us a law review article he had written.  The plaintiff lawyer was Michael Gallagher. He is with Morgan & Morgan (“For the People”). His article is entitled “Snap Removal and the Absurdity Doctrine.” It appears in 55 U. Mem. L. Rev. 915 (Summer 2025). The article is well-written. It is short and clear. It certainly has a point of view.  As we have mentioned before more than once, law review articles are usually helpful to the extent they set forth the governing law, and much less helpful when they start telling you how things ought to be. That is true with Gallagher’s article, too, though it is helpful to get a preview of the plaintiff-side argument.

Let’s go back to first principles for a moment.  28 U.S.C. section 1441(b) establishes removal based on federal diversity jurisdiction, and subsection (2) provides that a civil action otherwise removable based on diversity “may not be removed if any of the parties in interest properly joined and served as defendants is a citizen of the State in which such action is brought.” That is the forum defendant rule. The underlying theory is that the defense does not need to be in federal court to protect against local bias if one of the defendants is local. That theory makes us think of the last line in Hemingway’s The Sun Also Rises: “Isn’t it pretty to think so?”  The theory simply does not work in reality.  A Philly jury will still clobber an out-of-state defendant company even if a codefendant retailer is local. Heck, a Philly jury will clobber a company defendant that is located down the street from the courthouse.  Local-schmocal.

From the perspective of corporate tort defendants, the real problem with state courts is that they too often lack either the appetite or resources to get rid of junk claims and junk science. Whether that is because of different procedures (e.g., FRE 702 vs Frye or whatever it is that California, Illinois, or Pennsylvania are doing), or elected judges or something else, being in federal as opposed to state court considerably changes the value of a case. Both sides know this. It is no wonder that a corporate tort defendant will seize upon any colorable basis to remove a case to federal court. It is no wonder that a plaintiff lawyer will seek to remand the case to state court.

Where does snap removal enter the picture?  Surely you noticed that section 1441(b)(2) precludes removal “if any of the parties in interest properly joined and served as defendants” is local. Snap removal happens when a defendant removes a case to federal court before the local defendant has been served. It is a race. When the plaintiff loses the race, the plaintiff cries foul and argues that snap removal is a dirty trick, or at least pure sophistry.

Or, as in Gallagher’s article, the plaintiff argues that the result is “absurd.” The article does a good job of collecting cases (all district courts) treating the “properly joined and served” language as a source of “mischief by defendants.” The article also discusses the absurdity doctrine as a “longstanding canon of statutory interpretation,” and cleverly lists Justices Brennan, Rehnquist, Stevens, O’Connor, Scalia, Kennedy, Sotomayor, Gorsuch, and Kavanaugh among those jurists who have recognized and applied the absurdity doctrine. That is a distinguished and varied group. But none of those jurists said that snap removal is absurd. More to the point, and as Gallagher’s article acknowledges, “Three circuits blessed this practice, while two circuits worried about it.”  No circuit court has read the practice out of the law. By our count, three circuits (Second, Third, and Fifth) have explicitly blessed snap removals, and another (Sixth) seems to have smiled upon it, albeit in a footnote.   

Why has the defense had the better of the argument on snap removal?  As Justice Kagan famously said, “we are all textualists now.”  The plain language of section 1441(b)(2) supports snap removal. To forbid snap removals would require an amendment of the law, which the Court cannot do, and which Congress has not seen fit to do.  Nor is there anything inherently “absurd” about snap removal.  As Gallagher’s article says, section 1441 was amended in 1948 to add the “properly joined and served” language to combat gamesmanship by plaintiffs. Plaintiffs were steering clear of federal court by naming local defendants without serving them – because, all along, the plaintiffs had no real interest in going after the hapless local.

While we do not agree with Gallagher’s conclusion, we are grateful for his article. It is an honest and intelligent article. It cites the appropriate cases. It rehearses the expected counterarguments (snap removal is not absurd at all, or is not all that absurd, or it is up to Congress to fix the statute).  Gallagher certainly knows this topic well. Indeed, he has written a couple of other articles against snap removal. If you want to get ready to do battle on snap removal, reading Gallagher’s articles should be part of your preparation.

Not surprisingly, the scriveners on this highly biased defense blog have also written some other articles on snap removal.  For example, Bexis back in 2019 discussed testimony before the Advisory Committee on the Civil Rules on whether anything needed to be done about snap removal. More recently, last month, the same ink-stained wretch penning this blogpost summarized a N.D. Fla. case that upheld snap removal.  In that case, the court held that the use of snap removal did not bump against the “absurdity bar.” We do not think there is a level of absurdity or gamesmanship that would permit a court to ignore or rewrite 1441(b)(2), but it is clear that mere frustration of a plaintiff litigation tourist’s visit to state court is not it.

Photo of Michelle Yeary

We litigators love a good hearing. Judges asking sharp questions, counsel delivering crisp arguments, everyone believing they’ve advanced the ball. What no one loves—especially our clients—is realizing after the hearing that confidential business information just galloped into the public record. That’s apparently what happened recently in In re Suboxone Buprenorphine/Naloxone Film Products Liability Litigation,  2026 U.S. Dist. LEXIS 2581 (N.D. Ohio Jan. 7, 2026), requiring defendants to move to seal portions of a hearing transcript.

The motion was granted in part and denied in part. The court agreed to seal certain portions of the transcript that contained “confidential business records, trade secrets, and other matters that the companies typically take efforts to protect against public disclosure.” Id. at *10. The court denied the motion where it found the transcript contained public information.  Id. But having the information sealed after it’s been disclosed is not a given. As the court points out,

The courts have long recognized . . . a strong presumption in favor of openness to court records. Overcoming this burden is a heavy one: Only the most compelling reasons can justify non-disclosure of judicial records. The greater the public interest, the greater the burden to justify seal.

Id. at *8.  

Plaintiffs obtain discovery containing a defendant company’s trade secrets or sensitive commercial information all the time—pricing models, internal processes, design specifications, customer data, you name it. Presumably, all such documents are marked “Confidential” under a stipulated protective order. Which is all well and good until plaintiffs’ counsel, eager to make their point, quotes or paraphrases the confidential material in a publicly filed document or open court. Then, what was protected discovery material is transformed into a judicial record and the “strong presumption” of openness attaches. Unfortunately, dealing with that after the filing or the hearing can sometimes be like trying to put the toothpaste back in the tube. Or, to borrow another well-worn metaphor, you’re slamming the barn door after the horses are halfway to the next county. Only recovering horses is probably easier than trying to undo damage to a company’s competitive position.

We don’t know what the protective order in In re Suboxone looks like, but the decision got us thinking about why it is important to think of confidentiality orders as more than just paperwork to get discovery moving. For defendants—particularly companies with valuable intellectual property or sensitive business data—they are frontline defense mechanisms. A well-drafted confidentiality order should do more than label documents “Confidential.” It should control how and when those documents can be used, especially in public-facing contexts like court filings and hearings. Defendants should consider including key protections like:

  • Advance notice provisions requiring a party to notify the producing party before using confidential material in a filing or at a hearing.
  • Time to object so the producing party can seek sealing, redactions, or other protections before disclosure occurs.
  • Clear obligations on the receiving party not to publicly disclose confidential information without court approval.

Without these provisions, a confidentiality designation may offer little real-world protection when it matters most.

Negotiating advance notice or disclosure provisions, however, can be tricky. From the producing party’s perspective, advance notice is about control and damage prevention. If sensitive documents are about to make a public appearance, they want time to object, seek sealing, or at least brace for impact. From the receiving party’s perspective, however, “advance notice” can sound suspiciously like “advance preview of my litigation strategy.”

And that’s where things get awkward. No one wants to explain why they’re using the document, how they plan to use it, or what argument it supports—especially not weeks before a filing is due. A notice provision that’s too vague is meaningless; one that’s too detailed feels like forced early disclosure of work product. In practice, the sweet spot is usually a notice requirement that’s specific about timing and scope (identify the documents x days before filing/hearing), but intentionally vague about theory. That way, the producing party gets fair warning without the receiving party having to tip its hand. Everyone leaves mildly dissatisfied, which in confidentiality-order negotiations generally means success.

And don’t forget that the notice should be before any public use, not just filings. Courtrooms are public. Transcripts are public by default. And once something is said on the record, it tends to stay said. Even if a transcript is later sealed, the information may already have been accessed, quoted, or summarized elsewhere. The internet, as they say, never forgets.

From a defense perspective, the best sealing motion is the one you never have to file. Preemptive procedures—built into the confidentiality order—allow disputes to be resolved before disclosure. If plaintiffs want to rely on confidential material at a hearing, they give notice. The defendant can then move to seal, request redactions, or ask the court to conduct that portion of the hearing under seal.  You might still lose—but losing in advance is very different from losing after the fact. At least you can warn the client, line up a sealing motion, prepare talking points, and generally move from “oh no, that’s in the public record” to “we saw this coming and planned accordingly.” Advance notice turns a scramble into a strategy session, which in high-stakes confidentiality fights is about as good as it gets.

Judges generally appreciate this approach too. It avoids last-minute emergencies and preserves the court’s interest in transparency while still protecting legitimate confidentiality concerns.

No one sets out to create a “barn door after the horses are out” situation. But it happens when confidentiality orders are treated as boilerplate rather than strategic tools. Think of it this way: if you wouldn’t leave your office door unlocked overnight just because you might be able to call the police in the morning, you shouldn’t rely on after-the-fact sealing motions to protect sensitive business information. Prevention beats damage control every time.

Confidentiality orders aren’t glamorous. They don’t win cases on their own. But when they’re done right, they quietly prevent very expensive problems—and keep those horses exactly where they belong: safely in the barn.

Photo of Bexis

Since it was published in 2011, the third edition of the Federal Judicial Center’s Reference Manual on Scientific Evidence has been the go-to guide for federal judges seeking to sort out scientific testimony, and a major source of non-precedential authority for both sides when arguing motions under Fed. R. Evid. 702.  2011, however, was fifteen years ago.  The FJC and its academic collaborators have been promising an update for several years.

It’s finally here, and you can get a free PDF copy of your very own here.

The Reference Manual on Scientific Evidence, Fourth Edition checks in at 1682 pages, so don’t expect a substantive analysis here.  But here is the table of contents:

Liesa L. Richter & Daniel J. Capra, “The Admissibility of Expert Testimony,” 1

Michael Weisberg & Anastasia Thanukos, “How Science Works,” 47

Valena E. Beety, Jane Campbell Moriarty, & Andrea L. Roth, “Reference Guide on Forensic Feature Comparison Evidence,” 113

David H. Kaye, “Reference Guide on Human DNA Identification Evidence,” 207

Thomas D. Albright & Brandon L. Garrett, “Reference Guide on Eyewitness Identification,” 361

David H. Kaye & Hal S. Stern, “Reference Guide on Statistics and Research Methods,” 463

Daniel L. Rubinfeld & David Card, “Reference Guide on Multiple Regression and Advanced Statistical Models,” 577

Shari Seidman Diamond, Matthew Kugler, & James N. Druckman, “Reference Guide on Survey Research,” 681

Mark A. Allen, Carlos Brain, & Filipe Lacerda, “Reference Guide on Estimation of Economic Damages,” 749

M. Elizabeth Marder & Joseph V. Rodricks, “Reference Guide on Exposure Science and Exposure Assessment,” 831

Steve C. Gold, Michael D. Green, Jonathan Chevrier, & Brenda Eskenazi, “Reference Guide on Epidemiology,” 897

David L. Eaton, Bernard D. Goldstein, & Mary Sue Henifin, “Reference Guide on Toxicology,” 1027

John B. Wong, Lawrence O. Gostin, & Oscar A. Cabrera, “Reference Guide on Medical Testimony,” 1105

Henry T. Greely & Nita A. Farahany, “Reference Guide on Neuroscience,” 1185

Kirk Heilbrun, David DeMatteo, & Paul S. Appelbaum, “Reference Guide on Mental Health Evidence,” 1269

Chaouki T. Abdallah, Bert Black, & Edl Schamiloglu, “Reference Guide on Engineering,” 1353

Brian N. Levine, Joanne Pasquarelli, & Clay Shields, “Reference Guide on Computer Science,” 1409

James E. Baker & Laurie N. Hobart, “Reference Guide on Artificial Intelligence,” 1481

Jessica Wentz & Radley Horton, “Reference Guide on Climate Science,” 1561

We compared this table of contents to the one for the Third Edition, and the differences are in bold.  There is considerable turnover in authorship, with only one chapter unchanged from 2011.  That’s not surprising, since authors who were considered grey-beard experts in their fields fifteen years ago have only gotten greyer (and older) since.  What’s of more import is the addition of three entirely new chapters – on computer science, artificial intelligence, and climate science.

Frankly, we’re surprised and disappointed that there wasn’t a fourth chapter on genetics and genomics, which we view as being of far greater general impact than “climate science,” which is much more of a niche area.

We have skimmed the chapter on AI, because Bexis has been involved with the Lawyers for Civil Justice’s recent submission on the proposed Fed. R. Evid. 707 that would create a possible avenue for admission of computer-generated evidence without a supporting expert.  “Computer-generated” is a broad term and encompasses a lot more than AI (one of the problems with the current draft), but as for AI, the Reference Manual’s new chapter only reinforces our previously-stated belief that there is no way AI can be admissible without the proponent offering expert testimony to support it.  The Reference Manual lists multiple questions that are “essential to authenticating and validating the use of AI.”  Reference Manual (4th), at 1514.

  • What is the AI trained to identify, how has it been weighted, and how is it currently weighted?
  • Does the system have a method to transparently identify these answers?  If not, why not?
  • Are the false positive and false negative rates known, if applicable, or hallucination rates?  If so, how do these rates relate to the case at hand?
  • How has AI accuracy been validated, and is the accuracy of the AI updated on a constant basis?
  • What are the AI’s biases? (See “Probing for Bias” questions below.)
  • Is authenticity an issue?
  • How do each of these questions and answers align with how the AI application is being used by the court or proffered as evidence?

Id.  None of these questions is self-evident, nor do we think that any layperson could competently address them.  Nor does the manual:

Judges might also consider that a qualified AI expert or witness ought to be able to credibly answer these questions, or perhaps the expert or witness may not be qualified to address the application at issue.

Id.

As mentioned in connection with the above questions, the Manual also suggests still more questions specifically designed “to probe for bias.”  Id. at 1529.  This set is even more extensive – and more detailed – than the recommended general questions.  Here are only some of the questions recommended by the manual – limited to those that could be applicable to prescription medical product liability litigation:

  • Who designed the algorithm at issue?
  • What process of review was the algorithm subjected to?
  • Were stakeholders – groups likely to be affected by the AI application – consulted in its conception, design, development, operation, and maintenance?
  • What is in the underlying training, validation, and testing data?
  • How has the chosen data been cleaned, altered, or assessed for bias?
  • How have the data points been evaluated for relevancy to the task at hand?
  • Is the data temporally relevant or stale?
  • Are certain groups improperly over- or under-represented?
  • How might definitions of the data points used impact the algorithm analysis?
  • Do the data or weighted factors include real or perceived racial, ethnic, gender, or other sensitive categories of social identity descriptors, or any proxies for those categories?  If so, why?
  • Have engineers and lawyers reviewed the way these criteria are weighted in and by the algorithm as part of the design and on an ongoing basis?  In accord with what process of validation and review?
  • Is the model the state of the art?  How does it compare against any industry standard evaluation metrics or application-specific benchmarks?
  • How might the terms or phrasings in the user-generated prompts bias the systems’ outputs?  Can these prompts be phrased in a more neutral way?
  • Do any of the terms used have alternative meanings?
  • Are the algorithm’s selection criteria known?  Iterative?  Retrievable in a transparent form?  If not, why not?
  • Does the application rely on a neural network?  If so, are the parameters and weights utilized within the neural network known or retrievable?
  • Does the design allow for emerging methodologies that provide for such transparency?  If a transparent methodology is possible, has it been used?  (If not, why not?)
  • If transparency is not possible with state of the art technology, and a less transparent methodology is employed, what is the risk that the system will rely on parameters that are unintended or unknown to the designers or operators?  How high is the risk?  Is the risk demonstrated?  How is the risk mitigated?
  • Is the input query or prompt asking for a judgment, a fact, or a prediction?
  • Is the judgment, fact, or prediction subject to ambiguity in response?
  • Are there situational factors or facts in play that could, or should, alter the algorithm’s predictive accuracy?
  • Is the application one in which nuance and cultural knowledge are essential to determine its accuracy or to properly query it?
  • Are the search terms and equations objective or ambiguous?  Can they be more precise and more objective? If not, why?
  • What is the application’s false positive rate?
  • What is the false negative rate?
  • What information corroborates or disputes the determination reached by the AI application?
  • Is the application designed to allow for real-time assessment?  If not, is operational necessity the reason, or is it simply a matter of design?
  • Is there a process for such assessment that occurs after the fact?
  • Is the AI being used for the purpose for which it was designed and trained?
  • Is the AI being used to inform or corroborate a human decision?  Are humans relying on the AI to decide or to inform and augment human decisions?

Id. at 1529-30.

To the extent, if at all, that proposed Rule 707 contemplates AI being admissible without expert testimony that could answer these questions (to the extent relevant in a given case), we wonder whether the federal judiciary’s right hand (the Rules Committee) has been following what its left hand (the Committee on Science for Judges) has been doing.  We frankly don’t see any plausible avenue (other than consent of the parties) that AI evidence could possibly be admitted without supporting – and probably extensive supporting – expert testimony that is prepared to address the questions posed in this new chapter of the Reference Manual on Scientific Evidence.

And that’s just one aspect of one chapter.  Have fun reading.

Photo of Steven Boranian

We have spilled a lot of blog ink on Federal Rule of Evidence 702 recently, so it was nice to see a case from our home state of California driving home the importance of following the rules when it comes to expert opinions.  California has a reputation for allowing expert opinions into evidence more permissively than under the Federal Rules, and that reputation is probably well deserved. 

There are, however, rules—and the plaintiff in McDonald v. Zargaryan, No. B329565, 2025 Cal. App. LEXIS 850 (Cal. Ct. App. Dec. 22, 2025), learned the hard way that there can be consequences to playing fast and loose.  In McDonald, the plaintiff claimed injuries to his hip and leg, and later to his neck and groin.  It did not, however, seem to slow him down much:  He continued to snowboard and rollerblade, but nonetheless pursued litigation and disclosed 30 experts under California’s rules requiring the exchange of expert information if any party demands it. 

This is where it gets weird.  Sixteen months after disclosing experts, and one week before trial, the plaintiff went to a new doctor, who recommended spine surgery.  As the California Court of Appeal would later describe it, “Until then, no one had proposed spine surgery [and] . . . [s]pine surgery had not been an issue in the case.”  Id. at *2.  The doctor, moreover, had a “professional relationship” with the plaintiff’s lawyer, and when asked whether his lawyer had referred him to the doctor, the plaintiff replied, “I don’t recall, but possibly.  Maybe.  I think so, before the trial.”  Id. at *3.  Hmm.

Despite these curious circumstances, the trial court allowed the (very) late disclosure of the doctor as an expert and denied a motion to exclude the testimony, so long as the plaintiff made the expert available for a deposition, which occurred.  The expert testified, and a jury returned a substantial verdict for the plaintiff.  Id. at *4-*5.

Allowing the testimony was an abuse of discretion, and the Court of Appeal reversed.  As the court explained,

The goal [of expert witness disclosure] is to avoid surprise at trial.  Surprise at trial is unfair.  It also is inefficient.  Surprise at trial is unfair because ambushes, while effective in warfare, are disfavored in court.  For legal disputes, California has replaced free-for-all trial by combat with rules of professionalism and fair play.  Surprise at trial is inefficient because, if both sides know exactly what evidence the trial will produce, they have a better chance of agreeing in advance on the true value of the case. 

Id. at *5-*6 (emphasis in original, citations omitted).  Here, the plaintiff bent the rules beyond the breaking point.  He did not seek the trial court’s permission to add a new expert, which was required, and he had no reasonable justification for the delay.  Trial counsel submitted his own declaration attempting to explain, but the declaration was just a legal brief and was “worthless as a piece of evidence.”  Id. at *10.

There can be valid justifications for late expert designation—sudden unavailability of an expert through death, illness, incapacitation, and “other serious and uncontrollable events.”  Id. at *10.  We can think of other possible reasons too, such as an unexpected change in the case or the plaintiff’s condition.  But none of that occurred here.  Not even close. 

Editor’s note: This post was revised on January 9, 2026, to remove a reference to this opinion being unpublished. The Court of Appeal certified the opinion for publication.

Photo of Eric Alexander

Perhaps driven by fear of retribution for saying what you really think, an indirect method of communication has gained some popularity on the social media platforms of late.  It goes like this:  1) a historical fact or spin on one is presented, such as on a past military conflict or a criminal conviction; and 2) there is a sentence at the end saying something like “This is not a post about [the subject matter discussed directly].”  Readers who think they know what the post was really about can feel special, whereas others may simply enjoy the bedlam that is the comments section on just about any social media post.  Still others may wish the author had opted for directness.  Our post today on this non-social (or asocial) media site concerns a case with claims and counterclaims that, on first blush, seem far from the Blog’s bailiwick.  First impressions, however, can be misleading.

In Eli Lilly & Co. v. Premier Weight Loss of Ind., LLC, No. 1:25-cv-00664-TWP-TAB, 2025 U.S. Dist. LEXIS 268138 (S.D. Ind. Dec. 31, 2025) (“PWL”), manufacturer A sued quasi-manufacturer B over trademark infringement and false advertising under the Lanham Act and its Indiana state law cousin.  Quasi-manufacturer B brought a counterclaim against manufacturer A for defamation based on statements the manufacturer had made to news outlets about its allegations in the lawsuit.  To resolve manufacturer A’s motion to dismiss, the PWL court had to delve into FDA issues that we find interesting and relevant to issues facing drug companies these days.  The connection, perhaps predictable from the names of the parties, is that the quasi-manufacturer was repackaging and selling the manufacturer’s prescription weight loss and diabetes medications.  (For some other posts on the intersection of the Lanham Act and the FDCA, try here, here, here, and here.)  This is not the increasingly common situation where a compounding pharmacy does its thing, because the defendant was not a licensed pharmacy, compounding or otherwise.  Instead, the defendant allegedly took the plaintiff’s “factory-sealed single-dose autoinjector pens containing 0.5 mL of tirzepatide fluid in various strengths,” repackaged the drug into “lower and/or different doses” in “third-party insulin syringes,” and sold those to patients with unapproved labeling and package inserts.  Id. at *3-4.  Before plaintiff sued over the alleged trademark infringement and false advertising, it shared the proposed complaint with a few local news outlets, at least one of which also allegedly received a statement saying, “We will continue to take action to stop these illegal actors and urgently call on regulators and law enforcement to do the same.”  Id. at *4-5.  The allegedly defamatory statements underpinning the counterclaim were:

1. Lilly stated that PWL is an “illegal actor” such that “regulators and law enforcement” should take action against it;

2. Lilly’s Complaint states that PWL is putting its patients’ lives at risk; and

3. Lilly’s Complaint states that PWL breaks apart or cracks open Lilly’s autoinjector pens.

Id. at *7 (internal parentheticals omitted).  Because truthfulness is a complete defense to defamation, by bringing this counterclaim, the quasi-manufacturer opened itself up to an early test of whether its conduct was illegal.

The PWL court first gave some background on the Indiana standards for defamation and the truthfulness defense, and then addressed the counterclaimant’s argument that it was premature to rule on a defense in the context of a motion to dismiss.  We will skip over that stuff to get to the substance of PWL.  Taking the allegedly defamatory third statement first, the court determined that its “gist” or “sting”—terms from the homey Indiana defamation caselaw—was true because the admitted repackaging of a liquid drug from an autoinjector pen into an insulin syringe was “synonymous” with saying the counterclaimant “breaks apart” or “cracks open” the pens to extract the drug.  Id. at *13.  So, the counterclaim was dismissed as to alleged defamation based on that statement.  The second statement required little analysis because the pleadings shed insufficient light on whether the repackaging activities put patients’ lives at risk, such as by compromising sterility.  Id. at *11 & 23-24.  Accordingly, the motion to dismiss was denied on that issue.

The first statement engendered more analysis.  The counterclaimant argued that the statement could not be true unless its conduct violated a criminal statute.  The court rejected this based on Seventh Circuit law interpreting “illegal conduct” to encompass non-criminal acts.  Id. at *14.  Of course, the plaintiff’s claims did allege violations of the Lanham Act.  While evaluating under the “truthfulness” standard instead of the burden of proof applicable to the claims, the court found that “[d]ispensing the contents of Lilly’s medicines into third-party syringes and changing the packaging, labeling, and dosages contained therein, plausibly violates both the language and purpose of the Lanham Act.”  Id. at *19.  The PWL court specifically noted the importance to a trademark holder of preserving the quality control standards for the medications it manufactures.  Id. at *18.  Similar considerations informed the court’s related finding that “PWL’s conduct violates the FDCA,” a finding stated without any plausibility qualifier.  Id. at *20.  The court noted that FDA approval “constitutes approval of the product’s design, testing, intended use, manufacturing methods, performance standards, and labeling [and is] specific to the product,” and held that the counterclaimant’s repackaged products were not covered by the NDA approvals for medications and, thus, needed their own.  Id. at *22-23 (citations omitted).  Those products had not been approved, so their sale violated 21 U.S.C. § 355(a).  So, the first statement was also truthful in calling the counterclaimant an “illegal actor” based on an FDCA violation.  Like we said, this post is not really about trademarks or defamation.  Of course, by filing the counterclaim, the quasi-manufacturer invited this ruling, which will make its defense of the manufacturer’s claims that much harder.
Being hacks for the companies that develop the medications and other innovative medical products in the first place, we are not too troubled by this.

The last part of the PWL decision involved the plaintiff’s argument that its statements, specifically the second one that survived the first part of the motion to dismiss, were subject to the litigation privilege applicable to “all relevant statements made in the course of a judicial proceeding, regardless of the truth or motive behind those statements.”  Id. at *25 (citation omitted).  As far as we can tell, this is a new one for the Blog.  We have had our own cases where we challenged scurrilous statements in plaintiff’s filings and where Rule 11 sanctions for unsupported positions in plaintiff’s filings were at issue.  We have also written about proceedings to go after plaintiffs’ experts or journal authors for trade libel.  But not this particular animal.  In PWL, it came down to a timing issue.  Passing around a complaint before it is filed, and presumably other statements made at that point about what a not-yet-filed lawsuit will allege, does not implicate the litigation privilege because the statements are not “made in the course of a judicial proceeding.”  Id. at *26-28.  This made us think about other targets of medical product manufacturer efforts to protect against misstatements about their (trademarked) products:  plaintiff lawyer websites and ads.  Just because the lawyers bring one suit or many suits at some point does not mean they can retroactively cloak their pre-suit statements in any litigation privilege.  This part of the post is actually about the blight of plaintiff lawyer websites and ads.  We are certainly not troubled by limiting protection for those potential vehicles for false advertising.

Photo of Michelle Yeary

Today we are talking about the decision in Govea v. Medtronic, Inc., 2025 WL 3467214 (C.D. Cal. Nov. 26, 2025). Plaintiff claimed the case was about off-label promotion. But thanks to the court taking judicial notice of PMA supplements, it’s mainly a case about on-label use and a plaintiff who waited far too long to sue.

The device at issue is an implantable stimulator used to regulate incontinence. It is prescribed when patients do not tolerate more conservative treatments and medications. The process starts with a two-week trial run where only a lead is implanted and the generator is worn outside the body. After a successful trial, the device is fully implanted. Plaintiff was prescribed the device to treat urinary incontinence. Her trial lasted only one day because the lead wire stuck to her bandage. Her physician recommended she go forward with the implant anyway and she did. Plaintiff alleges that after the implantation in 2011, she received no relief, but instead her condition worsened and she experienced painful shocks. She started asking to have the device removed in 2016. It was eventually explanted in 2018, but her surgeon did not completely remove the lead. Plaintiff claims she was unaware that the lead remained until it was revealed by x-ray in January 2024. Id. at *3. Plaintiff filed suit in December 2024 bringing claims for misrepresentation, breach of express warranty, manufacturing defect, and negligence.

At the core of all of plaintiff’s claims are allegations that the manufacturer promoted the device for an off-label use. Plaintiff claimed the device was only approved for fecal incontinence, not urinary. However, the court granted defendant’s request to take judicial notice of multiple PMA supplements and approvals which showed FDA approval of the device for both forms of incontinence. Because urinary incontinence was approved by PMA, “arguing that [the device] is inappropriate for use in that context is, in effect second-guessing the PMA process.” Id. at *9. So, plaintiff’s misrepresentation claims based on that allegation are preempted.

However, the device’s labeling states it is contraindicated in patients who “have not demonstrated an appropriate response to test stimulation.” Id. Plaintiff alleged that her device failed prior to completion of the test stimulation, that she informed defendant’s representative of that fact, and the representative recommended permanent implantation anyway. Id. at *10. Finding that was a sufficient allegation of promotion for an unapproved use, the misrepresentation claims based on that allegation were not preempted. Likewise, the same allegations were enough for the court to find plaintiff’s breach of express warranty claims not preempted. Id. at *12.

On her manufacturing defect claim, plaintiff failed to allege any violation of a specific FDA requirement for the device. She didn’t even try to rely on CGMPs. Rather, plaintiff relied on allegations that either the device had been improperly implanted or that it must be defective because the wire migrated and could not be fully removed. The former has nothing to do with the manufacture of the device and the latter is res ipsa loquitur, which the Ninth Circuit has rejected. Id. at *11. Therefore, plaintiff’s manufacturing defect claim was expressly preempted.

The court’s final preemption analysis was on the negligence claim. First, the court found that there is no state law duty to refrain from off-label promotion. The prohibition on off-label promotion is solely a function of the FDCA. So, to the extent plaintiff’s negligence claim was based on alleging such a duty, it was an impliedly preempted private attempt to enforce the FDCA. The only negligence claim that survived preemption was based on allegations of misrepresentations made during the off-label promotion. So, the claim has to be premised on false statements, not whether the statements were on or off label.   

While some of plaintiff’s claims survived preemption, for any to survive the statute of limitations challenge, plaintiff would have to significantly re-write or back off several of her current allegations.

According to the complaint, plaintiff experienced pain which she attributed to the device the entire time it was implanted. Further, she alleges she never received any relief from her symptoms from the device, but rather her incontinence worsened.  Id. at *17. She also alleges that she continued to experience pain at the incision site following explant. What the complaint fails to allege is that plaintiff took any steps to follow up with any physician following the explant surgery, or indeed “that she took any other actions that would constitute reasonable diligence.” Id. at *16. She relies exclusively on learning in 2024 that the wire was still implanted as the point in time when she “discovered” her injury.  

Plaintiff’s injury didn’t just sneak up on her in 2024. According to her own account, it announced itself early and often. Yet plaintiff waited six years after explant to file suit. The court was unmoved by arguments that she didn’t connect the dots sooner. When a plaintiff alleges immediate and ongoing pain following a medical procedure, the law generally expects some curiosity. At least enough to ask, “Should I look into this?” The statute of limitations doesn’t pause indefinitely while a plaintiff hopes the answer will change. Being on notice doesn’t require knowing every legal theory or scientific detail. It requires awareness of an injury and a possible connection to its cause. Plaintiff had both—and still waited.

Preemption plus time-bar is a powerful one-two punch. Either is often enough; together, they end the fight early.

Photo of Bexis

This “just desserts” story caught our eyes earlier this year – a hot-shot expert witness, on artificial intelligence, no less, got caught with his own hand in the AI cookie jar.  As a result, his credibility was destroyed, and his testimony was excluded.  The litigation leading to Kohls v. Ellison, 2025 WL 66514 (D. Minn. Jan. 10, 2025), concerned a Minnesota anti-deepfake statute.  The plaintiffs were political operatives claiming a First Amendment right to create deep fakes of candidates they opposed.  Id. at *1.  The defendant hired a California-based professor to testify “about artificial intelligence (“AI”), deepfakes, and the dangers of deepfakes to free speech and democracy.”  Id.

The AI expert, however, used AI himself in preparing his material and “included fabricated material in his declaration.”  Id.  Specifically, the would-be expert “admitted that his declaration inadvertently included citations to two non-existent academic articles, and incorrectly cited the authors of a third article.”  Id.  AI had provided “fake citations to academic articles, which [the expert] failed to verify before including them in his declaration.”  Id.  The state sought to submit a belated amendment removing the fictitious citations, but the court was having none of it.  Id.

The AI expert’s AI-based report was excluded in its entirety. 

[T]he Court cannot accept false statements − innocent or not − in an expert’s declaration submitted under penalty of perjury.  Accordingly, given that the [expert] Declaration’s errors undermine its competence and credibility, the Court will exclude consideration of [that] expert testimony

Id. at *5.  “The irony. . . . a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI − in a case that revolves around the dangers of AI, no less.”  Id. at *3.  Moreover, the expert had committed a cardinal, but very common, sin of a litigation-engaged expert.  He had not lived up to his usual professional standards in reaching his paid opinions:

It is particularly troubling to the Court that [the expert] typically validates citations with a reference software when he writes academic articles but did not do so when submitting the . . . Declaration. . . .  One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles.

Id.  The expert “abdicate[d his] independent judgment and critical thinking skills in favor of ready-made, AI-generated answers.”  Id. at *4.

Even though counsel that hired this AI-dependent expert professed no knowledge of the AI-generated falsehoods in the expert’s declaration, that did not excuse them.  Rule 11 “imposes a ‘personal, nondelegable responsibility’ to ‘validate the truth and legal reasonableness of the papers filed’ in an action.”  Id. (citation and quotation marks omitted).  In this context, attorneys have an obligation “to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.”  Id.

The court ultimately excluded the AI-generated report because the false citations had destroyed the expert’s credibility and required “steep” sanctions:

[The expert’s] citation to fake, AI-generated sources in his declaration . . . shatters his credibility with this Court.  At a minimum, expert testimony is supposed to be reliable.  Fed. R. Evid. 702.  More fundamentally, signing a declaration under penalty of perjury is not a mere formality. . . .  The Court should be able to trust the indicia of truthfulness that declarations made under penalty of perjury carry, but that trust was broken here.

Moreover, citing to fake sources imposes many harms, including wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system. . . .  Courts therefore do not, and should not, make allowances for a party who cites to fake, nonexistent, misleading authorities − particularly in a document submitted under penalty of perjury.  The consequences of citing fake, AI-generated sources for attorneys and litigants are steep.  Those consequences should be no different for an expert offering testimony to assist the Court under penalty of perjury.

Id. at *4-5 (citations and quotation marks omitted).

We think that the court reached the right result in Kohls, but for the wrong reason.  The problem with AI-generated expert testimony is not limited to AI hallucinations.  Instead, it’s deeper and goes to the concept of “expertise” itself.  Who is the expert?  Is it the person who signs the report and is proffered as an expert, or is it whatever AI program the expert used?  It is well established that one expert cannot simply “parrot” the opinions of another.  We’ve written several posts that make this point.  Why should it be any different when the expert blindly parrots something that a black-box AI program spits out, rather than some other expert? 

With that question in mind, we decided to look for other decisions that have addressed experts who used AI to create their submissions.  We think that asking the right questions led to the right answer in In re Celsius Network LLC, 655 B.R. 301 (Bankr. S.D.N.Y. 2023).  Celsius Network was a bankruptcy case applying Rule 702.  The report in question, however, was not written by the expert who signed it.  Rather, it was written by AI, and for that reason it was excluded.

The [expert] Report was not written by [the expert]. Although [he] directed and guided its creation, the 172-page Report, which was generated within 72 hours, was written by artificial intelligence at the instruction of [the expert].  By his own testimony, a comprehensive human-authored report would have taken over 1,000 hours to complete.  In fact, it took [the expert] longer to read [the] report than to generate it.  The Court therefore separately evaluates the [expert] Report. . . .  [T]he Court finds that the . . . Report is unreliable and fails to meet the standard for admission.

Id. at 308.  This AI-generated expert report could not be reliable:

  • “In preparing the report, [the expert] did not review the underlying source material for any sources cited, nor does he know what his team did (or did not do) to review and summarize those materials.”
  • “There were no standards controlling the operation of the artificial intelligence that generated the Report.”
  • “The Report contained numerous errors, ranging from duplicated paragraphs to mistakes in its description of [relevant parameters].”
  • “The [expert] Report was not the product of reliable or peer-reviewed principles and methods.”

Id. at 308.  Thus, Celsius Network determined “that the Report does not meet the standard set forth under Rule 702.”  Id. at 309.

In an earlier case, an expert offered analysis of data that he had fed into a set of algorithms and “click[ed] ‘Go.’”  In re Marriott International, Inc., Customer Data Security Breach Litigation, 602 F. Supp.3d 767, 787 (D. Md. 2022).  That was not enough to be admissible under Rule 702.

Algorithms are not omniscient, omnipotent, or infallible.  They are nothing more than a systematic method of performing some particular process from a beginning to an end.  If improperly programmed, if the analytical steps incorporated within them are erroneous or incomplete, or if they are not tested to confirm their output is the product of a system or process capable of producing accurate results (a condition precedent to their admissibility), then the results they generate cannot be shown to be relevant, reliable, helpful to the fact finder, or to fit the circumstances of the particular case in which they are used. . . .  [The expert’s] willingness to rely on his own untested conclusion that his model could reliably be applied to the facts of this case is insufficient to meet the requirements of Rule 702.

Id. (footnote omitted).

A similar state-law case, Matter of Weber, 220 N.Y.S.3d 620 (N.Y. Sur. 2024), is from a state trial court.  Although it’s not entirely clear that the damages opinions excluded in Weber were even those of a qualified expert, the decision treated them as such and found them “inherently unreliable.”  Id. at 633.  They had been generated by an AI program (CoPilot).  The Weber expert simply parroted whatever the AI program generated:

Despite his reliance on artificial intelligence, [the expert] could not recall what input or prompt he used to assist him. . . .  He also could not state what sources [AI] relied upon and could not explain any details about how [the AI] works or how it arrives at a given output.  There was no testimony on whether these [AI] calculations considered any fund fees or tax implications.

Id.  The would-be expert nonetheless claimed that AI use was “generally accepted” in the relevant field.  Id. at 634 (New York state courts follow Frye).

The court had “no objective understanding as to how [the AI program] works,” and thus tried it out itself.  The program gave three different answers to what should have been a simple mathematical calculation – and none of those matched the supposed expert’s number.  Id. at 633.  “[T]he fact there are variations at all calls into question the reliability and accuracy of [AI] to generate evidence to be relied upon in a court proceeding.”  Id.  Interestingly, when asked “are your calculations reliable enough for use in court,” the program responded that, standing alone, it was probably not ready for legal prime time.

[The AI] responded with “[w]hen it comes to legal matters, any calculations or data need to meet strict standards. I can provide accurate info, but it should always be verified by experts and accompanied by professional evaluations before being used in court. . . .  ”  It would seem that even [the program] itself self-checks and relies on human oversight and analysis.  It is clear from these responses that the developers of the [AI] program recognize the need for its supervision by a trained human operator to verify the accuracy of the submitted information as well as the output.

Id. at 634.  To prevent “garbage in, garbage out . . . a user of . . . artificial intelligence software must be trained or have knowledge of the appropriate inputs to ensure the most accurate results.”  Id. at 634 n.25.

Weber thus rejected the testimony, citing “due process issues” that “arise when decisions are made by a software program, rather than by, or at the direction of a [human].”  Id. at 634.

[T]he record is devoid of any evidence as to the reliability of [the AI program] in general, let alone as it relates to how it was applied here.  Without more, the Court cannot blindly accept as accurate, calculations which are performed by artificial intelligence.

Id.  Weber made several “findings” with respect to AI:

  • AI is “any technology that uses machine learning, natural language processing, or any other computational mechanism to simulate human intelligence, including . . . evidence creation or analysis, and legal research.”
  • “‘Generative A.I.’ [i]s artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query).”
  • “[P]rior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence.”
  • AI generated evidence “should properly be subject to a Frye hearing prior to its admission.”

Id. at 635.

Concord Music Group, Inc. v. Anthropic PBC, 2025 WL 1482734 (Mag. N.D. Cal. May 23, 2025), is another instance of an expert exposed by an AI hallucination – “a citation to an article that did not exist and whose purported authors had never worked together.”  Id. at *3.  The court considered the infraction “serious,” but not as “grave as it first appeared.”  Id.

[Proponent’s] counsel protests that this was “an honest citation mistake” but admits that Claude.ai was used to “properly format” at least three citations and, in doing so, generated a fictitious article name with inaccurate authors (who have never worked together) for the citation at issue.  That is a plain and simple AI hallucination.

Id. (citation omitted).  However, “the underlying article exists, was properly linked to and was located by a human being using Google search.”  Id.  For that reason, Concord did not view the situation as one where “attorneys and experts have abdicated their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers.”  Id. (indirectly quoting Kohls).  Still, the existence of the hallucination was fishy enough that the relevant paragraph from the expert report was stricken:

It is not clear how such an error − including a complete change in article title − could have escaped correction during manual cite-check by a human being. . . .  [The court’s] Civil Standing Order requires a certification “that lead trial counsel has personally verified the content’s accuracy.” Neither the certification nor verification has occurred here.

Id.  Further, as in Kohls, “this issue undermines the overall credibility of [the expert’s] written declaration, a factor in the Court’s conclusion.”  Id.  Cf. Shoraka v. Bank of Am., N.A., 2023 WL 8709700, at *3 (C.D. Cal. Dec. 1, 2023) (excluding non-AI expert report that “consist[ed] almost entirely of paragraphs . . . simply copied and pasted from online sources”).

On the other hand, we have Ferlito v. Harbor Freight Tools USA, Inc., 2025 WL 1181699 (E.D.N.Y. April 23, 2025).  The plaintiff’s expert, lacking formal credentials, claimed considerable practical experience.  Among other reasons, the defendant sought to exclude his report because “after completing the report, he entered a query into ChatGPT about the best way to secure a hammer head to a handle, which produced a response consistent with his expert opinion.”  Id. at *1.  Ferlito denied exclusion because the expert had only used AI “after he had written his report to confirm his findings” – findings initially “based on his decades of experience.”  Id. at *4.  The expert “professed to being ‘quite amazed’ that the ‘ChatGPT search confirmed what [he] had already opined’” and claimed, “that he did not rely on ChatGPT.”  Id.  Taking that testimony at face value, Ferlito allowed the expert opinions:

There is no indication that [the expert] used ChatGPT to generate a report with false authority or that his use of AI would render his testimony less reliable.  Accordingly, the Court finds no issue with [the expert’s] use of ChatGPT in this instance.

Id.  Ferlito is not necessarily inconsistent with the previous decisions because of the expert’s denial that he had used AI to generate the actual report.

Considering this precedent, while some courts addressing Rule 702 issues do seem distracted by AI’s propensity for hallucinations, most of them understand the more basic problem with expert use of AI: the opinions are no longer those of the experts themselves.  Rather, when experts use AI to generate their reports, they have reduced themselves to “parrot” status, blindly reciting whatever the AI program generates.  As such, AI-generated expert reports should not be admissible without some means of validating the workings of the AI algorithms themselves, which we understand is not possible in most (if not all) large language models.

Photo of Bexis

As 2025 came to an end, we presented our loyal readers with our annual review of our ten worst decisions of the past year and our ten best decisions of the past year.

Now, in the new year, as we do each year, we’re pleased to announce that four (we hope) of your bloggers – Bexis, Steven Boranian, Stephen McConnell, and Lisa Baird – will be presenting a free 90-minute CLE webinar on “The Good, the Bad and the Ugly: The Best and Worst Drug/Medical Device and Vaccine Decisions of 2025” on Wednesday, January 14th at 12 p.m. EST to provide further insight and analysis on these cases.

This program is presumptively approved for 1.5 CLE credits in California, Connecticut, Illinois, New Jersey, New York, Pennsylvania, Texas and West Virginia. Applications for CLE credit will be filed in Colorado, Delaware, Florida, Georgia, Ohio, and Virginia. Attendees who are licensed in other jurisdictions will receive a uniform certificate of attendance, but Reed Smith only provides credit for the states listed. Please allow 4-6 weeks after the program to receive a certificate of attendance.

FOR VIEWERS OF RECORDED ON-DEMAND PROGRAMS: To receive CLE credit, you will need to notify Learning & Development CLE Attendance once you have viewed the recorded program on-demand.

**Please note – CLE credit for on-demand viewing is only available in California, Connecticut, Illinois, New Jersey, New York, Pennsylvania, Texas, and West Virginia. Credit availability expires two years from the date of the live program.

The program is free and open to anyone interested in tuning in, but you have to sign up in advance here.

Photo of Eric Hudson

Happy new year, and welcome to 2026. While we may still be pondering the meaning of auld lang syne or waxing philosophical about the new year, we’ll quickly move on and get to work defending our clients. That’s what we do as defense hacks, and kudos to all of you for doing it so well.

We’ve written many times about plaintiffs who try (and fail) to plead injury by alleging hypothetical risks, speculative future harm, or buyer’s remorse untethered to actual loss. Today’s dismissal of a putative class action from the Central District of California is a new year’s reminder that Article III and statutory standing remain stubbornly real requirements.  Druzgalski v. CVS Health Corp., 2025 U.S. Dist. LEXIS 265766 (C.D. Cal. Dec. 23, 2025).

Continue Reading New Year, Same Old Standing Problems