
We continue to be cautiously optimistic that the recent amendments to Fed. R. Evid. 702 – adopted because too many courts had been too flaccid for too long in admitting dubious “expert” testimony – will actually improve things in the courtroom.  Our latest data point is In re Paraquat Products Liability Litigation, ___ F. Supp. 3d ___, 2024 WL 1659687 (S.D. Ill. April 17, 2024).  While Paraquat is not drug/device litigation (the substance is a widely used herbicide), the Rule 702 analysis has broad applicability – as demonstrated by the decision’s reliance (in part) on the Acetaminophen decision that we discussed here.

Paraquat excluded the plaintiffs’ “sole expert witness on the critical issue of general causation” under amended Rule 702.  The amendments – which became effective after the motion had been briefed, but before it was decided – were clearly on the court’s mind and were mentioned in a couple of footnotes.  2024 WL 1659687, at *4 n.8 (applying amended version).  The 2023 amendments:

emphasized that the proponent bears the burden of demonstrating compliance with Rule 702 by a preponderance of the evidence, and that each expert opinion must stay within the bounds of what can be concluded from a reliable application of the expert’s basis and methodology.

Id. (quoting and following Acetaminophen).  The amendments thus specify “that expert testimony may not be admitted unless the proponent demonstrates to the court that it is more likely than not that the proffered testimony meets the admissibility requirements set forth.”  Id. at *4 n.9 (quoting Advisory Committee notes to 2023 amendments) (emphasis added by the court).

Further, the 2023 amendments were necessary because “courts had erroneously admitted unreliable expert testimony based on the assumption that the jury would properly judge reliability.”  Id.  That is a no-no.  “[S]ome courts had ‘incorrect[ly]’ held that an expert’s basis of opinion and application of her methodology were questions of weight, not admissibility.”  Id. (again quoting Advisory Committee notes).  Thus:

Mindful of its role as the witness stand’s “vigorous gatekeeper,” the Court will closely scrutinize the reliability of proffered expert testimony before permitting an expert to share her opinion with the jury.  Expert testimony that is not scientifically reliable should not be admitted.  The gatekeeping function, after all, requires more than simply taking the expert’s word for it.

Id. (citations and quotation marks omitted).

Thus steeled against “expert” malarkey, Paraquat proceeded with its close scrutiny – and found the expert’s opinions miserably inadequate.  Here are the reasons why.

Occupational exposure:  The expert’s definition of the allegedly causative factor, “occupational exposure,” was “strikingly amorphous.”  Paraquat, 2024 WL 1659687, at *23.  During the course of two reports and two depositions, the expert “redefined ‘occupational’ exposure no less than three times, creating more questions than answers.”  Id. (emphasis original).  That definition “evolved from being related to a person’s workplace, to focusing on the risk of dermal exposure, to direct contact.”  Id. at *24 (quotation marks omitted).  “This nebulous definition of the type of exposure that, according to him, is causally related . . . leaves it to the court – and if he were to testify, the jury – to figure out the precise contours of his opinion.”  Id.  His “meandering definition” was thus “impossible to discern.”  Id. at *25.  Such a “dynamic definition . . . exposures obfuscates the scope and meaning of his ultimate opinion on general causation.”  Id. at *27.

Meta-analysis:  The expert offered his own “meta-analysis” of the medical literature.  But conducting such systematic literature searches “require[s] the reviewer to (a) search for relevant studies; and (b) decide which studies to include and exclude in the review.”  Paraquat, 2024 WL 1659687, at *10.  That requires “develop[ing] a protocol for the review before commencement and adher[ing] to the protocol regardless of the results.”  Id.  This expert did neither.
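Before turning to the specifics, here is a minimal sketch, in Python, of what the pre-specified protocol the court described looks like in practice.  Everything in it – the criteria, the study names, the numbers – is hypothetical and invented for illustration, not drawn from the decision or the expert’s reports; the point is only that criteria fixed in writing beforehand and applied mechanically make the include/exclude step replicable by anyone.

```python
# Illustrative only: a pre-registered protocol fixes the inclusion
# criteria in writing BEFORE any study results are examined, then
# applies the same test mechanically to every candidate study.
# All criteria and studies below are hypothetical examples.

# Hypothetical pre-registered inclusion criteria (written in advance)
PROTOCOL = {
    "designs": {"case-control", "cohort"},  # eligible study designs
    "min_subjects": 100,                    # minimum sample size
    "exposure_measured": True,              # exposure directly assessed
}

def include(study: dict) -> bool:
    """Apply the pre-registered criteria; no discretion, no 'holistic' review."""
    return (
        study["design"] in PROTOCOL["designs"]
        and study["subjects"] >= PROTOCOL["min_subjects"]
        and study["exposure_measured"] == PROTOCOL["exposure_measured"]
    )

candidates = [
    {"id": "Study A", "design": "case-control", "subjects": 250, "exposure_measured": True},
    {"id": "Study B", "design": "ecological",   "subjects": 900, "exposure_measured": False},
]

# Anyone re-running the same protocol over the same candidate studies
# gets the same included set -- the replicability the court found missing.
included = [s["id"] for s in candidates if include(s)]
print(included)  # ['Study A']
```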

First, his meta-analysis included only seven of the 36 studies that the expert himself identified as relevant to the causation question he was addressing.  Id. at *10, *16 (the big gap omits the decision’s description of the studies).  Thus, at the outset the analysis “excluded a significant amount of relevant information.”  Id. at *16.  Next, his exclusions occurred “in an ad hoc manner,” and were not mentioned at all until he submitted a supplemental “rebuttal” report.  Id.  It became glaringly obvious that the expert was making up his inclusion criteria as he went along.  He didn’t follow the particular criteria he listed in his first report, id., and ultimately claimed “that he selected studies for his meta-analysis based on a holistic assessment of whether or not that study was reliable enough for inclusion.”  Id. (quotation marks omitted).  The expert “never reduced his ‘holistic’ review process to writing and as a result, appeared to concede that his process was not objectively replicable.”  Id. at *18.  Even for a p-side expert in an MDL, that’s pretty pathetic.

After that embarrassing testimony, the expert’s second “rebuttal report” “reflected a methodological sea change” by reciting “much more granular and even previously undisclosed explanations of his study selection methodology.”  Id.  But that only provided the defendants with the opportunity to establish that he hadn’t followed those criteria, either.

  • “[He] was unable to articulate (or at least recall) a search strategy that led to his identification of 36 studies that were systematically reviewed.”
  • “[H]e was unable to point to any prior publication that would validate” his claim “that it was his standard practice to apply [this method] in every meta-analysis he had previously done.”
  • “[He] admitted that he came up with the five quality factors by which he evaluated the eight eligible case-control studies after he read them.”
  • He changed “his understanding of ‘occupational’ paraquat exposures” multiple times to “g[i]ve himself more flexibility to justify his inclusion and exclusion decisions.”
  • He used a 25-day “temporal limitation” on “occupational exposure” that was only addressed in one study.

Paraquat, 2024 WL 1659687, at *19-20.  Essentially, the expert conducted the meta-analysis as a fig leaf to conceal his overwhelming reliance on only one of the published studies.  That single study “made up over 90% of the weight of the resulting pooled odds ratio.”  Id. at *17.
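For readers who have not worked with pooled estimates, a quick sketch of the mechanism that lets a single study dominate.  Assuming a standard fixed-effect, inverse-variance model (an assumption on our part – the decision does not spell out which pooling method the expert used), each study’s log odds ratio is weighted by the inverse of its variance, so one large, precise study can swamp all the rest:

```latex
% Fixed-effect (inverse-variance) pooling -- an assumed model; the
% decision does not specify the expert's pooling method.
% Each study's log odds ratio \hat{\theta}_i receives weight w_i:
\[
  \hat{\theta}_{\mathrm{pooled}}
    = \frac{\sum_i w_i \,\hat{\theta}_i}{\sum_i w_i},
  \qquad
  w_i = \frac{1}{\hat{\sigma}_i^2}.
\]
% Hypothetical numbers: one large study with variance 0.01 (w = 100)
% plus six small studies with variance 0.6 each (w \approx 1.67, about
% 10 combined) gives the large study 100/110 \approx 91% of the weight.
```

With those invented round numbers, the single precise study carries roughly 91% of the pooled weight – the same sort of lopsidedness the court flagged.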

But meta-analysis has rules. 

For systematic reviews, a clear set of rules is used to search for studies, and then to determine which studies will be included in or excluded from the analysis. Since there is an element of subjectivity in setting these criteria, as well as in the conclusions drawn from the meta-analysis, we cannot say that the systematic review is entirely objective. However, because all of the decisions are specified clearly, the mechanisms are transparent.

Id. at *25 (citation and quotation marks omitted).  The expert’s “violations of the rules of meta-analysis are evident from the very beginning of his process.”  Id. at *26.  His “failure to document his search for relevant studies makes it impossible to replicate or even critique.”  Id.

The expert’s eligibility criteria were equally opaque, and allowed him to create whatever result his p-side employers paid for:

[His] failure to define his eligibility criteria in advance suggests that he selected the studies he wanted to include in his meta-analysis and then crafted his inclusion/exclusion criteria to justify his decisions. This type of post hoc methodology is the very antithesis of a systematic review.

Paraquat, 2024 WL 1659687, at *26 (emphasis original).  His “problematic” eligibility criteria allowed him to shoehorn his favorite study into the meta-analysis, despite its not meeting his evolving exposure definition.

[H]is failure to clearly define this eligibility criterion also undermined the methodological soundness of his meta-analysis because he was forced to concede that the study that almost singlehandedly generated his elevated odds ratio . . . did not meet his own stated criteria for occupational exposure.

Id. at *27.  He thereby “violated the basic rules of meta-analysis.”  Id.  Instead of objective and unchanging criteria he used “nothing but a subjective assumption” that was “a good illustration of why mere expertise and subjective understanding are not reliable scientific evidence.”  Id. at *28.

As a result, the only basis for the purported meta-analysis was classic expert ipse dixit that isn’t allowed anymore (if it ever was):

[His] reliance on an unwritten, “holistic” methodology presents an ideal example of “because I said so” expertise that is impermissible under Rule 702.  [He] insisted that he “ha[s] the credentials to do this” and that he “had a process that [he] followed.”  But these assurances, without more, do not show that [he] faithfully applied the necessary steps of his chosen methodology.

Paraquat, 2024 WL 1659687, at *29 (citation omitted).

There is more that aficionados of meta-analysis will want to review, but for us it only makes the Rule 702 reliability rubble bounce.

Bradford-Hill:  As is often the case, this frequent flier p-side expert (“not [his] first rodeo as an expert witness,” id. at *21) purported to employ the notoriously malleable “Bradford-Hill” causation criteria.  Id. at *23.  What’s more, he claimed to have combined them with another paradigm of scientific mushiness, “weight of the evidence.”  Id. at *34.  But, as Paraquat held, combining one pile of garbage with a second pile of garbage just leaves you with more garbage.  Id. (“while the methodology offers the benefit of flexibility, it is vulnerable to results-driven analysis, which, of course, raises significant reliability concerns”).

[I]t is imperative that experts who apply multi-criteria methodologies such as Bradford Hill or the “weight of the evidence” rigorously explain how they have weighted the criteria.  Otherwise, such methodologies are virtually standardless and their applications to a particular problem can prove unacceptably manipulable.  Rather than advancing the search for truth, these flexible methodologies may serve as vehicles to support a desired conclusion.

Paraquat, 2024 WL 1659687, at *34 (citation and quotation marks omitted).  “[A]n expert who relies on a weight of the evidence review based on [the] Bradford Hill framework must, at a minimum, explain how conclusions are drawn for each Bradford Hill criterion and how the criteria are weighed relative to one another.”  Id. at *35 (citation and quotation marks omitted).
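A toy illustration of why undisclosed weighting is “unacceptably manipulable”:  hand the very same criterion-by-criterion assessments to two analysts who silently choose different weights, and they can reach opposite bottom lines.  The scores and weights below are invented for illustration – nothing in this sketch comes from the decision or the expert’s actual analysis.

```python
# Illustrative only: identical criterion assessments, two undisclosed
# weighting schemes, opposite conclusions. All values are invented.

# Hypothetical scores (0 = no support, 1 = strong support) for a
# handful of Bradford Hill criteria.
scores = {
    "strength of association": 0.2,
    "consistency":             0.3,
    "dose-response":           0.1,
    "plausibility":            0.9,
    "analogy":                 0.9,
}

def weighted_score(weights: dict) -> float:
    """Weighted average of the criterion scores under a given scheme."""
    total = sum(weights.values())
    return sum(weights[c] * scores[c] for c in scores) / total

# Analyst A quietly emphasizes the weak, epidemiology-driven criteria...
weights_a = {"strength of association": 5, "consistency": 5,
             "dose-response": 5, "plausibility": 1, "analogy": 1}
# ...Analyst B quietly emphasizes the soft, qualitative criteria.
weights_b = {"strength of association": 1, "consistency": 1,
             "dose-response": 1, "plausibility": 5, "analogy": 5}

print(round(weighted_score(weights_a), 2))  # 0.28 -> reads as "no causation"
print(round(weighted_score(weights_b), 2))  # 0.74 -> reads as "causation"
```

Unless the weights are disclosed and justified in advance, nothing distinguishes one result from the other – which is exactly the non-falsifiability problem the court identified.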

Again, the good doctor failed miserably.  His purported “weight of the evidence/Bradford Hill analysis [wa]s a textbook example of the type of standardless presentation of evidence that courts have cautioned against.”  Id.  He never “offer[ed] any explanation of the relative weight or importance assigned to each of the [various] factors he analyzed.”  Id.  “[T]he lack of any relative weight assignments means that [his] general causation opinion is virtually non-falsifiable, one of the most basic requirements of the scientific method.”  Id.  Thus, in Paraquat, the plaintiffs’ only expert was told to go home and take his garbage with him:

Against the backdrop of [his] departure from the most basic methodological requirements of a weight of the evidence review, it is not surprising that his analysis reveals extensive selection bias.  [He] appears to have fallen prey to the temptations of selection bias in his discussion of several Bradford Hill factors, most notably those concerning a dose-response relationship and strength of association.  Because reliance on an anemic and one-sided set of facts casts significant doubt on the soundness of an expert’s opinion, [his] outcome-driven Bradford Hill analysis compels the exclusion of his general causation opinion.

Id. at *36 (citations and quotation marks omitted).  The decision goes on to dissect the expert’s treatment of the Bradford-Hill elements of “dose-response relationship” and “strength of the association” in great detail, but we don’t think we need to because most of it is case specific, and Paraquat isn’t a drug/device case.

Isolation from the Scientific Community:  Why did this expert have to go through all these bogus methodological contortions?  Paraquat also touches on this question.  He was being paid to come up with some sort of rationale for a conclusion nobody else agrees with.  Plaintiffs purchased an opinion that was “alone in the scientific community.”  Paraquat, 2024 WL 1659687, at *40.  The Rules Advisory Committee got this right in 2000:

[W]hen an expert purports to apply principles and methods in accordance with professional standards, and yet reaches a conclusion that other experts in the field would not reach, the trial court may fairly suspect that the principles and methods have not been faithfully applied.

Id. (quoting 2000 Advisory Committee notes).  Although directly asked during oral argument whether any other “peer-reviewed publication had found [the] causal relationship” in question, plaintiffs’ counsel could point to nothing but “an advocacy piece, not a scientific analysis.”  Id.  No scientist “outside of this litigation” had ever drawn the claimed causation conclusion – using this expert’s methods – or, for that matter, any other methodology.  Id.

Paraquat is venued in the Seventh Circuit, where Judge Posner famously declared in Rosen v. Ciba–Geigy Corp. that “[l]aw lags science; it does not lead it.”  78 F.3d 316, 319 (7th Cir. 1996).  The excluded expert in Paraquat “admitted that he is not aware of any peer-reviewed literature that establishes” the causal relation he was claiming.  Paraquat, 2024 WL 1659687, at *41.  His “causation theory has not been adopted or independently validated in any peer-reviewed scientific analysis outside of this litigation.”  Id. (emphasis original).  That singular result, in addition to the many methodological failings detailed above (and even more in Paraquat itself), was a final “evidentiary red flag.”  Id. (string cite omitted).

We note that meta-analysis, Bradford-Hill, and “weight of the evidence” have all (in declining order of correctness, in our opinion) been considered valid types of methodology under Rule 702.  Thus, the vast majority of the analysis in Paraquat bore directly on Rule 702(d) – the requirement that “the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.”  This is the only one of the four Rule 702 factors that the 2023 amendments changed.  It was also the factor that Paraquat most “closely scrutinized.”

We hope Paraquat is a harbinger of things to come.