As we’ve discussed, such as here, Fed. R. Civ. P. 702 was amended in late 2023, because the Civil Rules Advisory Committee concluded that too many courts were erroneously admitting expert testimony that proponents had not established was reliable.  It does appear that at least some courts are cracking down.  Here’s one from a district court within the Eighth Circuit, which is significant since the Eighth Circuit was one of the worst offenders under the prior version of Rule 702.

“[M]any courts have held that the critical questions of the sufficiency of an expert’s basis, and the application of the expert’s methodology, are questions of weight and not admissibility.  These rulings are an incorrect application of Rules 702 and 104(a).”  Sprafka v. Medical Device Business Services, Inc., 2024 WL 1269226, at *2 (D. Minn. March 26, 2024) (quoting Advisory Committee Note to 2023 Amendment).  In Sprafka, the plaintiff in a knee implant case trotted out the well-traveled p-side “expert,” Mari Truman.  She of course opined the device was defectively designed.  Id. at *3.

It didn’t go well for this litigation frequent flier, first because, as is almost always the case with repeat players, the opinion was generated solely for litigation purposes.  Courts “discount[] the reliability of experts who formed their opinions only within the context of litigation.”  Id. (citation and quotation marks omitted).  The opinions here were “not based on independent research,” and the expert “. . . developed these opinions expressly for the purpose of litigation.”  Id.

Strike one.

Nor was this litigation opinion based on good science.  Rather, it “relied heavily on anecdotal evidence” – a couple of “published case studies” and MAUDE adverse event reports – to assert that this defendant’s implant was “more susceptible to debonding” than other competing products.  That is, of course, precisely what the FDA’s MAUDE database is not intended for:

MDR data is not intended to be used either to evaluate rates of adverse events, evaluate a change in event rates over time, or to compare adverse event occurrence rates across devices.

FDA, “About Manufacturer and User Facility Device Experience (MAUDE) Database,” at “Limitations of Medical Device Reports (MDRs)” (available here).  Nor were the case reports any better.  One of them “failed to establish any ‘rate’” of failure, because it had “a numerator . . . but no denominator.”  Sprafka, 2024 WL 1269226, at *3.  Nor did it account for a raft of confounding factors.  Id. (listing factors).  The failure rate in the second study was likewise unknown.  Id. at *4.

This was bottom-dwelling data.  “Uncontrolled anecdotal information offers one of the least reliable sources to justify opinions about both general and individual causation.”  Id. (citation and quotation marks omitted).  All told, it amounted to “around thirty reports of debonding” and did “not establish [any] cause . . . or rate.”  Id. 

Conversely, the expert “discounted” much more reliable “registry data – which overwhelmingly shows that the [device] has the same or better revision rates than other devices on the market.”  Id. at *6.  Oops.  Distinguishing a pre-amendments case “involving a different device,” Sprafka found the basis of the expert’s opinion did not establish what she claimed.  Id. at *4.  “Without more, the fact that there has been reports of debonding with the . . . device does not explain how this debonding compares with other devices on the market or what caused the debonding.”  Id.

Strike two.

The expert’s report also claimed that some studies looking at something else (“radiolucent lines”) supported her position, because that other thing supposedly “represent[ed] an established modality for the prediction of component loosening.”  Id.  But the expert retracted that position at her deposition.  Id. at *5.

Strike three.

The expert also relied on “pullout strength data.”  Id.  But she “mischaracterized” that data far beyond what the study’s authors found.  “Contrary to [her] report, however, the authors could not conclude that the [device] was ‘predisposed’ to early failure, as the results of their testing showed that the [other devices] performed similarly.”  Id.

Strike four.

Finally, the expert “could not opine on the exact specifications” of a purported alternative design.  Id.  That was because she hadn’t done any of the testing that even she conceded would be required.  Id. (“she would need to do testing”).  Other “clinically successful” designs did not include the features she asserted in her report.  Id.

Strike five.

Not surprisingly, Sprafka found “several red flags with [the expert’s] methodology.”  Id. at *6.  Thus, Sprafka rejected the plaintiff’s “weight” argument:

While [plaintiff] argues that challenges to [the expert’s] testimony should go to weight and not admissibility, the Court should exclude expert testimony when it is so fundamentally unsupported that it can offer no assistance to the jury.  And while some guesswork is necessary for expert testimony, too much is fatal to admission.

Id. (citations and quotation marks omitted).

The fundamental problem was that the expert’s hypothesis “ha[d] not been tested or supported by reliable data.”  Id.  “[T]he function of expert testimony is to explain how something happened, not to speculate as to how something could possibly have happened.”  Id. (citation and quotation marks omitted).  Theories without facts produce inadmissible ipse dixit opinions.  Id.

In sum, [the expert] asserts that [an alternative design] “should perform better than” the [device at issue], but she does not have any testing, data, or peer-reviewed articles to support this proposition.  Without more, [this] opinion that a safer alternative existed is too speculative to be admissible.

Id. at *7.

One final predictable, but garbage, opinion was also excluded in Sprafka – that the defendant “should have conducted additional testing.”  Id.  Anybody can always do “more” testing.  Here, however, the expert “could not specify the parameters of such test.”  Id.  She could not point to “any other manufacturer” that did such testing.  Id.  “She did not conduct the test herself or explain the costs and benefits of such a test.”  Id.  More ipse dixit – that opinion was also “too speculative to be admissible.”  Id.

At best, [the expert] has an educated guess.  Ultimately, the courtroom is not the place of scientific guesswork, even of the most inspired sort.  Law lags science; it does not lead it.

Id. at *8 (citation and quotation marks omitted).

Exclusion of the plaintiff’s expert testimony in Sprafka was predictably fatal to all product liability claims, since “[e]xpert testimony is required under Minnesota law in products liability cases involving medical devices because they involve ‘complex medical issues with which a jury is unlikely to have experience.’”  Id. (citing one of the many Minnesota cases in our 50 state survey “Prescription Medical Product Causation – Expert Required”).

This is a final judgment.  We understand that the plaintiff recently appealed.  To us, that means Sprafka would be an excellent vehicle to argue that, in light of the 2023 Rule 702 amendments, the Eighth Circuit should repudiate its lousy pre-amendment Rule 702 precedents that the Rules Advisory Committee concluded were “incorrect.”