Everybody knows that it is three strikes and you are out in baseball. (Bexis and Ken Burns could discuss the history of baseball’s rules on balls and strikes in the nineteenth century, but we will stick with the public consciousness at least since “Take Me Out to the Ball Game” became popular.) In MDL product liability litigation, there is no magic number of times that proffered general causation experts must get knocked out before it is apparent that no plaintiff suing over the injury at issue will have a viable case. In boxing, many jurisdictions have a three knockdown rule, where a knockout victory is awarded if the opponent is knocked down to the canvas three times in the same round. In litigation, there is no such rule, nor would the parties ever agree on what constitutes the same round.
The Zoloft MDL in the Eastern District of Pennsylvania has been playing this out. First, the MDL court excluded the general causation opinion of the plaintiffs’—all of them, since this is common discovery—epidemiology expert. Next, the court excluded the general causation opinions of three separate experts who focused on purported mechanisms. It looked like plaintiffs were dead in the water. Then, plaintiffs got a mulligan—yes, we are mixing sports metaphors like some courts mix up the standards for expert evidence on medical causation—and were allowed to put up another expert on general causation, this time a late-designated, frequent flyer biostatistician. (Along the way, a motion to reconsider was denied and, across town, a state court excluded the causation opinions of an epidemiologist and a mechanism expert in an individual case.) Now, after every apparent opportunity for plaintiffs to do better, the court also excluded the general causation opinion of the new expert in In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342, 2015 U.S. Dist. LEXIS 161355 (E.D. Pa. Dec. 2, 2015).
Whereas the first four experts whose general causation opinions were tossed had focused on whether maternal use caused birth defects in general, the new expert, Nicholas Jewell, Ph.D., focused on certain cardiac defects, which is part of why the attempted do-over was permitted. Id. at *18. He also had the benefit of something of a roadmap from the court’s view of Daubert, epidemiology, the Bradford Hill criteria, and the shortcomings of the other experts, particularly the epidemiologist. Still, he could not muster an admissible general causation opinion, which seems to reflect more on what the science shows than whether plaintiffs picked the right experts. (We say this knowing that Dr. Jewell was tossed from another MDL two weeks earlier. In re Lipitor (Atorvastatin Calcium) Mkt’g, Sales Practs. & Prods. Liab. Litig., 2015 WL 7422613 (D.S.C. Nov. 20, 2015).) To start, Dr. Jewell’s qualifications as a biostatistician were not challenged and a challenge to his lack of expertise “to make certain assumptions about embryological developments, heart defects, and categories of antidepressant medications” was rejected. Id. at **14-17. We could quibble with this holding in that, for instance, taking studies on one type of cardiac defect and/or one particular drug as evidence of causation for another cardiac defect and/or another drug can be an impermissible leap without competent foundational evidence, but a ruling on the merits of the core causation evidence matters more.
For those who enjoy wallowing in the details of such issues, the opinion is worth a careful read. Because we have discussed the prior Daubert decisions in this litigation before, we will skip some details in this post. The court looked to see if the expert could point to statistically significant support from multiple non-overlapping studies, noting that “[s]cientists are expected to address and reconcile data that does not support their opinions, and not simply rely upon data which does.” Id. at *22. Here, there were three related studies that apparently reported statistically significant increased odds ratios, while a later, larger, and more rigorous study—one that included all of the data in the first three—did not. The other two studies that reported a statistically significant increased risk for cardiac or septal defects each came with some drama. The first had been published with a statistically significant doubling of the risk of septal defects, but the lower bound of the confidence interval had been miscalculated—something the defendant’s expert figured out—and a correction had to be published to say the result was not statistically significant. The second (authored by—wait for it—the same Dr. Berard whom the court excluded last year, in part because her published statements on the subject did not go as far as her litigation opinions) came out after she was excluded.
Lo and behold, whereas the earlier abstract had reported no statistically significant increased risks, the final publication now had one—but the opinion does not say what it was. When Dr. Jewell and others tried to re-create the dubious result from the study data, they failed. Dr. Jewell had no explanation for what had happened, so the court precluded him from relying on the Berard study “as evidence of replication of statistically significant findings.” Id. at *25. Noting that the authors of recent studies had “uniformly failed to replicate the associations noted in early studies” and had “concluded that the reported association between Zoloft and cardiac birth defects may have been the result of chance, confounding by indication, or other confounders,” the court still gave Dr. Jewell a chance to offer a reliable causation opinion. Id. at *29.
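For readers who want to see why a miscalculated lower bound matters so much, the arithmetic is straightforward: the confidence interval for an odds ratio is computed on the log scale, and the result counts as "statistically significant" only if the lower bound of the interval stays above 1.0. A minimal sketch, using hypothetical counts rather than the actual study data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """95% confidence interval for an odds ratio from a 2x2 table.
    a/b = exposed cases/controls, c/d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of ln(OR)
    low = math.exp(math.log(or_) - z * se)
    high = math.exp(math.log(or_) + z * se)
    return or_, low, high

# Hypothetical counts producing a reported "doubling" of risk (OR = 2.0)
or_, low, high = odds_ratio_ci(20, 100, 10, 100)
# With these numbers the lower bound falls below 1.0, so the doubled
# point estimate is not statistically significant -- exactly the kind
# of flip a corrected lower bound can produce.
```

The point of the sketch is only that a point estimate of 2.0 says nothing by itself; whether the lower bound lands just above or just below 1.0 is what separates a reportable association from noise.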
Although the court previously had rejected Dr. Berard’s suggestion that non-significant odds ratios could be seen as a trend that supports a causation opinion, Dr. Jewell tried to say the same thing while avoiding the term “trend.” He also suggested there was a way to multiply p-values from studies with non-significant results to come up with something like statistical significance, but the court was not having it because there was no proof that this was an accepted approach and Dr. Jewell had not even tried it with the studies at issue. Id. at *32. He also could not suggest some consistency in the studies without ignoring the ones he did not like. This legerdemain did not go unnoticed: “Dr. Jewell’s selective emphasis on trends and general consistency only when such concepts support his opinion is one example of ‘situational science’ which renders his opinion unreliable.” Id. at *35. From there, Dr. Jewell had to swing for the fences, throw a Hail Mary, or launch a desperation three—depending on which sporting reference you prefer—starting with calculating heterogeneity from meta-analyses. The way he did that was “results-driven” and unreliable, in part because he ignored studies he had relied on as an expert in other SSRI birth defect litigation. Why? Because the results for Zoloft were not statistically significantly increased, which was not a good enough reason. Next, he tried to rely on various company documents that were fairly far removed from reliable evidence of causation for cardiac birth defects. Id. at **40-42.
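The opinion does not spell out exactly how Dr. Jewell proposed to multiply p-values, but the accepted way to combine p-values across independent studies is something like Fisher's method, which does not simply multiply them: the bare product of several non-significant p-values overstates the evidence. A hedged sketch with illustrative p-values (not the actual study results):

```python
import math

def fisher_combined(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the null.
    For even degrees of freedom the chi-square survival function
    has a closed form, so no external stats library is needed."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    # P(chi2_{2k} > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# Two non-significant studies, p = 0.10 each:
naive = 0.10 * 0.10                        # 0.01 -- a bare product
combined = fisher_combined([0.10, 0.10])   # ~0.056 -- still above 0.05
```

Note the gap: the naive product (0.01) looks impressively significant, while the properly combined value stays just shy of the conventional 0.05 threshold. That contrast is one reason a court would want proof that a multiplication approach is accepted before crediting it.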
As a last gasp, he tried to re-analyze and meta-analyze various published studies to pump up the ones that reported increased risk (e.g., by saying they were not confounded even if the authors said they were) or push down the ones that reported no increased risk (e.g., by saying they were confounded even if the authors said they were not).
It is appropriate for a statistician to design a study and statistically analyze the data collected when testing a hypothesis. However, results-oriented post-hoc re-analyses of existing epidemiological studies are disfavored by scientists and often deemed unreliable by courts, unless the expert can validate the need for reanalysis in some way.
Id. at *50. Dr. Jewell could not validate such a need or follow a reliable methodology in attempting his analysis, so “no reliable inferences or conclusions can be drawn from the meta-analysis conducted by Dr. Jewell for purposes of this litigation.” Id. at *56. With his every attempt to get around the absence of meaningful support from the available epidemiological studies having failed, Dr. Jewell’s testimony was excluded in its entirety and plaintiffs again failed to offer an admissible general causation opinion. It seems to us that this should be it, at least for cardiac defect claims. There will surely be an appeal and there may be arguments made in various fora that new science—perhaps authored by plaintiffs’ experts—moves the needle on general cause. At some point, though, the lack of reliable evidence of general causation should mean that the litigation is over.