After eschewing our blogging duties during a very long trial—followed by short deliberations and a verdict for the good guys—we are back at it. Normally, a significant criterion in how we select a case for a post is the length of the decision—the shorter, the better for our typically busy work lives. After trial, there is an inevitable bit of a lull, and we were willing to take on a longer decision, in part because of a scintilla of guilt at having not blogged for so long, but also because we had deposed two of the three experts at issue a few times many years ago. Back then (and for a while before and after), we were involved in a number of Daubert decisions on alleged serotonin-mediated effects of prescription drugs. We have also been involved in and blogged about a range of litigation over claimed impacts on offspring from drug use during pregnancy. Thus, the decision in Daniels-Feasel v. Forest Pharms., Inc., No. 17 CV 4188-LTS-JLC, 2021 U.S. Dist. LEXIS 168292 (S.D.N.Y. Sept. 3, 2021), covered some familiar ground. It also covered that ground well, as the general causation opinions of the three plaintiff experts were excluded as unreliable.
The issue in Daniels-Feasel was whether plaintiff’s experts had offered admissible general causation opinions about the relationship between the pregnant mother’s use of defendants’ prescription SSRI for depression and the development of autism spectrum disorder (“ASD”) in an offspring. The familiar experts, Dr. Moye (epidemiologist) and Dr. Plunkett (pharmacologist), were paired with Dr. Whitaker-Azmitia, a neuropharmacologist and something of an advocate for such a relationship. Even though these opinions were not generated solely for litigation, as is usually the case, and the defendants conceded that the qualifications of each of the plaintiff’s experts were sufficient, the court could not get past the cherry picking of evidence and the general result-driven shoddiness of their methodologies. In short, the trial judge—the current chief judge of “the Southern District”—assiduously carried out her role as gatekeeper, and the sort of “junk science” that Daubert addressed will not get to a jury. (This is our assumption at least, as a plaintiff without admissible evidence on general causation should lose at summary judgment.) Although the decision cited some pre-Daubert cases and many cases predating the 2000 amendment of Rule 702, it is a good example of why a rigorous application of Rule 702 makes sense and why the pending revisions to Rule 702 would be a good thing to adopt.
In terms of the causation theory that plaintiff’s experts espoused—and with a healthy disclaimer on our knowledge of the science here—we do note that it seemed pretty out there. The causes of and risk factors for ASD have been studied extensively. SSRI use is prevalent and has been the subject of lots of research and litigation. If there really were a relationship between maternal use of SSRIs and ASD in offspring, then one would think it would have garnered significant attention. By contrast, when the court cited Dr. Moye’s report for the proposition that “[a]lthough ‘changes in neural growth during prenatal and postnatal periods’ and genetics may play a role in causing ASD, there is no ‘gene for autism’ and the precise cause of the disorder is unknown,” we knew plaintiff’s experts were reaching. If Rule 702 and Daubert have any teeth, then how can an opinion that a drug causes a condition be admissible when science does not yet know any causes for the condition? The court did not cite it, but the old Rosen saw of “law lags science; it does not lead it” comes to mind. That said, we were surprised that plaintiff’s experts did have some studies reporting an increased risk of ASD with SSRI use. That meant the court would dig into the details of epidemiology, which is something we like to see and something we think courts do not do enough (especially those courts that excuse the absence of proof of increased risk from epidemiologic or clinical studies in an effort to let mere theories be presented to juries).
We were not surprised, however, that Dr. Moye purported to apply the Bradford Hill criteria as the key part of forming his causation opinion. He was doing that when he first appeared in litigation more than twenty years ago, and the criteria are still widely accepted in epidemiology, although often misapplied in litigation. In addition, we were pleased that the court’s recounting of the applicable law emphasized what hurdles the expert and proponent of the evidence had to clear rather than the notion that a jury could hear untested and unsupported theories as long as the opponent of the evidence had a chance to cross-examine the expert. The court also accurately presented the Bradford Hill criteria as applying only when there is an association between an exposure and a condition, rather than as a framework that can be used in the absence of supporting epidemiologic evidence. 2021 U.S. Dist. LEXIS 168292, **18-20. In doing so, the court relied heavily on the decisions from the Mirena MDL—also in the S.D.N.Y.—and the Zoloft MDL, both of which were affirmed on appeal. We posted a bunch on these decisions, which garnered cherished spots in our Top Ten lists over the last six years, and think they represent the state of the art when it comes to Daubert and general causation. See here, here, here, and here for Mirena and here, here, here, and here for Zoloft. With this background, and knowing that Dr. Moye’s causation opinion in Mirena was also excluded, the court walked through Dr. Moye’s opinion, considering not just what he said studies found but what the study authors said they actually found.
The court concluded that Dr. Moye’s causation opinion was unreliable, as “he fails to adequately support his conclusions using the selectively favorable data he relies upon, unjustifiably disregards inconsistent data, and admittedly ignores categories of relevant evidence.” Id. at *21. We could spend some time wading into the specific conclusions about how Dr. Moye’s application of the Bradford Hill criteria fared—it is a good yet weedy read—but we will instead focus on something we noticed here that seems pretty new and potentially useful. The court held that “Dr. Moye’s opinion is not generally accepted, as no regulatory agency, professional organization, peer-reviewed study, or medical treatise concludes that Lexapro causes ASD, and the FDA has approved its prescription to pregnant women.” Id. at *22. That is a double-dip on looking to what regulatory agencies say and do. Later, the court dinged Dr. Moye for failing to address a 2016 report by the European Medicines Agency (“EMA”) that concluded the then-available data “on prenatal exposure to SSRI/SNRI and ASD do not support a causal relation.” Id. at **39-40. While we do not know if the plaintiff here tried the usual dismissal of FDA and EMA as under-staffed, incompetent, pawns of industry, etc., it seems noteworthy that the court evaluated a novel causation theory with some credit to what regulatory agencies do. At a minimum, the expert should have to answer “why did everybody else looking at the same science come to the opposite conclusion?”
Next up was Dr. Plunkett (addressed here and here), who limited her opinion to whether a relationship between prenatal exposure to SSRIs and ASD was biologically plausible. “Biological plausibility” is one of the Bradford Hill criteria, but Dr. Plunkett did not try to offer a general causation opinion and the other experts did not rely on her for theirs. This probably made her opinions excludable as to causation simply based on relevance. Nonetheless, the court analyzed the reliability of her opinions. After noting that she failed all the Daubert factors of testing, peer review, error rate, and general acceptance, the court walked through her methodology. As an initial matter, the court was skeptical about Dr. Plunkett’s reliance on animal studies data without reliable evidence of correlation to humans. Id. at **45-48. Her treatment of human studies and data was “inscrutable as to the scientific method of weighting that is used and explained, and therefore falls short of meeting the rigorous examination standards of Daubert and Rule 702 under the weight of the evidence methodology.” Id. at *48 (internal citation and quotation omitted). For us, the obvious analytical gap here was whether Dr. Plunkett could identify animal behaviors or findings that correlated to ASD, not just to alterations in serotonergic pathways. Similar to what we said above, if the causes of and risk factors for ASD are not known, then evidence on biological plausibility will need to be really specific and convincing. The court recognized this later, when it noted Dr. Plunkett’s concession that “animals cannot even be diagnosed with autism in the same way humans can, because ‘human brains are different than rodent brains’ and animals ‘are not communicative in the way . . . humans are.’” Id. at *56. Without that correlation, discussion of animal studies was, at best, an interesting exercise disconnected from any causation issue.
Along the way, the court found numerous methodological failings in Dr. Plunkett’s analysis, generally categorized as cherry-picking data and failing to address contrary data. Id. at *50. As with Dr. Moye, one bit of contrary data was the EMA report, which she disingenuously said was outside the scope of her review (as a purported regulatory expert who reviewed other regulatory materials). Id. at *59. This all added up to her opinions being excluded as unreliable.
Last was Dr. Whitaker-Azmitia, who did not fail every one of the Daubert factors like Drs. Moye and Plunkett did. She had done rat studies to test her hypothesis that hyperserotonemia, high serotonin levels in developing brains, causes ASD, and she had published at least some of her work describing her hypothesis. Id. at *65. This is where form meets substance, however, as it is not enough for a hypothesis to have been presented or even published. This particular hypothesis based on animal studies had been widely rejected in the medical literature and by the EMA, and it was not supported by the “wealth of epidemiological data on the associative and causal relationship between maternal SSRI use and ASD.” Id. at **66-67. Dr. Whitaker-Azmitia’s attempt to support her opinion with epidemiological data, based on a “face validity” methodology, was rejected as unreliable for failings similar to those that plagued the other experts. Most notably, she focused her analysis on studies that might support her hypothesis and not those that might undercut it. Id. at *69. While that would no doubt have made for effective cross-examination at trial, when the trial court fulfills its gatekeeping role, the jury should never hear opinions based on a results-driven and unscientific analysis. If that is all the plaintiff can offer on causation, then summary judgment should be granted and a huge waste of judicial resources, not to mention the cost of trial and the burden on jurors, can be avoided.