
Last week we brought you both the federal and state court decisions in the Incretin Litigation granting summary judgment on the grounds of preemption.  But that was not the only obstacle in plaintiffs’ way.  Even if the claims were not preempted, plaintiffs’ experts fell woefully short of the mark.  Today we bring you part two of the state court decision – exclusion of plaintiffs’ experts.  As a reminder, these cases were dismissed back in 2015 and revived in 2018.  So, five years had passed since plaintiffs’ experts had authored their original opinions.  Five years of new science to be considered.  Five years to complete their analyses.  Five years to be thorough.  It’s telling that the court’s first words about these experts are: “The unwillingness or inability of these experts . . . to grapple with all the available epidemiological evidence is troubling.”  In re Byetta Cases, slip op. at 2 (Cal. Super. Ct. Apr. 6, 2021).  Troubling is just the beginning.

Maybe plaintiffs’ experts experienced a “blip”?  For those not versed in the Marvel Cinematic Universe (“MCU”), Thanos, a super villain, wanted to “save” the world by halving its population.  Despite heroic efforts by an extended group of Avengers, Thanos succeeded in collecting all six infinity stones.  With those and a snap of his fingers, half the people on the planet turned to dust and faded into oblivion.  They remained gone for five years, until the remaining Avengers figured out time travel and undid the blip.  Just as suddenly as they had vanished, billions returned, with no memory of the intervening five years.  For those who were gone, it was as if those five years did not exist.  So too, apparently, for some of plaintiffs’ experts, or at least they chose to remain ignorant of those years.  Plaintiffs’ experts failed to consider “all the available epidemiological evidence and not just the state of research as it existed in 2015,” leaving the court to conclude “that a medical causation opinion which not only lacks, but ignores, this essential logical support is of highly doubtful provenance.”  Id. at 3.

Meanwhile, plaintiffs’ counsel appear to have been among those who survived the blip.  They experienced the last five years, and so at oral argument they presented a host of more recent scientific references, including analyses the lawyers conducted themselves.  Putting aside the lack of a sound scientific method to support an attorney-prepared risk analysis, none of the “science” counsel touted was considered or relied upon by their experts.  Id. at 3-4.  “[P]laintiffs’ counsels’ vigorous arguments about the science do not and cannot fill the methodologic and foundational gaps that exist in their experts’ analyses.”  Id. at 5.

Let’s start with one of those gaps.  Plaintiffs’ biostatistician conducted an analysis in 2015 that found no statistically significant association between incretin-based therapies and pancreatic cancer.  Id. at 5-6.  In 2019, that same expert conducted a separate analysis, but only as to one of the incretin drugs, meaning he excluded the others and considered only a limited set of clinical trial data.  At his deposition, the expert admitted he could have re-done his analysis as to all of the drugs, but “that was not what he was asked to do” by plaintiffs’ counsel.  Id. at 6.  That was certainly not a sound scientific basis for changing his methodology in 2019.  Nor was it supported by any of the peer-reviewed literature, which found no basis for treating any of the drugs separately.  Id. at 7.  Moreover, even with the studies plaintiffs’ expert did consider, he chose to exclude certain events with no reasonable or scientific basis for doing so.  He thereby deflated cancer events in placebo groups and inflated them in incretin groups.  Id. at 7-8.  That same expert also failed to consider one of the largest observational studies ever done, performed in 2019, which found no association between the drug and pancreatic cancer.  Id. at 9.  Plaintiffs’ other experts took a similar litigation-driven approach to their opinions, reflecting “unsound, ramshackle methodology” and “an undisciplined analysis suggestive of an authorial interest on achieving certain results rather than examining the data objectively.”  Id. at 11.

Moving from epidemiology to medical causation – because a statistical association is not the same as causation.  Plaintiffs offered only one medical causation expert.  He claimed he performed a “weight of the evidence” analysis.  Putting aside whether that is a scientifically sound methodology, the problem was that the expert did not apply it in a reliable fashion.  At a minimum, the expert was required to “review the totality of available evidence” and “supply his method for weighing the studies he has chosen to include.”  Id. at 12.  He did neither.  To start, the expert admitted that he considered no epidemiological data published since 2015.  Id. at 13.  There’s that blip again.  To the extent he relied on the epidemiological experts discussed above, we already know they did not come close to considering the “totality” of the evidence.  And what evidence the expert did consider, he failed to weigh.  He could not explain what weight he gave to the studies or why.  “As a result, [the expert’s] causation analysis is a black box that cannot be examined, replicated, or reproduced by other scientists or physicians.”  Id. at 13.

As if his own flaws were not enough, the sins of the epidemiology experts were also laid at the feet of the medical causation expert, whose opinion was largely predicated on the analyses of the other excluded experts.  As a medical doctor, the expert could have evaluated whether certain events were properly included or excluded by the statistician and epidemiologist.  Instead, he blindly relied on their conclusions, which, having been excluded, were not the type on which an expert could reasonably rely.  Id. at 15.

Finally, the medical causation expert seemed to have the most to gain from a blip.  He certainly would have blipped away his 2016 peer-reviewed article in which he stated that “[s]ome data suggest insulin therapies may further increase pancreatic risk,” while some drugs may decrease risk, others “have no discernible effect,” and still others “are controversial, requiring further study.”  Id. at 15.  So, in 2016, in a peer-reviewed journal, plaintiffs’ expert could not or would not repeat his 2015 litigation-based opinion that incretin-based therapies cause or contribute to the development of pancreatic cancer.  Yet he quickly reverted to that conclusion in 2019 in the context of the lawsuit.  Id.  The only explanation offered for the “flip-flop” was that he relied on confidential materials produced in the litigation for his expert opinion.  But plaintiffs could not point to what those confidential discovery materials were.  Given his change of heart when expressing his opinion to the medical community at large, the court could only conclude that in authoring his expert opinion he did not use “the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.”  Id. at 16.

Plaintiffs’ last principal expert sought to testify as to how incretin-based therapies stimulated cell growth, which he opined led to an increased risk of pancreatic cancer.  Id.  But at deposition, he admitted his opinion was an “unproven hypothesis” and that “it was a mere hypothetical possibility that incretin-based therapies could cause pancreatic cancer.”  Id. at 17.  Since his opinion was nothing more than “inadmissible speculation,” it too was excluded.  Id.

The court also addressed three additional plaintiff experts.  The first two were clearly victims of the blip.  The first testified in 2015 that he ran out of time to finish his analysis, but still had not fixed that problem five years later.  Id.  The second testified in 2015 that there was additional research he would like to have done, but still had not done it by 2020.  Id. at 18.  Neither, it seems, offered an explanation more plausible than that they had turned to dust for five years.  Because their analyses were necessarily missing several key studies and did not consider the totality of the evidence, their “conclusion[s are] not worthy of consideration by a jury.”  Id.

The final expert studied images of lesions in baboons, but with no standard for assessing those lesions, “he simply did an ad hoc analysis applying human specimen standards.”  Id. at 19.  With no literature or other analytical support for his use of animals to draw conclusions about humans, his testimony was likewise excluded.  Id.

Having excluded all of plaintiffs’ experts, the court granted summary judgment based on a lack of evidence of general causation.  Id.  Plaintiffs tried to argue biological plausibility, but plausibility is not causation, and without causation plaintiffs cannot meet their burden of proof on an essential element of their claims.  Id. at 20.  Finally, plaintiffs tried to argue they could establish general causation through a differential diagnosis.  They relied on a case involving a type of cancer so rare it had occurred only a few hundred times in medical history.  With such a rare injury, there was a lack of epidemiological data, so the experts in that case were allowed to opine on specific causation without demonstrating general causation.  Pancreatic cancer, on the other hand and unfortunately, is not rare, and there is a wealth of epidemiological data.  Data plaintiffs’ experts failed to consider.  Plaintiffs cannot simply “skip the general causation question.”  Id. at 21.  Nor can they “blip” away the years and the evidence developed in that time that contradicted their litigation-driven opinions.  And no simple “snap” can undo those years of failing to employ scientific rigor.