April might be the cruelest month according to T.S. Eliot, but the last month hasn’t been very kind to plaintiffs’ expert Nicholas Jewell, Ph.D. As we posted recently, at the beginning of December, Prof. Jewell was booted from the Zoloft MDL. See generally In re Zoloft (Sertraline Hydrochloride) Products Liability Litigation, 2015 WL 7776911 (E.D. Pa. Dec. 2, 2015). Only two weeks earlier, however, he was also given the bum’s rush in In re Lipitor (Atorvastatin Calcium) Marketing, Sales Practices & Products Liability Litigation, ___ F. Supp.3d ___, 2015 WL 7422613 (D.S.C. Nov. 20, 2015). His being shown the door in two MDLs in two weeks is pretty impressive defense work. Anybody out there in a position to turn this brace into a hat trick?
Prof. Jewell is a statistician, not a medical doctor. Lipitor, 2015 WL 7422613, at *14 (“Prof. Jewell is a statistician, not a medical doctor or medical professional. He has no expertise in diabetes, has never treated patients of any kind, and is not a clinician.”). As in Zoloft, the Lipitor plaintiffs called him in to second-guess the statistical bona fides of studies involving the drug and condition (diabetes) in question. As we discussed in detail in the Zoloft post, Prof. Jewell started with the “a priori opinion” needed by his plaintiff-side paymasters and “t[ook] a results-driven approach . . ., molding his methodology and selectively relying upon data so as to confirm his preconceived opinion.” Zoloft, 2015 WL 7776911, at *16. Thus, it’s no surprise at all that he committed the same statistical sins in Lipitor:
The Court finds that Prof. Jewell’s analysis of the [statistical] data was results driven, that Prof. Jewell’s methodology and selection of relevant evidence changed based on the results they produced, and that Prof. Jewell chose to ignore and exclude from his report his own analyses that did not support his ultimate opinions. It is apparent to the Court that rather than conducting statistical analyses of the data and then drawing a conclusion from these various analyses, Prof. Jewell formed an opinion first, sought statistical evidence that would support his opinion and ignored his own analyses and methods that produced contrary results.
Lipitor, 2015 WL 7422613, at *18.
What did he do, and why was it bad math and/or science? Here are the results in bite-sized nuggets, written so as not to raise your cholesterol level – although we make no such claims about your blood pressure:
- Prof. Jewell “mined” the data in the seven clinical trials supporting the Lipitor New Drug Application, and after a “whole lot” of effort, was able to massage from that data “a statistically significant three-fold higher incidence” of heightened blood glucose, while literally discarding any analyses that didn’t support the desired result. 2015 WL 7422613, at *3 & n.4.
- Prof. Jewell’s data mining used as a “surrogate marker” for diabetes a “single” elevated glucose result, despite his being “unwilling to testify about the role or use of blood glucose as a surrogate marker for diabetes” – while using a statistical definition for diabetes that the plaintiffs had already disclaimed. Id. at *4 (observing that “Plaintiffs state a single elevated glucose measurement is insufficient to infer diabetes”).
- When examining other data, Prof. Jewell “chose not to look at glucose measurements at all and chose not to run any statistical analyses using glucose values,” and instead relied solely on “adverse event reports.” Id. (emphasis original).
- Prof. Jewell failed to exclude from his analysis participants who went into NDA studies with a “baseline” glucose level that was already elevated. Id. at *5. This confounded his results because several times as many already-pre-diabetic subjects were in the Lipitor arm as in the placebo arm. Id. at *6. It’s hard for a drug to cause what already existed.
- Prof. Jewell “assume[d] all of the [NDA study] participants . . . had ‘clinically meaningful abnormal increases in blood glucose,’ despite data to the contrary.” Id. at *6. “Prof. Jewell had the data that proved this assumption false and chose to ignore it.” Id.
- Prof. Jewell did not exclude NDA study baseline pre-diabetics because “doing so would make the already small and limited data set even smaller,” meaning “that [he] would not be able to obtain a statistically significant result.” Id. at *7.
- Prof. Jewell’s inclusion of “participants with elevated baseline glucose is contrary to [his] methodology in all of his other analyses.” Id. at *7. In those other analyses, exclusion of such participants was the only “method by which [he] could obtain a statistically significant result.” Id.
- Prof. Jewell swapped p-values generated by one testing method (“Fisher exact test”) for those generated by another (“mid-p test”) that produced lower figures more likely to appear significant, and covered up the switch by “not report[ing] this Fisher exact p-value in his report.” Id. at *7-8. He did not use the mid-p method in any other analyses. Id. at *8.
- Prof. Jewell “lumped” placebo and Lipitor arm increases in glucose together, and then purported “to attribute the average increase to Lipitor.” Id. at *9. An actual comparison of the two arms, however, “leads to the opposite conclusion.” Id. In fact, “individuals in the placebo group had greater glucose increases than those in the Lipitor group.” Id. at *11.
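For readers curious about the mechanics of the p-value switch described above, here is a minimal, standard-library-only sketch. The 2x2 counts are hypothetical – they are not the actual Lipitor NDA data – but the point is general: on any given table, the mid-p value equals the one-sided Fisher exact p-value minus half the probability of the observed table, so it is always smaller and thus always looks more “significant.”

```python
# Hedged sketch, stdlib only: why a "mid-p" value is always smaller than the
# matching one-sided Fisher exact p-value on the same 2x2 table.
# The counts below are hypothetical, not the actual Lipitor NDA data.
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k) when drawing n subjects from N total, K of whom have events."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p: P(X >= a) under the hypergeometric null."""
    N, K, n = a + b + c + d, a + c, a + b
    return sum(hypergeom_pmf(k, N, K, n) for k in range(a, min(K, n) + 1))

def mid_p(a, b, c, d):
    """Mid-p: same tail, but only half the weight of the observed table."""
    N, K, n = a + b + c + d, a + c, a + b
    return fisher_one_sided(a, b, c, d) - 0.5 * hypergeom_pmf(a, N, K, n)

# Hypothetical table: 9/100 events on drug vs. 3/100 on placebo
a, b, c, d = 9, 91, 3, 97
print(f"Fisher exact p = {fisher_one_sided(a, b, c, d):.4f}")
print(f"mid-p          = {mid_p(a, b, c, d):.4f}")  # strictly smaller by construction
```

The gap between the two numbers is exactly the half-weight of the observed table – which is why switching tests only after the first one “returned a non-significant result” is a red flag rather than a methodology.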
None of Prof. Jewell’s statistical tricks passed Daubert muster. His exclusion – or non-exclusion – of already pre-diabetic study participants, depending on what produced statistical significance, was an “internal inconsistency” that “weigh[ed] heavily against reliability.” Lipitor, 2015 WL 7422613, at *7. Likewise, his p-value switcheroo was “results driven” because he “only used” the other test after his initial testing “returned a non-significant result.” Id. at *8. “Coming to a firm conclusion first and then doing research to support it is the antithesis of the scientific method,” and was “what Prof. Jewell has done here.” Id. at *8. His lumping glucose increase data was “misleading and results driven” because “rather than conducting statistical analyses of the data and then drawing a conclusion from these various analyses, Prof. Jewell formed an opinion first, sought statistical evidence that would support his opinion and chose to exclude his own contrary analyses from his report.” Id. at *11.
As we see so often, after his initial methodology had been thoroughly trashed, Prof. Jewell submitted the dreaded “supplemental report” trying to salvage the situation. Lipitor, 2015 WL 7422613, at *11. Not only didn’t it help, but Prof. Jewell’s additional statistical gyrations only underscored his unscientific, result-oriented approach. He “again had to try multiple statistical models before reaching the result he wanted. He did not specify beforehand which statistical model he would use in his analysis. Instead, he ‘played around with making sure [he] was getting the right result.’” Id. at *12.
Prof. Jewell re-analyzed a different published study, and “contrary” to its authors’ published results, managed to wring out “an association between Lipitor and new-onset diabetes.” Id. at *13. The difference was largely because, while “researchers used adjudicated data, . . . Prof. Jewell used unadjudicated data.” Id. “Adjudication” is a pre-specified process for analyzing targeted “events” (such as the onset of diabetes), and this study entrusted that process to “an independent and blinded endpoint committee.” Id. Good statisticians (who conducted the original study) prevent their data from being manipulated after the fact. Bad statisticians (such as frequent flyer plaintiffs’ experts) know that after-the-fact slicing and dicing of data can almost always come up with something significant. Id. at *13 n.31. After all, since significance is usually defined as a one-in-twenty chance of error, running more than twenty analyses on various data sets almost guarantees that some random, erroneous result will ultimately pop up. See Reference Manual on Scientific Evidence, at 256 (3d ed. 2011) (“If enough comparisons are made, random error almost guarantees that some will yield ‘significant’ findings, even when there is no real effect.”).
That’s essentially what Prof. Jewell did. We don’t know exactly how many analyses, with made-up endpoints, Prof. Jewell actually did before he found one that worked – because he “chose not to keep his analyses that were not part of his report,” id. at *8 n.19 – but eventually, utilizing “lab values of his choice,” id. at *15, he was able to invent something he could call “significant.” This didn’t pass Daubert muster either:
The [study] data did not support Prof. Jewell’s conclusions, so he first simply “chose” to ignore the study and “not to study the data in [it].” . . . Then he, without explanation, chose to ignore and not consider the adjudicated data of new-onset diabetes. Despite the fact that he didn’t “quite know what [new-onset diabetes] means,” he decided that, instead of using the data adjudicated by a blinded committee of clinicians that did understand the term, he would use unadjudicated raw data (particular lab values) that conveniently resulted in a statistically significant finding. This is the very definition of cherry picking data to reach a pre-determined conclusion.
Lipitor, 2015 WL 7422613, at *16 (record citations omitted). Exclusion was further warranted by precedent that “warns against use of medical literature to draw conclusions not drawn in the literature itself.” Id. “[G]eneral causation opinions cannot be based on cherry-picked studies and the avoidance of all contrary evidence.” Id. at *17.
Salvador Dali once said, “The difference between false memories and true ones is the same as for jewels: it is always the false ones that look the most real, the most brilliant.” Dali was a surrealist, which is fine in art, but not a good idea in statistics. The court in Lipitor did the right thing, excluding this false Jewell, so that no jury would be confused by his faux “brilliance.”
* * * *
While we’re on the subject of the Lipitor litigation, we would be remiss not to mention another excellent expert-related decision recently handed down in the same MDL. In In re Lipitor (Atorvastatin Calcium) Marketing, Sales Practices and Products Liability Litigation, 2015 WL 6941132 (D.S.C. Oct. 22, 2015), the court held that plaintiffs could not, consistently with accepted science, attempt to prove causation without first establishing the doses of the drug to which they were individually exposed. Plaintiffs could not submit causation opinions in a pharmaceutical case that did not account for the doses actually taken by specific individuals:

Th[e] question of whether there is a no-effect threshold dose is raised in a variety of toxic substance areas. Indeed, the idea that the ‘dose makes the poison,’ in other words that there is a safe dose below which an agent does not cause any toxic effect, is a “central tenet of toxicology.” For agents that produce effects other than through genetic mutation, it is assumed that there is some level that is incapable of causing harm. Even Plaintiffs admit that they do not claim Lipitor causes diabetes in very small doses but only at therapeutic doses. Thus, it is not surprising that the question of a threshold amount is also raised in the context of pharmaceutical drugs, particularly where the experts agree there is a dose-response relationship and studies at low therapeutic doses of the drug do not show an association.
Id. at *2 (citations and quotation marks omitted). “[E]specially in cases like this one where studies have found no association between low doses of a drug and a particular adverse effect, the requirement of stating whether the drug is capable of causing the adverse effect at particular dosages serves the same purpose of weeding out meritless claims.” Id. at *6.
Thus, the court imposed on the plaintiffs something that, had it occurred earlier in the litigation, might have been viewed as a Lone Pine order – requiring that they all come up with new expert reports that take account of their individual levels of exposure to the drug, but don’t expand their opinions in any other way:
[A]t least where the experts agree that there is a dose-response relationship and where there is evidence that an association no longer holds at low doses, dose certainly matters, and Plaintiffs must have expert testimony that Lipitor causes, or is capable of causing, diabetes at particular dosages. The Court will allow Plaintiffs’ expert(s) to submit supplemental reports addressing whether Lipitor causes diabetes at particular dosages but with specific parameters discussed below.
(A) Plaintiffs may not retain new experts. . . .
(B) . . . These report(s) may only address the issue of whether Lipitor causes diabetes at [specified] dosages . . . . For each dosage level on which the expert opines, the report must set forth the facts and data that form the basis for the expert’s opinion(s) that Lipitor causes diabetes at particular dosages and describe the methodology used to reach her opinion(s).
(C) No such supplemental report may rely on Dr. Jewell. . . . The Court intends to exclude this testimony under Rule 702 by separate order.
(D) An expert may only consider and rely on studies or data [already] submitted to the Court . . . or specifically cited in an expert’s prior report.
Lipitor, 2015 WL 6941132, at *6-7. This is the kind of order we like to see in MDLs. All too often, this type of aggregated litigation allows plaintiffs to persist with weak cases that would never survive individualized scrutiny.