
You’re stuck with Bexis this time. The recent Zandi v. Wyeth decisions that we’re going to discuss are too close for comfort to some matters that Herrmann’s defending – so he’s taken a pass on this one. You may now flee for the exits. Line forms on the right. No pushing.
We’ve remarked before that the Women’s Health Initiative’s rather abrupt and intemperate response to the initial detection of an association between breast cancer and hormone replacement therapy (“HRT”) has recently become something of an embarrassment to that august body. As is so often the case, the more mature science looks rather less frightening than it did at first glance.
That, as much as anything, explains the result in Zandi v. Wyeth, No. 27-CV-06-6744 (Minn. Dist. Ct. Oct. 15, 2007). In Zandi, a state-court judge threw out the plaintiff’s causation experts on “junk science” grounds and granted summary judgment in an HRT case. This win had to come at just the right moment for Wyeth – coming as it did at about the same time the company was suffering through being home-cooked to the tune of more than $100 million in a Nevada jackpot-justice HRT proceeding.
Nine digits in an HRT case in one state. Summary judgment in another. We can’t put into words just how much we love America’s federalist system of tort law.
We can’t tell you what happened in Nevada because, if there are opinions, we haven’t seen them. We’ve got the two Minnesota opinions in Zandi – one on expert testimony (this one also has a Westlaw cite: 2007 WL 3224242), and the second on summary judgment – so at least we can take a crack at what happened in Minnesota.
Ms. Zandi produced expert reports from two doctors, Lester Layfield, M.D., and Gail Bender, M.D. Both had plenty of general training and experience. The problem was that, fancy-sounding as that training and experience was, they just didn’t use it.
We see this happen so infuriatingly often in this type of litigation that … that…. Well, suffice it to say that if Louis Pasteur had taken money from plaintiffs’ lawyers, we’d have jury verdicts based upon spontaneous generation.
Anyway, it appears that Dr. Layfield set out to determine the specific cause of Ms. Zandi’s breast cancer, even though he had never done this before in his life. Slip op. at 9 (“admits he has never determined the cause of breast cancer in a particular individual”). He even knew there was no accepted “protocol” for doing this. Id. at 8. What did he do, then? Well, he relied on a particular test (called Ki-67) that everybody in the medical profession apparently knows is not used to determine patient-specific causation. Id. For anyone who’s interested, there’s a whole chunk of the opinion (pp. 22-24) explaining just how and why Dr. Layfield went wrong in trying to apply this test to plaintiff-specific causation. We’ll skip it. There’s a limit to wonkishness – even for us.
Dr. Bender at least had determined patient-specific breast cancer etiology a dozen times over the preceding five years. Id. at 9. If that doesn’t sound like much, you’re right – it isn’t. Just more than the other guy. But she apparently does this sort of thing on a gestalt basis, with “no protocol or guideline” that she could describe. Id. All she did – at least all she told the court – was “consider[] whether a woman was on HT and whether the woman developed breast cancer during that time.” Id. The court rightly found that was no methodology worthy of the name.
Even though these opinions were nothing more than gussied-up statements of the post hoc ergo propter hoc fallacy, plaintiff tried to salvage them – by seeking to trash the relevant legal standard. She claimed that, because Minnesota still follows the Frye general acceptance rule, there are no safeguards at all unless the expert opinion employs “novel” methodology. Id. at 11.
As opposed to what kind of methodology? None? In a weird way, plaintiff did have a point though. Expert opinions based on no methodology at all are hardly “novel” in mass tort litigation. That shouldn’t make them admissible.
Fortunately, it didn’t.
First, the court pointed out that plaintiff was confusing apples and oranges, since “studies showing correlation” are hardly the same as “studies showing causation.” Id. That was plaintiff’s first fallacy. The second was more fundamental, and goes to what epidemiology can and can’t do. Epidemiology is great for identifying “relationships over large populations.” It is lousy at establishing causation in individuals. Indeed, trying to do the latter is an utter misuse of epidemiology. Id. at 11-12. That’s why we think epidemiology should be essential in proving the former and useless in proving the latter (except for establishing a negative). The court, to its credit, thinks enough of this point to return to it later in even more detail. Id. at 17-20.
This is the whole general versus specific causation dichotomy…. It’s been around for what? Close to forever, it seems. Breast cancer is a non-specific injury – it has many causes, not all of which are even known. For all their fancy training and experience, these witnesses just didn’t get it.
Plaintiff retainers have a way of doing that to people.
That wasn’t all. These experts even bollixed the epidemiology. They relied on studies for support rather than illumination – simply regurgitating their conclusions without any concern for their inherent limitations. Thus the court pointed out that in their opinions, “there is no evidence . . . of the[ir] reliability, error, bias, or accuracy.” Id. at 12.
Ouch.
That by itself was plenty to exclude these experts, but there was more. It’s well established that, for an epidemiologic study even to get to first base in a tort case, it must show an association between the suspected agent and the disease of at least twice (a relative risk of 2.0) the background rate – translating (we won’t bore you with the math) to the classic tort “more likely than not” standard. Only after a study meets the 2.0 threshold do we even bother trying to see if there’s any way to particularize that result to an individual plaintiff. Loads of cases hold this. Allison v. McGhan Medical Corp., 184 F.3d 1300, 1315 n.16 (11th Cir. 1999) (“[t]he threshold for concluding that an agent more likely than not caused a disease is [a relative risk of] 2.0”); Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311, 1320 (9th Cir. 1995); DeLuca v. Merrell Dow Pharmaceuticals, Inc., 911 F.2d 941, 958-59 (3d Cir. 1990); In re Silicone Gel Breast Implants Products Liability Litigation, 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004); Soldo v. Sandoz Pharmaceuticals Corp., 244 F. Supp. 2d 434, 533 (W.D. Pa. 2003); Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 591 (D.N.J. 2002); Siharath v. Sandoz Pharmaceuticals Corp., 131 F. Supp. 2d 1347, 1356 (N.D. Ga. 2001), aff’d, 295 F.3d 1194 (11th Cir. 2002); Pozefsky v. Baxter Healthcare Corp., 2001 WL 967608, at *3 (N.D.N.Y. Aug. 16, 2001); Grant v. Bristol-Myers Squibb, 97 F. Supp. 2d 986, 992 (D. Ariz. 2000); In re Breast Implant Litigation, 11 F. Supp. 2d 1217, 1226-27 (D. Colo. 1998); Pick v. American Medical Systems, Inc., 958 F. Supp. 1151, 1160 (E.D. La. 1997); Sanderson v. International Flavors & Fragrances, Inc., 950 F. Supp. 981, 1000 (C.D. Cal. 1996).
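For anyone who does want the math, it’s short. A sketch, under the standard simplifying assumptions (an unbiased, unconfounded study): the probability that the exposure caused a given exposed case is approximated by the attributable fraction, which crosses the 50% “more likely than not” line exactly at a relative risk of 2.0:

\[
P(\text{causation}) \approx \frac{RR - 1}{RR}, \qquad \frac{RR - 1}{RR} > \frac{1}{2} \iff RR > 2.
\]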
This 2.0 stuff isn’t anything new, and it isn’t exactly rocket science either. It’s so accepted a point that the federal court system includes it in what it teaches federal judges about statistics – and has for years. M. Green, “Reference Guide on Epidemiology,” Reference Manual on Scientific Evidence, at 384 (Fed. Judicial Center 2d ed. 2000). One would have thought that these supposed “experts” would have understood at least this point.
What did we say earlier about the adverse side effects of retainers?
Of the large number (around 40) of epidemiologic studies on estrogens (being simplistic, that’s what’s in HRT) and breast cancer, only a couple show a relative risk above 2.0. The court wisely refused to allow plaintiff’s experts to “cherry pick” – that is, to ignore the great weight of the evidence and just rely on one or two scraps that happen to support their bought-and-paid-for thesis. The court instead pointed out the obvious: “the studies provide inconsistent results about the degree to which the risk is increased.” Zandi (experts), slip op. at 13. The cherry picking about the 2.0 relative risk was particularly offensive in Zandi. It happened repeatedly, and extended even to parts of the same study:

First, this language was taken from only a fraction of the approximately 40 studies that Plaintiff has submitted. Second, the results of the studies vary substantially and many of the studies report a relative risk of less than 2. Third, the plain language of the studies Plaintiff has submitted speaks only of association and risk, not of causation. Fourth, some of the 12 studies that Plaintiff relied on for this point showed a relative risk of less than 2 in some groups and populations, and a relative risk of more than 2 in other groups and populations. Plaintiff highlighted only those of the figures which supported her contention of a relative risk of greater than 2. Finally, there is no evidence that this relative risk is statistically significant.

Slip op. at 13-14 (emphasis added). This sort of manipulation of statistics speaks for itself. We won’t even bother inserting our own adjectives.
Just think what a field day plaintiffs’ lawyers would have if our clients submitted this kind of analysis to the FDA – or, worse, if the FDA accepted it.
The forty or more studies therefore boiled down to snippets from only two. One of them was not a study at all, but an analysis of the Women’s Health Initiative study mentioned at the beginning of this post. This particular analysis was essentially a cherry pick of a cherry pick, because the author (hostile to HRT) had already cherry picked the WHI study – and plaintiff cherry picked the article. The court pointed out that WHI didn’t show a relative risk of 2.0 in the first place and made “no conclusions as to causation.” Slip op. at 15. That left one study out of 40, which the court observed hardly met the “general acceptance” standard that applies in Minnesota. Id. at 16. Indeed, at the conventional 5% significance level, chance alone would be expected to produce a false positive or two among 40 studies.
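To put rough numbers on that chance point – a back-of-the-envelope sketch, assuming (simplistically) 40 independent studies, each tested at the conventional 5% significance level:

\[
E[\text{false positives}] = 40 \times 0.05 = 2, \qquad P(\text{at least one}) = 1 - 0.95^{40} \approx 0.87.
\]

Real studies overlap in data and methods, so they aren’t truly independent – but the point stands: a lone supportive result out of 40 is exactly what chance would hand you.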
The court also pointed out that, because of the difference between causation in the abstract and causation in specific individuals, even if it were possible to convert mere correlation into causation, it was not possible to translate causation of “some” cancer into causation of “this” cancer. Id. at 20-21. The court found “no logical connection” between the generic material the plaintiffs’ experts put forward with the plaintiff-specific causation opinions they purported to offer. Id. at 21. It excluded them. In Daubert-speak (not spoken in Minnesota), the court identified a problem of “fit.”
Finally the court took dead aim at what, in this area of the law, is truly the “last refuge of a scoundrel” (with apologies to Samuel Johnson) – that is, the purported “differential diagnosis.” Of course, such diagnoses were offered here, but they suffered from insurmountable flaws, even by the low standards of this questionable technique:

[T]heir attempt at differential diagnosis fails, and is unreliable, for two reasons: 1) breast cancer does not lend itself to differential diagnosis because the scientific community has not accepted that breast cancer has a limited number of discrete and recognized possible causes such that ruling out one cause would implicate another; 2) even if the Court could assume that the risk factors for breast cancer could be considered “other possible causes,” the Doctors did no more than give conclusory statements, with no reasoning or foundational basis, to rule out Plaintiff’s other risk factors.

Slip op. at 26-27. Consequently the supposed differential diagnoses were “conclusory,” “circular,” and in the end the experts “could not tell any woman to a reasonable degree of medical certainty whether she would get breast cancer or not.” Id. at 27-28.
All of these are valid points, but to us, the first is really important. It serves as a much-needed conceptual limitation upon the just-as-much-abused technique of differential diagnosis. Unless a particular malady has a limited number of possible causes, the process of elimination (which is all differential diagnosis really is) simply doesn’t work reliably. There’s just too much else remaining out there unexplained. Kudos to this court for “getting it.”
The other shoe dropped in the second Zandi opinion, which applied the exclusion rulings and entered summary judgment. While the result was pretty much foreordained, the court did have at least something useful to say in this opinion. It conducted a choice of law analysis and applied the law of the plaintiff’s domicile. Summary judgment slip op. at 4-5. Especially since Minnesota has a capacious consumer fraud statute (a claim this New York plaintiff asserted, but dropped), we’re interested in rulings that individualize the choice of law analysis in this way.
We’ve discussed this issue before. Anything that prevents class certifications, we like.
Most of the rest of the opinion (pp. 6-8) was simply, “you can’t recover under [fill in the blank] cause of action without proving causation.” That’s good. That’s right. But it’s not very interesting. So we’ll skip it. If you need it, you know it’s there.
Finally, presumably under the impression that the best offense is to be really offensive, the plaintiff had moved – in the face of the unmasking of all of her experts’ deficiencies – to add a claim for punitive damages. That motion became moot once all of the substantive causes of action were dismissed. Id. at 8.
We’ve discussed before that things seem to be looking up a bit in Minnesota. Previously, we were only considering the application of that state’s statute of limitations. Zandi’s more of an acid test, because not every drug/medical device case is subject to that kind of defense. Every case of this type does, however, need expert witnesses. We’ll be watching this one closely on appeal.