Having three experts couldn’t save plaintiff’s claims in Robinson v. Davol Inc., 2019 WL 275555 (7th Cir. Jan. 22, 2019).  Plaintiffs’ decedent underwent surgery involving a surgical mesh patch and, approximately one year later, developed an abdominal wall abscess that led to various infections and, ultimately, to her death.  Id. at *2.  The model of the mesh patch used in the surgery was the subject of a recall a few months after the decedent’s surgery.  Id.  The reason for the recall was that the design of the patch caused it to adhere to the bowel or to break and perforate organs.  Id. at *1.

An autopsy concluded that decedent’s death was caused by pneumonia and complications therefrom.  The autopsy report also noted abdominal adhesions but stated that the “small bowel and colon [were] intact without perforation.”  Id. at *2.  Despite that conclusion, plaintiffs filed suit under the Indiana Products Liability Act, alleging that decedent’s injuries were caused by the mesh patch.  The district court granted summary judgment after excluding all three of plaintiffs’ experts, and the Seventh Circuit affirmed.

Plaintiffs’ first expert was Dr. William Hyman, a biomedical engineer who opined that the device was “inherently dangerous” and then speculated that the device caused decedent’s injuries.  But Dr. Hyman had to concede that he had never examined the images of the mesh patch used in decedent’s surgery and that he wasn’t qualified to offer an opinion “on the microbiology of her infection.”  Id. at *3.  In other words, he wasn’t offering a causation opinion.  First strike.

Plaintiffs’ second expert was the coroner who had performed decedent’s autopsy.  He tried to distance himself from his original findings by testifying that “there could have been superficial breaches scarred over with additional inflammation” and that the adhesions “suggested the possibility of a breach.”  Id.  As it turns out, plaintiffs neglected to disclose that they would be relying on the coroner as an expert, and so his opinion was excluded on that ground.  Id.  Strike two.

That left only plaintiffs’ medical expert, Dr. Stephen Ferzoco.  Dr. Ferzoco had testified in other mesh cases where the device broke or adhered to the intestines.  But because those things didn’t happen to decedent, Dr. Ferzoco came up with a new suggestion of causation – that the device didn’t break, but “buckled,” rubbed up against the bowel, and caused a perforation that “sealed up” prior to the explantation of the device.  Id. at *2.

New theories may be OK, but they still have to pass Daubert.  Dr. Ferzoco’s causation theory had never been presented in any formal or professional setting and was not published in any medical literature.  His only support for his theory was that he had seen it in other patients, but he was unwilling to identify those patients or provide their medical records for corroboration.  He also admitted that there was no support for his conclusions in either the decedent’s medical records or the autopsy report.  Id. at *3.  So the district court excluded his opinion as failing to meet the reliability threshold of FRE 702.

On appeal, plaintiffs challenged the court’s ruling only as to Dr. Ferzoco, without whom they could not establish medical causation, an essential element of their claim.  The Seventh Circuit agreed with the district court’s reasoning based on the facts identified, but it also had one more argument to consider.  Plaintiffs argued for the first time on appeal that Dr. Ferzoco’s conclusion was the “equivalent of a differential diagnosis.”  Id. at *4.  Putting aside the procedural rule that you can’t raise issues for the first time on appeal, do we really need to bastardize differential diagnoses any further?  Doctors perform differential diagnoses to diagnose disease in their patients so they can eliminate possibilities and prescribe the most effective treatment.  They do not use differential diagnoses in the regular course of clinical practice to determine substantial factor causation, which is a litigation-driven concept.  You can find plenty of our thoughts on differential diagnosis in prior posts.  We certainly don’t welcome any equivalents.

Fortunately, the court didn’t have open arms either.  An expert’s decision “to rule in or rule out potential causes must itself be scientifically valid.”  In other words, you can’t “rule in” an unsupported causation theory.  “Dr. Ferzoco needed to establish the reliability of his [buckling] theory in order to rule [it] in as a potential cause of [decedent’s] death.”  Id.  “Differential diagnosis” isn’t a magic incantation that gets a causation opinion over the Daubert hurdle.  All the standard scientific processes must be in place.

And with strike three . . . plaintiffs are out.

Happy birthday, Hans Mattson. Did you, dear reader, forget? No worries. Mattson was born in Sweden in 1832, played a key role in Swedish settlement in Minnesota, served in the U.S. Civil War, and was consul to India. It’s been a long time since he blew out any candles, so your oversight will offend nobody. But Mattson’s birthday reminds us of how much we like Minnesota, with its pleasant people, strange accents, Bob Dylan, Kevin McHale, law school classmate Senator Amy Klobuchar, Prince, Morris Day, and pro football team that is often good, but never good enough.

Last March, we reported on a Minnesota trial court decision that was more than good enough. The court excluded a plaintiff expert’s opinion because it was not generally accepted. We liked that opinion. It turns out that the Minnesota appellate court did, too, and today we will share that affirmance with you. The case is In re: 3M Bair Hugger Litigation, 2019 WL 178498 (Minn. Ct. App. Jan. 14, 2019). This is the Bair Hugger litigation, which has a backstory straight out of Dostoyevsky or the most tawdry soap opera. The Bair Hugger is a forced-air warming device (FAWD) used to maintain patients’ normal body temperature during surgery. The inventor of the device ran into some legal problems and left the company. He then invented a rival device, which he charmingly called the Hot Dogger, and proceeded to compete with his former product. His notion of competition was a bit brutal. He did not simply claim that the Hot Dogger was better than the Bair Hugger. Instead, he claimed that the Bair Hugger increased the risk of surgical-site infection (SSI). The FDA investigated the claims that the Bair Hugger increased the risk of bacterial contamination and rejected them.

Then the inventor (whose name is Augustine) funded a study purporting to find an association between the Bair Hugger and increased SSIs.  One of the study’s authors, a former employee of Augustine, testified that “[t]he study does not establish a causal basis” and characterized it as marketing rather than research.  In 2017, the FDA sent a Safety Alert to healthcare providers “reminding [them] that using thermoregulation devices during surgery, including [FAWDs], ha[s] been demonstrated to result in less bleeding, faster recovery times, and decreased risk of infection for patients”; advising them that “[a]fter a thorough review of available data, [the FDA was] unable to identify a consistently reported association between the use of [FAWDs] and [SSIs]”; and recommending “the use of thermoregulating devices (including [FAWDs]) for surgical procedures.”  This is all very good news for the Bair Hugger and very bad news for its competitors.

Did Augustine confess error? He did not. Did Augustine take this setback lying down? He did not. He supported lawsuits against the Bair Hugger. Those lawsuits depended on medical experts who linked the Bair Huggers to infections, even though said experts had never previously studied the efficacy of FAWDs or published peer-reviewed articles relevant to the claims in this litigation, and none of them claimed that their general-causation opinions were generally accepted within the relevant scientific community. The Bair Hugger folks quite rightly moved to preclude these threadbare opinions. The trial court excluded the testimony of appellants’ experts and consequently granted respondent summary judgment with respect to general causation. That is the opinion we reported on last year, and that is the opinion upheld by the court of appeals. Oddly, the last word of the appellate opinion is the usual “Affirmed.” Having watched Fargo at least ten times, we would have thought the court would have concluded with a “You betcha!”

The issue turned on application of Minn. R. Evid. 702, which governs expert testimony. When you see “702” and “experts,” you might think of Federal Rule of Evidence 702 and the Daubert requirements. But Minnesota does not follow Daubert. Rather, Minnesota calls itself a “Frye-Mack state.” Frye is the old “general acceptance” test, announced a long time ago, which now finds itself ousted from the federal courts and most state courts. But the Frye test is alive and well in the land of ten thousand lakes. The key Minnesota case is called Mack. Get it? Under the Frye-Mack test, if an expert’s opinion involves a “novel scientific theory,” the proponent must establish “that the underlying scientific evidence is generally accepted in the relevant scientific community.” The Minnesota appellate court reviewed de novo whether the underlying scientific evidence was generally accepted in the relevant scientific community. (Federal appellate courts reviewing Daubert decisions usually employ a less rigorous abuse-of-discretion standard.)
The appellants tried to escape the Frye-Mack general acceptance test by arguing that the science used here was not “novel” within the meaning of Minn. R. Evid. 702. Nice try. But the court held that Minn. R. Evid. 702 pertains to “novel scientific theory,” not novel science. The fact that air and particulate movement is not a new science does not mean that appellants’ premise, i.e., that FAWDs increase the risk of SSIs, is not a novel scientific theory. So much for that first line of attack. (This “novel scientific theory” issue is quite different from a typical Daubert analysis.)

Now on to the main feature.

The appellants repeated their argument from below that Minn. R. Evid. 702 does not require a showing that “proof of the expert’s ultimate opinion” is generally accepted; instead, it requires only a “showing that the tools employed by the expert[s] in reaching their opinion [i.e., the experts’ methodologies] have gained general acceptance.” The appellate court agreed with the trial court that Minn. R. Evid. 702 is not restricted to novel methodologies; it refers to “opinion or evidence involv[ing] novel scientific theory” for which “the underlying scientific evidence is generally accepted.”

The appellate court deferred to the assessment of the relevant scientific community rejecting appellants’ novel scientific theory (that’s the swell, easy thing about the Frye test – it is really about deferring to scientific consensus) and concluded that there is no demonstrated causal relationship between FAWDs and increased risk of SSI.  Accordingly, the appellate court affirmed the district court’s decision to exclude appellants’ experts’ evidence.  It also affirmed the trial court’s summary judgment as to general causation.  It occurs to us that this holding – that regardless of methodology, a crazy result is excludable – indicates that a properly enforced “general acceptance” based exclusionary rule can be as good as, or even better than, Daubert.

In any event, this Bair Hugger opinion is useful and nice. Minnesota nice.

This guest post is from long-time friend of the blog Bill Childs, from Bowman & Brooke, who also wishes to thank Elizabeth Haley for research assistance.  It’s a reworking of a piece on bogus scholarly literature that Bill previously published here.  We thought it was both good and relevant enough that we approached Bill with a request to re-run it as a guest post on the Blog, and he graciously accepted.  As always, our guest bloggers are 100% responsible for the content of their posts (and here that disclaimer also extends to B&B and its clients), and deserve all the credit (and any blame).

**********

The Daubert court, in interpreting Rule 702 of the Federal Rules of Evidence, laid out various non-exclusive criteria for consideration in evaluating proposed scientific evidence, one of them being peer review. As the Court put it:  “The fact of publication (or lack thereof) in a peer reviewed journal…will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”  Daubert v. Merrell Dow Pharms., 509 U.S. 579, 594 (1993).  Peer review, or the absence thereof, was mentioned repeatedly by the New Jersey Supreme Court in endorsing Daubert in the recent decision in In re: Accutane Litigation, 191 A.3d 560, 586, 592, 594 (N.J. 2018).  Among other things, the Court noted that the plaintiffs’ expert had not submitted “his ideas…for peer review or publication,” considering that failure to be a strike against his methodology. Id. at 572.

Compared to other Daubert factors (or those described in the subsequent comments to Rule 702), the presence or absence of peer review may seem more binary − i.e., easier for a court to evaluate − it’s either there or it’s not, it seems.  Not so, either in the traditional sense of peer review or in the changing world of things that now get called peer review.  Because of this perceived simplicity, though, the factor frequently gets less attention than it deserves.  Litigants should think about peer review as being more complex than it appears, and in some specific contexts, additional exploration − whether through discovery into your adversaries’ experts or early investigation of your own potential experts − may make sense.

Daubert vs. Predator

One fascinating consequence of this consideration of peer review in the Daubert context is the potential for experts publishing litigation-related work in what are called “predatory journals” (sometimes also called “vanity publications”).  See Kouassi v. W. Illinois Univ., 2015 WL 2406947, at *10-11 (C.D. Ill. May 19, 2015); Jeffrey Beall, “Predatory Publishing Is Just One of the Consequences of Gold Open Access,” 26 Learned Pub’g 79-84 (2013); John Bohannon, “Who’s Afraid of Peer Review?” 342 Science 60-65 (Oct 4, 2013).

Predatory journals, like the eponymous Predator in the 1987 film and its 2018 reboot, camouflage themselves.  They make themselves look not like the Central American jungle background, but like legitimate medical or scientific journals.  Their publishers’ websites generally look like legitimate publishers’ websites (if sloppy at times), their PDFs look like “real articles,” and their submission process might even look normal.  They’ll even claim to have peer review and editorial boards and all the rest of what you expect from journals.  Like the Predator, they even try to manipulate their editorial voices to sound like real journals.

These journals are, however, just aping the façades of real journals.  They typically do not have legitimate peer review processes − or possibly any review processes at all.  Frequently, if an author pays the exorbitant fees, the submitted article will get published.

Myriad examples exist revealing such journals as frauds.  My favorite is probably the publication of a case report of “uromysitisis,” an entirely fictional condition − first referenced in Seinfeld as a condition from which Jerry claims to suffer after being arrested for public urination − by the purported journal Urology & Nephrology Open Access Journal.  The author of the intentionally nonsensical article − not a urologist, nor a medical doctor at all − wrote about his experience here.  After that article’s exposure as an obvious fake, and something that even the most casual of reviewers should have rejected, the article was removed, but the “journal” is still up and publishing on the MedCrave site, described, a bit awkwardly, as “an internationally peer-reviewed open access journal with a strong motto to promote information regarding the improvements and advances in the fields of urology, nephrology and research.”  A few years earlier, a computer scientist published an article consisting solely of the phrase “Get me off your [obscenity] mailing list,” with related graphs, repeated for eight pages.  That journal remains in existence as well.

Such journals are largely set up to entrap new (and naïve) scholars who are under tremendous pressure to publish for promotion and tenure purposes − but they also can provide an opportunity for dubious expert witnesses to get something published they can cite as “peer reviewed,” especially as courts more and more often note the presence or absence of peer review.  It isn’t news to many litigation experts that having peer review for some of their more outlandish assertions can increase the odds of their testimony being admitted.  If an expert in fact has published in a predatory journal (and it can be shown that the expert knew or should have known it), that fact should count against the admissibility of the testimony.

Given the camouflage, it is fortunate that there are resources and strategies that can help identify such publications.  Retraction Watch, published by the Center for Scientific Integrity and headed by science writer Adam Marcus and physician and writer Ivan Oransky (full disclosure: Ivan and I are friends, based in large part on our shared love for power pop like Fountains of Wayne and western Massachusetts bands like Gentle Hen; he should not be blamed for my Predator references), while not focused solely (or even largely) on predatory journals, is an accessible look at the world of retractions “as a window into the scientific process.”  They keep an eye out for interesting developments in the world of predatory journals, and scientific publications generally, and their coverage is what made me suspicious when, in one of my cases, an adversary’s expert’s article was published by a MedCrave journal (home to the Seinfeld article).  Retraction Watch’s coverage of that article led to what I assume will be the only time in my career I had the chance to ask a Ph.D./M.D. if he was familiar with Seinfeld and if the show is, in fact, fiction, based on his publishing in − and in fact being listed as an editor of − another MedCrave journal.

There is also a list of suspected predatory journals archived at Beall’s List.  The appearance of a journal on that list is not conclusive evidence that it is predatory, but it is enough to raise questions.  The removal of a journal from the Directory of Open Access Journals for “editorial misconduct” or “not adhering to best practices” (see list, here) is another giveaway.  The Loyola Law School’s “Journal Evaluation Tool” can also provide a useful rubric, accessible to non-scientifically-trained lawyers, for evaluating whether a journal is likely legitimate or not. And your own experts can likely provide feedback to you about journals.

Most experts will not have published in predatory journals.  But it is still worth the time to explore the question, especially about pivotal articles on which the experts are relying − whether the expert is your adversary’s or your own.  Even if the publication offer was innocently accepted (i.e., even if the author did not realize she was publishing in a predatory journal), the publisher’s lack of rigor in evaluating the article should at a minimum eliminate any weight given to the peer review factor.  And if an author has intentionally published in such a journal, that should be the equivalent of an intentionally false statement in a C.V.

Not All Peer Review Is the Same

Of course, these relatively new faux journals are not the only way experts get published.  Consider the most traditional form of peer review, where editors of a journal have outside reviewers, usually with their identities screened from the authors, evaluate the quality and originality of the work, confirming that the methodologies presented appear legitimate and that the conclusions reached are reasonable based on what’s described.  Given that those goals line up nicely with the goals of a Daubert analysis, it is sensible indeed for a court to look at that as a potential indicator of reliability − indeed, that’s why peer review is a factor in the first place.

But even if a proffered expert testifies to having followed a methodology that matches something in a peer-reviewed publication, it is often worth at least a few deposition questions about the review process and a line in your subpoena duces tecum requesting copies of any materials the author has received relating to the review, or an attempt at some third-party discovery on the journals in question − though some courts may limit or refuse that discovery.  See, e.g., In re Bextra & Celebrex Mktg. Sales Practices & Prod. Liab. Litig., 2008 WL 859207 (D. Mass. March 31, 2008) (granting protective order for non-party medical journal publisher, expressing concerns about a chilling effect).  The propriety of allowing such discovery is beyond the scope of this article, but I addressed it in more detail in The Overlapping Magisteria of Law and Science: When Litigation and Science Collide, 85 Neb. L. Rev. 643 (2007).

If you get peer review notes, it’s possible you’ll find that a reviewer recommended the removal of a conclusion that the expert is now presenting, or that the reviewer warned against a particular inference from what is in the article.  Making it even easier, some journals, traditional and, more often, “open access,” are now posting their reviewers’ comments online.  Even if you do not find anything relevant, most experts will readily concede that peer review reflects at most an “approval” of the overall approach and is not a guarantee of correctness as to conclusions.  And sometimes you’ll be able to establish that the study in question was based on flawed data or that the work done for litigation did not, in fact, use the same methodology as that in the publication.  See, e.g., In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig., ___ F. Supp. 3d ___, 2018 WL 5276431, at *11-13, *28, *34, *37-38, *50-51 (S.D.N.Y. Oct. 24, 2018) (rejecting expert’s reliance on “repudiated” open access journal article by author that did not disclose retention as a plaintiff’s litigation expert); In re Viagra Prods. Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009) (reversing an initial denial of defendants’ Daubert motion after learning of flaws in underlying data and processing, noting that “[p]eer review and publication mean little if a study is not based on accurate underlying data.”); Palazzolo v. Hoffman La Roche, Inc., No. A-3789-07T3, 2010 WL 363834, at *5 (N.J. Super. App. Div. Feb. 3, 2010) (finding no abuse of discretion in excluding an expert’s conclusion based on the finding that the expert did not in fact use the methodology he claimed to have used in the underlying peer-reviewed study).

Sometimes, even in a more traditional context, the peer review that was performed was not what was likely pictured by the Daubert court, particularly when the work at issue is outside the so-called “hard sciences.”  In a publicized example, the review of a history-oriented book about the lead and vinyl chloride industries, authored by frequent plaintiffs’ experts and published by the University of California, involved reviewers known to − and in some cases recommended by − at least one of the authors.  See 85 Neb. L. Rev. at 660-63 (describing this situation; original book website was removed).  Whether or not that review was adequate for the academic purpose, it was materially different from, say, the review of a double-blind clinical trial, and the facts surrounding it seem plainly relevant to how much weight a court should give it under Rule 702 and Daubert.  Without that discovery, the court may well not have learned about what “peer review” meant in that context.

Consider also the scenario where an expert says that their methodology has gone through peer review but the article has not yet been published.  Again, it may be worth pursuing more details, especially if the expert seems likely to cite to that review in defending their position.  If it has not yet been accepted for publication, consider requesting a copy of the comments the expert received from the reviewers. If those comments are provided, they may be helpful; if their production is refused, the fact of that review should be rejected as a basis for admissibility.

What To Watch Out For

Fundamentally, the important thing is to look through your and your adversaries’ experts’ C.V.s with care, especially as to articles that are directly on point with the issue you’re addressing.  It is not enough to think about what the articles say, and it also is not enough to think to yourself, “Well, that sounds like a legitimate journal.”  Look at the publisher’s site; look for hints in the article itself; and do some searches.  Ask a few questions of the expert about author fees and what the peer review entailed, and throw in a document request to see if there is something worth exploring further.  And if you are dealing with what you think is a predatory journal, be ready to teach a judge about what that means; as of this writing, no court has referenced “predatory journals” in a reported Daubert decision.

This post comes from the Cozen O’Connor side of the blog only. 

 

Last year, we favorably cited Canary v. Medtronic, Inc., 2017 WL 1382298 (E.D. Mich. April 18, 2017), on two occasions, once to highlight its use of TwIqbal at the motion to dismiss stage and again as a part of our preemption scorecard.  Canary was a winner.

That decision, however, dismissed only plaintiff’s product liability claims, letting her fraud claim move forward. Last month, the court issued a summary judgment decision on that fraud claim. Canary is not such a winner anymore.

Backing up a little bit, the product was Medtronic’s PrimeAdvanced spinal cord stimulator. It was implanted in plaintiff to address her neck and back pain. Canary v. Medtronic, Inc., 2018 U.S. Dist. LEXIS 192794, at *2 (E.D. Mich. Nov. 13, 2018). Yet, after the surgery, plaintiff complained that she was experiencing hives all over her body, abdominal pain and bowel inflammation. Id. at *3-4. Almost a month later, she had the device removed. She claimed that her problems stopped. Id. at *4.

During discovery, however, plaintiff produced no causation opinion from an expert—to be clear, no expert on either general or specific causation. Instead, she relied exclusively on the testimony of her treating doctors. Their testimony, however, was only marginally helpful to her, as it was laced with words like “possible” and “plausible” and missing any declaration that the opinions were reached to a degree of medical certainty or even probability. Yet the court allowed her fraud claim to survive summary judgment.

In particular, plaintiff’s internal medicine doctor testified that the device was a possible cause, one of the top possibilities, and that allergy and skin testing was needed.  But he offered no opinion to a reasonable degree of medical certainty:

Dr. Thammineni, the internal medicine doctor who treated Plaintiff when she was admitted to the intensive care unit on May 28, 2013, explained during his deposition that while he stated that Plaintiff had “contact dermatitis secondary to spinal cord stimulator” in his notes, he recommended that she get allergy and skin testing to ascertain exactly what caused the reaction. When Dr. Thammineni was asked if the spinal cord stimulator was more of a possibility than other causes, he responded with a “yes.” And when asked “[c]an you say to a reasonable degree of medical certainty that the hives were caused by the implantation of the spinal cord stimulator on May 16th and not some other source,” he stated that the stimulator was “one of the top possibilities.”

Id. at *6.

Plaintiff’s dermatologist testified that something at the time of the surgery triggered the hives, that the timing of the implant and removal created an “association,” but that there was no test to determine the cause of the hives:

There was something around the time of the surgery that triggered a hive-like reaction. The device was used, but other things were used at the time of the procedure, such as prep and — however, when the device was removed, her hives and itching in that area went away, so there’s an association with that. Unfortunately, there’s no test for [these types of] reactions in this situation.

Id. at *6-7. She did not testify on causation to a reasonable degree of medical certainty.

Finally, plaintiff’s allergist stated that it was “plausible” that plaintiff had an allergic reaction to the device, and later said that she “believed” the device caused the hives:

Q: You stated it was plausible that she had an allergic reaction to the device; is that correct?

A: I think everything is plausible, but, yes.

Q: Okay. The fact that she reported she had hives after the device, and her hives stopped after the device was removed, would that further support the conclusion that it was plausible that the device caused a reaction?

[Defense counsel]: Object to the form.

The witness: I would — yes.

[Plaintiff’s counsel]: And earlier you testified regarding the correlation and the difference between general hives versus local hives.

A: Yes.

Q: Do you believe the stimulator caused the local hives she reported to you?

A: Yes.

[Defense counsel]: Object to the form.

[Plaintiff’s counsel]: That was a yes?

A: Yes.

Id. at *7. Again, there is no indication that the allergist testified that her “belie[f]” was held to a reasonable degree of medical certainty, or to any degree of certainty or even probability.

The court, discussing two cases involving less complex science, one of which did actually involve a plaintiff’s general causation expert, held that this treater testimony, along with the general sequence of events, was enough.  In fact, while Medtronic had two experts opine to a reasonable degree of medical certainty that the device did not cause the allergic reaction—the only two causation experts in the case—the court noted that these experts “did not eliminate the possibility that it could have done so.”  Id. at *15.

If the analysis comes down to this, it’s not clear that a defendant could ever win summary judgment. Almost any causation issue would be trial ready, despite what could be a lack of any real science behind it or an expert to opine on that science to a reasonable degree of medical certainty.

The Canary doesn’t look so good anymore. Time to get out of the coal mine.

A couple of weeks ago, we reported on the terrific Daubert decision in the Mirena IIH MDL in the Southern District of New York, In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig., 2018 WL 5276431 (S.D.N.Y. Oct. 24, 2018), in which the court granted the defendants’ motions to exclude all seven of the plaintiffs’ remaining general causation experts.  In that post, we explained that all seven opined that the defendants’ intrauterine contraceptive device caused idiopathic intracranial hypertension (“IIH”), a rare and potentially serious condition marked by increased cerebrospinal pressure in the skull.  Only one study had ever found a causal link between Mirena and IIH.  That study was by Dr. Mahyar Etminan, who was on the plaintiffs’ payroll at the time he published the study, a fact he failed to disclose.  After a prominent scientist in the field attacked the methodology of the Etminan study because it failed to control for age and gender, Dr. Etminan repudiated much of the study’s analysis and withdrew as an expert.

Of the seven remaining experts, four drew their general causation conclusions largely by drawing on existing sources, including varying combinations of case reports regarding Mirena, case reports regarding other contraceptive products containing LNG, another product’s warning label, the repudiated portions of the Etminan study, and another study (the “Valenzuela study”) that reported a statistically significant association between LNG-containing devices and IIH but which, the authors emphasized, found only a correlation, not a causal link.  The other three experts were “mechanism” experts, each of whom postulated a supposed mechanism by which the defendants’ product could cause IIH.  In our last post, we reported on the court’s decisions regarding two of the experts in the first group, but we promised to tell you more.  Today, we focus on one more expert in that group as well as highlights of the court’s decisions about the “mechanism” experts.

That expert, the plaintiffs’ ophthalmology expert, was the only one of the plaintiffs’ experts who had written about the relationship between levonorgestrel (“LNG”), the synthetic hormone in the defendants’ contraceptive device, and IIH, and the only one who had written an expert report on the supposed causal link before the MDL was formed.  In a 2015 book about drug-induced ocular side effects, the expert had stated that he believed there was a “possible,” but not a “probable,” association between LNG and IIH.  In the book, he explained that he assessed causation as possible “when there is a temporal relationship, but the association could also be explained by concurrent disease or other drugs or chemicals,” and dechallenge data (information about what happens when the treatment is withdrawn) is “lacking or unclear.”  Mirena, 2018 WL 5276431 at *47 (citation omitted).  In contrast, causation is “probable” or “likely” when “there is a temporal relationship unlikely to be attributed to a concurrent disease or drugs” and there is positive dechallenge data.  Id.  The book’s conclusion that there was a “possible” causal association between LNG and IIH was based largely on case reports involving the defendants’ product and other LNG-containing products and upon two publications discussing several of those case reports.  The book did not consider the Etminan and Valenzuela studies, which had not yet been published.

Unlike the other experts in the first group, the ophthalmologist did not claim to have performed a Bradford Hill analysis.  Instead, in support of his causation conclusion, his report cited case reports, a discussion of another LNG-containing product, and both the Etminan and Valenzuela studies.  In addition, the report included a vague discussion of broad propositions from which he suggested that there was a biological mechanism for LNG to cause IIH.  In his deposition, however, he repudiated this “mechanism” opinion, testifying that the mechanism was “unknown” and that he was not being offered as a “mechanism” expert.

Analyzing the expert’s opinion, the court stated, “[The] proposed testimony amounts to a blend of disparate items that [the expert] contends together show that Mirena causes IIH. . . . [The expert] does not purport to use the flexible Bradford Hill methodology to guide his analysis.”  Instead, he used a “non-replicable mode of analysis” consisting of “listing factors that he argues support his conclusion.”  Id. at *50.  The court held that the expert’s proposed testimony “fails to meet any of the Daubert reliability factors.”  The expert’s causation conclusion “has not been tested; it has not been subject to peer review; it has no known error rate and there are no standards controlling its operation; and it has not been generally accepted by the scientific community.”  Moreover, the expert’s “handling of virtually every one of the individual items on which he relies” was “methodologically suspect.”  Id.  This included overlooking the fatal flaws in the Etminan study and the expert’s failure “to engage with consequential evidence contrary to his outcome.”  Id. at *51-52.  Finally, though the expert “[made] his mechanism opinion an important component of his expert report,” he “repeatedly distanced himself” from that opinion at his deposition, “repudiat[ing] any mechanism opinion as beyond his expertise.”  The court held, “The removal of that pillar alone is fatal to [the expert’s] weight of the evidence analysis;” moreover, the “mechanism” opinion would have been inadmissible in any event because the expert was not qualified to offer it.  Id. at *52.  And so, holding that the expert’s opinions failed to satisfy Daubert’s reliability standards, the court excluded the opinions in their entirety.

The plaintiffs’ “mechanism” experts fared no better.  One, an OB/GYN and the founder of a clinical and epidemiological research organization devoted to reproductive health issues, “embrace[d] the ‘androgen theory’ by which Mirena purportedly causes IIH—specifically that androgens cause IIH and that because LNG, while a progestin, has androgenic effects, LNG in turn may cause IIH.”  Id. at *53.  The court held, “As a threshold consideration, [the] theory that Mirena causes IIH through androgenic side effects does not satisfy any of the four Daubert reliability factors.”  Id. at *58.  But beyond the flaws in the opinion’s premises, the court “discern[ed] a broader overarching lapse of methodology” affecting the mechanism opinion: the expert’s report did not address the threshold issue of “what IIH is and how this condition comes about.”  Id.  In addition, the court criticized (in extensive and thorough detail that is beyond the capacity of these pages) the expert’s “scant attention to the pharmacokinetic process that must underlie the causal sequence that he postulates” and his “speculative leaps in support of his two central premises:  that androgens can cause IIH, and that LNG, a progestin with androgen receptor affinity, can cause IIH.”  Id.  The court concluded,

“In the end, while [the expert’s] credentials are sterling, the methodology underlying his opinion in this case is not.  He relies on supposition and attempts to link disconnected studies by others.   And he uses some of his source material for more than it can fairly support.  The result is a hypothesis that may or may not bear up when and if it is ultimately tested, not a reliable expert opinion admissible under the governing standards.  The Court therefore must exclude his testimony.”  Id. at *62.

The court similarly dispatched the plaintiffs’ other two mechanism experts because their methodologies were unsound and their theories failed to satisfy Daubert’s reliability standards.  Both discussions are lengthy and we cannot do justice to them here, but we again recommend that you read the whole opinion when you have time to appreciate its rigor and its unflinching confrontation and dissection of the technicalities underlying all of the experts’ opinions.  Doing justice to Daubert analysis of opinions like these is a monumental task.   All too frequently, and perhaps understandably, courts decline even to try, counting on juries (or, more likely, settlements) to do their work for them.   The Mirena court displayed rare dedication to the principle that the system can work only if courts properly discharge their duties as “gatekeepers.”  We applaud this decision, urge you to read it and cite it, and we hope that more courts will accept similar challenges.  We will keep you posted.

This will be the third consecutive week for us to discuss a favorable expert ruling out of the Cook Medical IVC filter litigation in the Southern District of Indiana. By this point, we really do expect some sort of remuneration from the Indiana Chamber of Commerce – maybe tickets for the surprisingly impressive Colts, or a Reuben sandwich from fabulous Shapiro’s delicatessen. (To our palate, the Shapiro’s outpost at the Indy airport offers better corned beef than any you can find at LaGuardia, JFK, Newark, O’Hare, or even our hometown PHL. How weird is that?) Today’s Daubert ruling is In re Cook Med., Inc., IVC Filters Mktg., Sales Practices & Prod. Liab. Litig., 2018 U.S. Dist. LEXIS 196637, 2018 WL 6047018 (S.D. Ind. Nov. 19, 2018), and, as with earlier orders from this court, it is clear, concise, and eminently sensible. The Daubert motion at issue here was filed by the plaintiff, who wanted to shut down a defense psychiatrist from rendering opinions that the plaintiff’s emotional injuries were not caused by the IVC filter. We won’t make a secret of the fact that we think the plaintiff’s Daubert arguments were uncommonly silly. For example, the plaintiff contended that the defense expert’s opinions were not offered to a reasonable degree of certainty when, in fact … they were. Still, there are commonly silly courts out there that might have given some credence to some of the plaintiff’s arguments. Thankfully, though, there is nothing silly about the In re Cook court.

The plaintiff attempted to preclude the defense expert’s differential diagnosis because it listed several alternative causes, not just one, for the plaintiff’s injuries. That position is a misfire in several respects. To begin with, the etiology of the plaintiff’s injuries was multifactorial. If reality puts more than one cause in play, why must a defense expert be forced to pick one? Further, the defense, of course, does not bear the burden of showing any cause. That burden is on the plaintiff. It is powerful stuff for a defense expert to show that there are many other plausible causes out there for the plaintiff’s injury, and those other causes exist independently of the defendant’s product. Indeed, the defense expert does not even need to exclude the product as a possible cause; it is enough if the expert can show that there is no reasonable way to put the finger on the product as opposed to one of the other possible causes. The defendant has absolutely no obligation to alight upon only one alternative cause. At a minimum, the multiplicity of alternative causes is a multiplicity of confounders that undermines the plaintiff’s false certainty.

The plaintiff also objected to the defense expert’s differential diagnosis because that expert had never physically examined the plaintiff. First, the defense expert had done plenty of work to substantiate his opinions. He had reviewed the plaintiff’s medical records, the plaintiff’s two videotaped depositions, the depositions of five treating physicians, an independent medical examination, and the reports of the plaintiff’s experts. That would be enough in pretty much any court to get to the jury. If the plaintiff wanted to make something out of the lack of a physical exam, that is fine fodder for cross-examination. But there is an additional wrinkle here. The court tells us that the defense expert “was not given the opportunity for an appropriate, direct, clinical examination” of the plaintiff. Apparently, there is a dispute between the parties as to why, exactly, that was so. Reading between the lines, we suspect that the plaintiff objected to the exam, or at least to some aspect of the exam. Obviously, then, the plaintiff cannot be heard to object to the absence of a medical examination that the plaintiff refused. Call it estoppel or fair’s-fair, or whatever. (At this point, we cannot resist raising one of our chief gripes with other defense lawyers. In almost every mass tort litigation we’ve seen, there will be some defense lawyers who oppose having their defense experts perform physical examinations on the plaintiffs. Why? They are afraid of what the expert might find. Huh? Do you believe in your case or not? Plus, a good expert will be able to work with whatever the facts are. More information is better than less information. Our job is to deal with the facts, not alter or ignore them. Finally, as the In re Cook case demonstrates, you are simply in a much better position if you at least tried to perform the examination. Plaintiff lawyers often oppose these exams. Make them pay for that opposition.
Anyway, the next time we hear from a plaintiff lawyer that this blog does nothing but bash plaintiff lawyers, we’ll point them to this parenthetical.)

Finally, the plaintiff tendered a back-up position that, if the defense expert would be permitted to opine on alternative causes, then that expert should not mention opioid use disorder. Nice try. The court observed that the defense expert had found “ample evidence in the record” to suggest that the plaintiff met the criteria for opioid use disorder, and such condition could be a cause of the mental health injuries of which the plaintiff complained. As much as the plaintiff was trying to keep out evidence of alternative causes, this was the alternative cause that the plaintiff most feared. And with good reason. It is, no doubt, powerful evidence. It reminds us of the old prosecutor’s joke about criminal defense lawyers objecting “Prejudicial, your Honor – tends to show guilt.” The In re Cook court held that the evidence of opioid abuse was part of the plaintiff’s medical record and was “essential” to the defense expert’s differential diagnosis. It was, therefore, admissible.

Last week we praised the S.D. Indiana court’s Daubert decision in the Cook IVC filters litigation. Apparently the court is an expert on experts, because it came out with another sensible decision on experts, this time on the use of treating physicians to offer causation opinions. In re Cook Medical IVC Filters Mktg., Sales Practices, & Prod. Liab. Litigation, 2018 WL 5926510 (S.D. Ind. Nov. 13, 2018).

We all know how the expert game works. Each side hires experts (sometimes through an ever-so-helpful expert service), spoon-feeds them the friendliest supporting materials, and plops these walking jukeboxes down in front of a jury to play a catchy tune. Expert testimony, done properly, ends up being a well-rehearsed preview of the closing argument. These experts are usually experienced story-tellers and can parry with the most cunning cross-examiners. The problem is that jurors have a pretty good sense of how this process works. And if they had any doubts that the expert witness is an advocate, such doubts are erased when they learn the expert is making more money that day in court than any juror makes in a month. Thus, it is not surprising to hear from jurors after a case is over that they discounted the expert testimony. (Yes, we acknowledge that the discount might not be as much as the jurors say. It is kind of like how everyone claims that advertising does not affect them.)

Enter the treating physician. Both sides try to enlist the treating physicians as oath-helpers. If a party does manage to get the treater on its side, you can bet that will be a huge point in the closing. E.g., “Throw out all the experts if you want, but you cannot throw out the treater’s opinion. Nobody paid the treater to testify. The only side the treater is on is the patient’s health.” Etc. That is why the deposition of the treating physician is one of the most important moments in a case. That is why we defense hacks hate being in jurisdictions where the defense is forbidden from contacting treaters, but plaintiff lawyers are free to meet with them, shower them with bad company documents, woodshed them, perhaps dangle threats or promises, and, in general, do their best to line up the treaters with their paid experts on crucial issues such as warnings, injuries, and, most of all, causation. The treater’s testimony seems credible precisely because the treater is not retained by either party.

Most of you are familiar with the concept of non-retained experts. But there is probably a non-trivial subset of lawyers unfamiliar with the 2010 amendment to Fed. R. Civ. P. 26(a)(2). That amendment added requirements for disclosing non-retained experts. More specifically, the 2010 amendment added Rule 26(a)(2)(C), providing that a party designating a non-retained expert must serve a summary disclosure stating the subject matter and a summary of the facts and opinions to which the witness will testify. Rule 26(a)(2)(C), eight years after its adoption, is still a trap for the unwary. There is a 50% chance that a plaintiff lawyer naming non-retained experts will fail to supply the summary disclosure required by Rule 26(a)(2)(C).

But that is not what happened in In re Cook. The plaintiff did, in fact, make a summary disclosure. That summary disclosure proffered the treating physicians’ opinions that several possible alternative causes did not play a role in the plaintiff’s injury. Those opinions would have been, of course, quite useful for the plaintiff. If the jury is persuaded to rule out alternate causes, the defendant’s product would look more and more like the culprit. So what’s the problem? The plaintiff followed all the rules, right? Wrong. There is an even more fundamental rule, antedating Rule 26(a)(2)(C): non-retained expert treating physicians are limited to opinions formed during the course of providing treatment. The In re Cook court cited a Seventh Circuit opinion, Meyers v. Nat’l R.R. Passenger Corp., 619 F.3d 729, 734-35 (7th Cir. 2010). You won’t have to look too hard to find a similar opinion in whatever jurisdiction houses your litigation. The plaintiff in Cook argued that the treaters were basing their causation opinions on “their training, expertise, and observations during treatment,” but that last bit rang untrue. As the Cook court put it, “The first time these opinions were introduced was during their deposition testimony for purposes of this litigation. It is, therefore, inadmissible.”

Perhaps the plaintiffs could have gotten around this problem by submitting the treaters’ opinions pursuant to the standard expert disclosure process in Rule 26(a)(2)(B) rather than the summary disclosure in 26(a)(2)(C), but that would mean that the plaintiffs would have had to retain the treaters. That probably also means that those treaters would be paid. Bye-bye neutrality/purity. Plus, most plaintiff lawyers are cheap. For that reason, and for the S.D. Indiana’s persistent reasonableness, we are thankful.

********************************************************

We are also thankful for your attention, comments, and suggestions. We are thankful for another year of smart, supportive clients. We are thankful for judges who work hard, read all the papers, do their utmost to be fair, and, consequently, tell us that we won. Mostly, though, we are thankful for the mashed potatoes on our plate tomorrow. Sure, we adore the Drug and Device Law Family gathered around the table, but there is zero chance that the potatoes will start a political argument. Happy Holiday to you all.

A couple of years ago we penned a paean to Indiana and its cultural and legal triumphs. Now that another chunk of our family has decided to relocate to that happy state, our thoughts returned to Indiana’s many virtues. Sure, there’s the Indy 500, the fabulous covered bridges of Parke County, the Benjamin Harrison home, and a couple of our favorite in-house lawyers. And now there’s In re Cook Medical, Inc., IVC Filters Mktng., Sales Practices & Prod. Liab. Lit., 2018 U.S. Dist. LEXIS 190177 (S.D. Ind. Nov. 7, 2018).

Maybe plaintiff-files-Daubert-motion isn’t quite man-bites-dog, but it’s still pretty rare. Plaintiffs are usually all about getting to the jury, no matter how raggedy the case. In fact, the more raggedy, the better. Consequently, plaintiffs devote considerably more time fending off Daubert challenges than mounting their own. Maybe there’s a reason for that. Maybe plaintiffs tend to put up hack experts, while defendants put up good ones. Maybe we’re biased. Okay, definitely we’re biased. But take a look at what happened in In re Cook.

The defendants in Cook offered the testimony of a mechanical and materials science engineer who opined that the IVC filter design was not defective and that its benefits outweighed its risks. The expert was well qualified. It’s not as if it was a close call. The defense expert had the appropriate degrees from Cal Berkeley. He also had been a general manager at a company that made IVC filters. Federal Rule of Evidence 702 requires that an expert be qualified by knowledge, skill, experience, training, or education. Note the “or.” This expert had it all. Not only was this expert qualified, he had done the work. He looked at MAUDE adverse event data, peer-reviewed literature, the company’s testing records, the design and engineering records, the opinions of other experts in the case, and fact depositions. That is, the defense expert in Cook did far more homework than virtually any plaintiff design expert we have encountered. We’re not sure we’ve ever deposed a plaintiff design expert who has actually read the design history file. Indeed, we’re pretty sure that most plaintiff experts do not know what a design history file is.

The plaintiff’s main beef with the defense design expert in In re Cook concerns the opinions regarding the device’s benefits and risks. The main benefit of an IVC filter is prevention of pulmonary embolisms. How can a mere engineer opine on medical issues? (Dear Engineering Nerds: Please do not write angry comments; we are using the “mere” word sarcastically. We have endless respect for engineers. We utter a prayer of thanks to them every time we drive across the Benjamin Franklin Bridge. At parties, we always get next to the engineers in case a game of Jenga breaks out.) The court has no problem answering this question: “a biomedical engineer … can testify about the benefits and ability of the Celect IVC filter to catch blood clots from a biomedical design and engineering perspective.” The plaintiffs were asking the wrong question. No surprise there.

Then the plaintiffs raised other wrong questions: (1) Why doesn’t the expert quantify the number of filters that actually prevented pulmonary embolisms? (2) Why does the engineer rely on adverse event data without knowing what percent of adverse events are reported? (3) How dare the expert rely on the defendant’s own studies? The Cook court is untroubled by these wrong questions, and supplies clear, easy, right answers: (1) Quantification goes to weight, not admissibility. (2) No one knows the true adverse event reporting rate, so it’s hard to fault the expert. Also, and again, this criticism might go to weight, but not admissibility. (3) The company’s data might not be perfect, but it looks like valid evidence. The data’s shortcomings constitute yet another issue of weight, not admissibility. Finally, the expert relied on lots of other data besides the company’s. In short, tell it to the jury.

We’re still in favor of federal judges acting as stout gate-keepers when it comes to expert testimony. Junk science should be excluded. But when an expert is so well qualified and so well informed as the one in Cook, and when that expert applies reliable methods, there’s no reason to exclude anything. Rather, it’s time for Hoosier hospitality.

Kudos to the multifirm defense counsel team that brought home the decision on which we report today, a victory that may well end up on our “best” list for 2018.

In April 2017, we posted about Dr. Mahyar Etminan, then an expert in the Mirena MDL pending in the Southern District of New York.  Plaintiffs in the MDL claimed that the defendant’s product, an intrauterine contraceptive device containing the synthetic hormone levonorgestrel (“LNG”), caused them to develop idiopathic intracranial hypertension (“IIH”), also known as pseudotumor cerebri, a rare and potentially serious condition marked by increased cerebrospinal fluid pressure in the skull.  In 2015, Etminan had published a study designed to assess the risk of IIH.  Although the study did not definitively conclude that defendant’s product caused IIH, Etminan concluded that one of its two analyses, a “disproportionality analysis” of adverse events in the FDA’s FAERS database, identified an increased risk of IIH associated with LNG and that this result was statistically significant.  He also concluded that the second analysis, a retrospective cohort study, did not find an increased risk, but that this result was not statistically significant.  No other study has ever established a causal link between LNG and IIH.

Subsequently, a prominent scientist in the field attacked the methodology of Etminan’s disproportionality analysis because the study failed to control for age and gender, resulting in erroneous and misleading conclusions.  At the same time, it was revealed that Dr. Etminan was on the plaintiffs’ payroll at the time that he published his study, a conflict of interest he had not disclosed.  Ultimately, after defendants served Dr. Etminan with a notice of deposition in one of the cases in the MDL, Dr. Etminan repudiated much of his study’s analysis and withdrew as an expert.   When we reported this, we told you to “stay tuned,” commenting that plaintiffs’ other experts, all of whom relied on Etminan’s results, had not withdrawn.

The other shoe dropped a couple of weeks ago.  In In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig., 2018 WL 5276431 (S.D.N.Y. Oct. 24, 2018), the court considered the defendants’ Daubert motions to exclude the plaintiffs’ seven remaining general causation experts.  And it granted them all.  The opinion is very long – seventy-two pages on Westlaw – and we commend it to your weekend reading, as we can’t begin to do justice to the court’s detailed analysis of each expert’s methodology.  But we wanted to bring this terrific decision to your attention and to focus on its most important takeaways.

The court began its analysis by emphasizing, “In the face of [the] historical record, with no medical organization or regulator or peer-reviewed scientific literature having found that Mirena or any contraceptive product using LNG is a cause of IIH, an expert witness who would so opine . . . necessarily would break new ground in this litigation.”  Mirena, 2018 WL 5276431 at *20.  All seven of the plaintiffs’ general causation experts “so opined.”  Four of these experts “arrived at this result largely by drawing upon existing sources.”  These included varying combinations of case reports regarding Mirena, case reports regarding other contraceptive products containing LNG, another product’s warning label, the repudiated portions of the Etminan study, and another study (the “Valenzuela study”) that reported a statistically significant association between LNG-containing devices and IIH but which, the authors emphasized, found only a correlation, not a causal link.  The remaining three experts were “mechanism” experts, each of whom postulated a supposed mechanism by which the defendant’s product could cause IIH.   In this post, we will focus on two of the experts in the first group, which included an epidemiologist, a toxicologist, an OB/GYN, and an ophthalmologist, but we urge you to read the court’s dissection of the second group as well.

The plaintiffs’ epidemiology expert was a professor of biostatistics with experience in conducting and analyzing large clinical trials.  He claimed that the nine Bradford Hill criteria supported his causation conclusion.  As many of you know, the criteria are “metrics that epidemiologists use to distinguish a causal connection from a mere association.”  Id. at *23 (citation omitted).  They are:  statistical association (also known as “strength of association”), temporality, biological plausibility, coherence, dose-response effect, consistency, analogy, experimental evidence, and specificity.

The court first held that the epidemiologist’s opinion did not satisfy any of Daubert’s four reliability factors, because the expert “has not tested his theory.  He has not subjected it to peer review or had it published.   He has not identified an error rate for his application of the nine Bradford Hill factors. . . . And [his theory] has not been generally accepted by the scientific community.”  Id. at *27 (internal punctuation and citation omitted).  With respect to this last, the court again emphasized, “Outside of this litigation, there is a complete absence of scholarship opining that Mirena, or, for that matter, any LNG-based contraceptive, is a cause of IIH.”   Id.  As such, the court undertook to “take a hard look” at the expert’s methodology, scrutiny that was “particularly warranted” because:

 [I]t is imperative that experts who apply multi-criteria methodologies such as Bradford Hill . . . rigorously explain how they have weighted the criteria.  Otherwise, such methodologies are virtually standardless and their applications to a particular problem can prove unacceptably manipulable.  Rather than advancing the search for truth, these flexible methodologies may serve as vehicles to support a desired conclusion.

Id. (citations omitted).  Citing four examples of how the expert’s assessment of individual Bradford Hill factors “depart[ed] repeatedly from reliable methodology,” the court held, “Measured against these standards, [the epidemiologist’s] report falls short.”  Id. at *28-29.

First, the expert used the “analogy” factor, basing his causation conclusion in part on an analogy to another contraceptive product.  But, the court explained, this analogy was based on an “unestablished hypothesis” about the other contraceptive product, for which a causal relationship with IIH had never been substantiated.  Id. at *29.  With regard to the “specificity” factor, the court explained that the factor “inquires into the number of causes of a disease,” id., with the difficulty of demonstrating a causal association escalating along with the number of possible alternative causes.   “In finding the specificity factor satisfied,” the expert “devote[d] two sentences to his discussion.”  Id.   He relied on a conclusory statement to the effect that alternative causes could be ruled out.   And he relied on the Valenzuela study, which had actually disclaimed a finding of causation.   The court explained that the “consistency” factor required “similar findings generated by several epidemiological studies involving various investigators” reaching the same conclusion.  Id. at *30.    Again, the epidemiologist claimed that the Valenzuela study satisfied this criterion because it considered two separate populations.  But, as the court stated, both studies were conducted by the same investigators, and neither found a causal relationship.  Finally, as to the biological plausibility factor, the epidemiologist postulated a biological mechanism by which he said LNG could cause IIH.  The court stated, “ . . . [B]y any measure, [the expert] is unqualified to give an expert opinion as to a biological mechanism of causation of IIH.”   Id. at *30.   This lack of qualifications compromised the expert’s assessment of the biological plausibility factor as well as of related factors.   The court concluded,

Each of [the expert’s] departures from settled and rigorous methodology favors the same outcome.  Each enables him to find that the Bradford Hill factor at issue support[s] concluding that Mirena is a cause of IIH. . . . [His] unidirectional misapplication of a series of Bradford Hill factors is concerning – it is a red flag.  Rather than suggesting a scholar’s considered neutral engagement with the general causation question at hand, it suggests motivated, result-driven reasoning. . . . Methodology aimed at achieving one result is unreliable.

Id. (internal punctuation and citation omitted).  The court went on to further eviscerate the epidemiologist’s methodology, criticizing his reliance on the Valenzuela study, his nearly-exclusive use of case reports to support three of nine Bradford Hill factors, his failure to consider evidence that undercut his opinions, and his cherry-picking of case reports that supported his desired conclusion.  The court concluded that the expert’s testimony was “compromised by a range of serious methodological flaws” and failed to satisfy Daubert’s reliability standard.

The court voiced similar criticisms of the methodology of the plaintiffs’ toxicology expert.  Like the epidemiologist, the toxicologist failed to meet any of the four Daubert reliability standards.  In applying the Bradford Hill factors, she failed to identify support for her conclusions, distorted or disregarded evidence that undercut her opinions, failed to articulate a plausible biological mechanism to support her causation conclusion, and drew an inapposite analogy to another contraceptive product.  And her opinions were plagued by additional methodological flaws.  She relied on the portion of the Etminan study that was discredited and that Etminan himself repudiated.  And she cited the Valenzuela study as her sole support for finding several Bradford Hill criteria satisfied without acknowledging the study’s methodological limitations and failure to find causation.  The court concluded, “[The toxicologist’s] proposed testimony is beset by methodological deficiencies.  It falls far short of satisfying Daubert’s standard of reliability.  Her testimony, too, must be excluded.”  Id. at *40.

And so it went with the court’s discussion of the rest of the plaintiffs’ experts.   The opinion does the best job we’ve ever seen of demonstrating how an expert can attempt to create the illusion of reliability by paying lip service to the Bradford Hill criteria and how those criteria can be manipulated to mask wholly result-driven ipse dixit opinions plagued by fatal methodological flaws.   In this case, a committed and rigorous judge stemmed the tide.  But we all know that this is not always the case.

We love this decision.  There is a lot more to say about it, and we look forward to telling you more in an upcoming post.

We’ve written about a lot of Risperdal summary judgment wins. No medical causation, no warnings causation (learned intermediaries aware of risks), no alternative design, no fraud. So, when we see an opinion that overturns a plaintiff’s verdict on the grounds of (1) impossibility preemption; (2) clear evidence preemption; and (3) no evidence of general causation, we can’t help but wonder how it got to trial in the first place. So we decided to do a little digging. From our review of the case, it appears these issues were all raised at the summary judgment stage but denied. What changed between summary judgment and the post-trial rulings? Not the facts that support these arguments. The regulatory history hasn’t changed. The experts’ opinions haven’t changed. Yet, defendant had to go through an amateur-hour trial (we’ll tell you more about that later) and then wait over a year for these post-trial rulings granting judgment as a matter of law. Sure, better a late win than no win at all – but it certainly feels like this could have been avoided.

The case is Byrd v. Janssen Pharm, Inc., No. 1:14-cv-0820, slip op. (N.D.N.Y. Sep. 21, 2018) and, as mentioned above, involved Risperdal, an antipsychotic drug prescribed to treat serious mental conditions – schizophrenia, manic depression, and autism. Plaintiff alleged that his use of Risperdal caused him to develop abnormal breast tissue growth. The two claims that went to trial were negligent design, manufacturing, and warning defect, and strict liability design, warning, and misrepresentation. Id. at 3.

The opinion methodically sets out both defendant’s arguments and plaintiff’s responses, but we’re going to jump right to the conclusions. First up was preemption. Standard plaintiff argument: defendant should have unilaterally changed its warning to include gynecomastia and was able to do so via the Changes Being Effected (“CBE”) regulations. Standard impossibility preemption defense: federal law prohibited defendant from changing the FDA-approved labeling and/or there is “clear evidence” that the FDA would have rejected the proposed labeling change. Id. at 12. The court was persuaded as to both impossibility and clear evidence. Defendant presented “clear evidence” that the FDA had rejected its request to add safety and dosing information for pediatric use of Risperdal. Id.

But the court spent most of its analysis on whether a CBE label change even was permissible under federal law. A CBE labeling change can only be made on the basis of new information concerning a serious risk. “[H]ere, the relationship between antipsychotics and [abnormal breast development] was not new information because it had been discussed in basic psychiatry textbooks for decades, and the FDA does not consider gynecomastia a serious adverse event.” Id. at 9. A “serious” adverse event is defined by federal regulations to be an event that either “resulted in inpatient hospitalization or required surgical intervention to prevent inpatient hospitalization.” Id. at 15. And, both plaintiff’s and defendant’s regulatory experts agreed that gynecomastia “would not be a serious adverse event.” Id. at 16-17. Now, plaintiff’s expert was Dr. Plunkett and she was quick to voice her personal disagreement with the FDA on this point – but that’s irrelevant (both to us and to the court). Id. at 17.

The court didn’t stop there. Defendant also argued that plaintiff had failed to satisfy his burden of proof on causation. While defendant made arguments regarding both proximate and medical causation, the court focused its attention on the latter and specifically the lack of general causation evidence. Id. at 26. Starting with Dr. Plunkett, who “admitted to not being a causation expert” but opined on it anyway, the court found her opinion unsupported by the literature. Id. None of the three pieces of literature relied on by Dr. Plunkett included a control group, so at best they were evidence of an association, not a causal relationship. Dr. Plunkett’s reliance on this literature demonstrated a “disregard for the difference between an association between two things and a causal relationship between those two things.” Id. at 29; see id. at 30 (“a correlation between Risperdal and gynecomastia cannot be drawn without a control group”). The fact that these studies lacked a control group was likely not “new” information at trial, which again raises the question of why this issue is only being properly addressed post-trial.

Plaintiff’s other causation expert likewise had no support for a general causation opinion. His conclusion was that plaintiff’s gynecomastia was “secondary at least in part to prolonged use of Risperdal.” Id. at 31. But, putting aside reliance on the same literature relied on by Plunkett, the only basis plaintiff’s second expert had for his general causation opinion was his differential diagnosis. A differential diagnosis, however, “generally does not prove general causation.” Id. at 33. It assumes general causation has already been proven. Without general causation, defendant was entitled to judgment as a matter of law.

Still, the opinion continues. The remainder of the decision addressed defendant’s alternative request for a new trial based on the inappropriate conduct of plaintiff’s counsel. The court did not need to decide this issue, having already found two grounds to overturn the verdict and award judgment in defendant’s favor. Based on the description of plaintiff’s trial antics, however, we can only assume that the court wanted this opportunity to admonish plaintiff’s counsel. Defendant pointed out 23 separate instances of plaintiff’s attorney’s misconduct in front of the jury. Id. at 34. In concluding that plaintiff’s counsel’s behavior would have warranted a new trial, the court relied on:

(1) Plaintiff’s counsel’s self-deprecating tone of voice and posture when referring to his lack of professional skills and/or experience, (2) his helpless tone of voice and posture when referring to the fact that he was bullied as a child, (3) his alternating innocent and defensive tones of voice in response to an admonishment by the Court, (4) the sympathetic facial expressions of the jurors following the aforementioned acts and/or accompanying comments, (5) the credulous expressions of the jurors following Plaintiff’s counsel’s acts of asserting the truth of Plaintiff’s case and/or vouching for his witnesses, and (6) the jurors’ reactions following Plaintiff’s counsel’s acts of offering his personal opinions about the evidence and/or testifying when he could not otherwise introduce evidence.

Id. at 37-38. While this behavior more than justified a new trial, one wasn’t necessary: no childish antics could overcome the facts that plaintiff had failed to prove general causation and that defendant had clear evidence to support impossibility preemption. Both of those things were true a year ago too. But better late than never.