By Michelle Yeary

It has taken us a few weeks to get around to posting on the recent string of favorable Daubert decisions coming out of the Boston Scientific Pelvic Mesh MDL.  That’s mostly because they are very long.  Very long.  But procrastination hasn’t made the opinions get any shorter, so we decided to finally dive in.  A decision that discusses opinions offered by a biomedical engineer, a surgeon, a pathologist, a chemist, a polymer science engineer, a gynecologist, and two regulatory consultants is going to be heavy and detailed.  A lot of those details are of course specific to pelvic mesh.  And while that is enormously helpful to other defendants in that litigation, for the rest of our readers we thought we’d try to distill the decisions down to some of the core rulings that are the most significant and, we think, generally helpful in going after plaintiffs’ experts.  So, if you want all the details, you can find the decisions at:  Sanchez v. Boston Scientific Corp., 2014 U.S. Dist. LEXIS 137189 (S.D. W. Va. Sept. 29, 2014); Tyree v. Boston Scientific Corp., 2014 U.S. Dist. LEXIS 148312 (S.D. W. Va. Oct. 17, 2014); and Eghnayem v. Boston Scientific Corp., 2014 U.S. Dist. LEXIS 152457 (S.D. W. Va. Oct. 27, 2014).  Since the decisions have overwhelming similarities, this post discusses Sanchez only.

Plaintiffs Can’t Dump in Company Documents via their Experts:  We see this all the time.  Plaintiffs give their experts hundreds, if not thousands, of pages of internal company documents.  The type of documents the experts would never see, never utilize in their field of study, and certainly never rely on.  Then plaintiffs try to get all these documents in at trial via that expert who purportedly is going to testify about what the company knew or didn’t know, what the company should or shouldn’t have done, or simply how bad the company is.

Well, not this time.  Plaintiffs were not allowed to use their experts to “usurp the jury’s fact-finding function by allowing an expert to testify as to a party’s state of mind or on whether a party acted reasonably.”  Sanchez, 2014 U.S. Dist. LEXIS 137189 at *9. Topics like corporate conduct, knowledge and state of mind are not appropriate subjects of expert testimony. Furthermore, an expert’s report that is nothing more than a “narrative review of corporate documents . . . riddled with improper testimony regarding [defendant’s] state of mind and legal conclusions” should be excluded.  Id. at *81.  So, for instance, testimony that the manufacturer acted “inconsistent with appropriate medical device design principles” goes to state of mind and is inadmissible.  Id. at *25.

Methodology Trumps Expertise:  An expert can have the best qualifications, but if his methodology is flawed, so are his conclusions.  Id. at *15.  That was the case with plaintiffs’ biomedical engineer (Thomas Barker).  He looked good on paper, but his testing method didn’t come close to passing Daubert’s reliability standards.  First, the tests he conducted on the mesh device should have been done in a saline bath to help replicate the in vivo environment.  Id. at *17.  Second, he only tested one or two samples of the devices – a completely unreliable sample size.  Id. at *20.  Third, he didn’t account for the multi-directional forces that the mesh endures in vivo.  Id. at *22.  An unreliable method leads to unreliable, and therefore inadmissible, opinions.

Ipse Dixit Is Never Enough:  Ipse dixit is a fancy way of saying the only proof we have of a particular fact is that someone said so.  At least one of plaintiffs’ experts (Dr. Thomas Margolis, surgeon) seems to have used the “because I say so” argument repeatedly throughout his deposition.  For example, defendant challenged Dr. Margolis’ safety and efficacy opinions on the ground that he rejected the conclusion of contrary scientific literature “without explaining a scientific basis for doing so.”  Id. at *30.  Having “serious questions” about the conflicting study wasn’t enough of an explanation for the court to conclude that the expert’s opinion was reliable.  Id. at *31.  Dr. Margolis’ opinion on complication rates was similarly excluded because when asked about studies that showed lower complication rates, his only response was “I disagree with those studies.”  Id. at *32.  Not only did this expert simply disregard contrary literature that didn’t suit his purposes, he augmented the findings of the studies he did rely on (upping the complication rate from 6.2%-24.4% to an even 25%) because “when forming his opinion . . . he assumes the worst-case scenario and errs on the side of opining as to a higher complication rate.”  Id. at *35.  Assume and err?  Red flags for unreliability in our book.

A Scientist’s Conclusion Has to be Scientific:  Dr. Margolis’ problems didn’t end there.  He also offered an opinion that a different non-mesh sling device is a safer available alternative to the defendant’s mesh sling.  His reasoning as to why the alternative device has a lower complication rate was that, because it doesn’t use mesh, it has no mesh-related complications.  That’s the definition of stating the obvious, and the court agreed:

This logic is not scientific. Dr. Margolis’s conclusion that [the other product] does not have mesh-related complications because it is not made from mesh could be reached by a jury without expert testimony.

Id. at *41-42.  If it’s non-expert opinion, don’t let the expert get on the stand and say it.

The Underlying Study has to Support the Expert’s Conclusion:  Dr. Margolis isn’t out of trouble yet.  He also opined that the infection rate for the mesh device can be 100%.  The study on which he based that opinion actually said that 100% of the mesh systems that were explanted contained bacteria.  The authors of the study specifically stated that they couldn’t determine if the bacteria leads to infection and in fact that infection is a rare complication.  Id. at *43-44.  This is why you have to read every study that an expert relies on. You don’t want your expert to overstate a study’s conclusions and you want to catch it when your opponent’s expert does.

Cherry-Picking Doesn’t Lead to a Sweet Opinion:  To the contrary, it leaves the distinctly bitter taste of unreliability.  In this case, defendant challenged the admissibility of plaintiffs’ expert pathologist (Dr. Richard Trepeta) on the ground that his conclusions were based, at least in part, on his review of pathology reports he received from plaintiffs’ counsel.  Id. at *55-56.  When the proponent of the expert selects the data, there is no standard by which the sample selection is governed.

The plaintiffs do not explain how or why they chose these twenty-four reports for Dr. Trepeta’s review, and without such an explanation, I have no way of assessing the potential rate of error or the presence of bias. . . . [t]here are no assurances that [plaintiffs’ counsel] did not opportunistically choose samples while ignoring others that might have weakened or disproved [the expert’s] theories. . . . I similarly have no way to ensure that the plaintiffs’ counsel did not provide Dr. Trepeta with only those pathology reports that tended to strengthen, rather than refute, Dr. Trepeta’s opinions.

Id. at *57-58 (citations omitted).  This is a good reminder that, while conducting a thorough review of everything plaintiff’s expert says he relied on, you should also look for the stuff he didn’t rely on because it wasn’t provided to him.  What’s missing may be even more important than what’s there.  A portion of Dr. Trepeta’s opinions, not based on the plaintiff-selected data, was ruled admissible.

You Can’t Find Causation Without Seriously Ruling Out the Alternatives:  Specific causation requires a reliable differential diagnosis.  Plaintiffs wanted their pathologist to testify that defendant’s mesh device was the cause of Ms. Sanchez’s injuries.  At his deposition, however, when asked about specific alternative causes, Dr. Trepeta admitted that he had not considered or ruled them out.  Plaintiffs attempted to rely on Dr. Trepeta’s testimony concluding that plaintiff was “experiencing symptoms which directly correlate with the mesh” as evidence that he ruled out alternatives.  The court said this was nowhere close to good enough to satisfy Daubert.

This vague, conclusory answer, however, is insufficient for Daubert‘s reliability prong. Differential diagnosis must take serious account of other potential causes to be regarded as a reliable basis for a specific causation opinion.  Here, Dr. Trepeta did not consider alternative causes for some of Ms. Sanchez’s most pervasive symptoms, including dyspareunia and scarring, and he simply inferred without any scientific basis or reasoning that her symptoms “appear to correlate with the mesh.” A wholly conclusory finding that lacks any valid scientific method cannot maintain a differential diagnosis.

Id. at *62-63.

Protocol is Everything in Testing:  Plaintiffs also offered a joint report authored by a chemist (Dr. Jimmy Mays) and an engineer specializing in polymer science (Dr. Samuel Gido).  Their opinions were based on tests they conducted of both exemplar devices and 14 explanted devices.  In addition to problems with sample size, lack of random selection, failure to calculate statistical significance or an error rate, and no information on how the explanted material was preserved or handled before it reached them, id. at *69-70, the court had significant problems with these experts’ complete lack of any established testing protocol.  Apparently, the litany of errors committed by these experts was too long to include them all in the decision, so the court cites only a few examples of the lack of protocol.  For example, they didn’t prepare a written methodology for their testing.  The “cracking” standard used by Dr. Gido was “completely subjective” and “cannot be found in any published material.”  Id. at *72.  When it came time to clean the explants and the exemplars, the experts used two different cleaning methods.  Id.  And, finally, some test results were simply left out of the experts’ report.  Id. at *73.  The court summed it up nicely:

Although Drs. Mays and Gido performed tests that are supported by the literature, the haphazard application of these tests, errors, and changes to their report lead to the conclusion that their methodology is unreliable. Vigorous adherence to protocols and controls are the hallmarks of “good science.”

Id.

Labeling Opinions Have to Be Reliable Too:  Plaintiff offered two experts on labeling – gynecologist Dr. Mark Slack and regulatory consultant Dr. Peggy Pence.  While the court concluded that testimony about what should have been included in the device’s labeling (directions for use and patient information) may be helpful to the jury in assessing plaintiff’s failure to warn claim, neither expert met Daubert’s reliability standards in reaching their conclusions.

First, Dr. Slack’s report is largely the “narrative review of corporate documents . . . riddled with improper . . . state of mind and legal conclusion” testimony referenced above.  So, that’s one strike already.  As to the adequacy of the risk information in defendant’s labeling, Dr. Slack identified seven pieces of information he believed should have been in the labeling.  But that’s all he did:

Dr. Slack . . . cites no other authority supporting or explaining why certain information is required. Without any indication of the principles or methods used to establish these seven factors, I cannot reasonably assess reliability. Dr. Slack’s subjective and conclusory approach is evidence that his opinion is based on mere speculation and personal belief.

Id. at *85.

Dr. Pence likewise provided a list of warnings and risks she believes should have been included in the labeling and patient information.  She opines that by not including these warnings, defendant fell short of the standard of care.  However, while Dr. Pence points to literature about complications with the mesh device, that literature

does not go to the heart of her opinion–that [defendant] failed to meet the “standard of care required of a medical device manufacturer” in its deficient labeling of its product. In other words, although this authority demonstrates that complications occurred, it does not provide any guidance as to whether these complications should have been included as warnings. . . . Eliminating this peripheral information, Dr. Pence is left with ipse dixit sources like “the standard of care” and “a matter of ethics” both of which fall short of Daubert‘s reliability prong.

Id. at *94-95 (citations omitted).  Relying on FDA regulations didn’t save Dr. Pence either:

Here, expert testimony about the requirements of the FDCA, which are not at issue in this case, could lead to more confusion about the failure-to-warn claim than enlightenment. The jury might think that the FDA regulations govern warning requirements in California, whereas Dr. Pence is actually using the FDA regulations as a model for the contents of labeling materials. Given that the probative value of expert testimony on FDA requirements is substantially outweighed by the risk of jury confusion, I cannot admit Dr. Pence’s testimony as it relates to the FDCA or FDA regulations.

Id. at *96.

Postmarket Vigilance Opinions are Improper and Irrelevant:  Finally, Dr. Pence also sought to testify that the defendant failed to report adverse events to the FDA and that this rendered the mesh device misbranded.  Id. at *97.    Testimony about compliance with FDA reporting requirements is irrelevant in a case where there are no FDCA claims.  Even if marginally relevant, that relevance is outweighed by the risk of juror confusion described above.  Finally, even if the topic was more than marginally relevant, the testimony is an impermissible legal conclusion.  Id. at *97-98.  In other words, for oh so many reasons, it was excluded.

None of plaintiffs’ experts got away unscathed, and for a wide variety of Daubert violations.  There is a wealth of information in these decisions, and while we would never suggest that they weren’t stimulating reading, we would suggest a strong cup of coffee if you plan to plow through all three.