
This guest post is from long-time friend of the blog Bill Childs, from Bowman & Brooke, who also wishes to thank Elizabeth Haley for research assistance.  It’s a reworking of a piece on bogus scholarly literature that Bill previously published here.  We thought it was both good and relevant enough that we approached Bill with a request to re-run it as a guest post on the Blog, and he graciously accepted.  As always, our guest bloggers are 100% responsible for the content of their posts (and here that disclaimer also extends to B&B and its clients), and deserve all the credit (and any blame).

**********

The Daubert court, in interpreting Rule 702 of the Federal Rules of Evidence, laid out various non-exclusive criteria for consideration in evaluating proposed scientific evidence, among them peer review.  As the Court put it:  “The fact of publication (or lack thereof) in a peer reviewed journal…will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”  Daubert v. Merrell Dow Pharms., 509 U.S. 579, 594 (1993).  Peer review, or the absence thereof, was mentioned repeatedly by the New Jersey Supreme Court in endorsing Daubert in the recent decision in In re Accutane Litigation, 191 A.3d 560, 586, 592, 594 (N.J. 2018).  Among other things, the Court noted that the plaintiffs’ expert had not submitted “his ideas…for peer review or publication,” and considered that failure to be a strike against his methodology.  Id. at 572.

Compared to other Daubert factors (or those described in the subsequent comments to Rule 702), the presence or absence of peer review may seem more binary − i.e., easier for a court to evaluate − it’s either there or it’s not.  Not so, either in the traditional sense of peer review or in the changing world of things that now get called peer review.  Because of this perceived simplicity, though, the factor frequently gets less attention than it deserves.  Litigants should think about peer review as being more complex than it appears, and in some specific contexts, additional exploration − whether through discovery into your adversaries’ experts, or early investigation of your own potential experts − may make sense.

Daubert vs. Predator

One fascinating consequence of this consideration of peer review in the Daubert context is the potential for experts publishing litigation-related work in what are called “predatory journals” (sometimes also called “vanity publications”).  See Kouassi v. W. Illinois Univ., 2015 WL 2406947, at *10-11 (C.D. Ill. May 19, 2015); Jeffrey Beall, “Predatory Publishing Is Just One of the Consequences of Gold Open Access,” 26 Learned Pub’g 79-84 (2013); John Bohannon, “Who’s Afraid of Peer Review?” 342 Science 60-65 (Oct. 4, 2013).

Predatory journals, like the eponymous Predator in the 1987 film and its 2018 reboot, camouflage themselves.  They make themselves look not like the Central American jungle background, but like legitimate medical or scientific journals.  Their publishers’ websites generally look like legitimate publishers’ websites (if sloppy at times), their PDFs look like “real articles,” and their submission process might even look normal.  They’ll even claim to have peer review and editorial boards and all the rest of what you expect from journals.  Like the Predator, they even try to manipulate their editorial voices to sound like real journals.

These journals are, however, just aping the façades of real journals.  They typically do not have legitimate peer review processes − or possibly any review processes at all.  Frequently, if an author pays the exorbitant fees, the submitted article will get published.

Myriad examples exist revealing such journals as frauds.  My favorite is probably the publication of a case report of “uromysitisis,” an entirely fictional condition − first referenced in Seinfeld as a condition from which Jerry claims to suffer after being arrested for public urination − by the purported journal Urology & Nephrology Open Access Journal.  The author of the intentionally nonsensical article − not a urologist, nor a medical doctor at all − wrote about his experience here.  After that article’s exposure as an obvious fake, and something that even the most casual of reviewers should have rejected, the article was removed, but the “journal” is still up and publishing on the MedCrave site, described, a bit awkwardly, as “an internationally peer-reviewed open access journal with a strong motto to promote information regarding the improvements and advances in the fields of urology, nephrology and research.”  A few years earlier, a computer scientist published an article consisting solely of the phrase “Get me off your [obscenity] mailing list,” with related graphs, repeated for eight pages.  That journal remains in existence as well.

Such journals are largely set up to entrap new (and naïve) scholars who are under tremendous pressure to publish for promotion and tenure purposes − but they also can provide an opportunity for dubious expert witnesses to get something published that they can cite as “peer reviewed,” especially as courts more and more often note the presence or absence of peer review.  It isn’t news to many litigation experts that having peer review for some of their more outlandish assertions can increase the odds of their testimony being admitted.  If an expert has in fact published in a predatory journal (and it can be shown that the expert knew or should have known what the journal was), that should count against the admissibility of the testimony.

Given the camouflage, it is fortunate that there are resources and strategies that can help identify such publications.  Retraction Watch, published by the Center for Scientific Integrity and headed by science writer Adam Marcus and physician and writer Ivan Oransky (full disclosure: Ivan and I are friends, based in large part on our shared love for power pop like Fountains of Wayne and western Massachusetts bands like Gentle Hen; he should not be blamed for my Predator references), while not focused solely (or even largely) on predatory journals, is an accessible look at the world of retractions “as a window into the scientific process.”  They keep an eye out for interesting developments in the world of predatory journals, and scientific publications generally, and their coverage is what made me suspicious when, in one of my cases, an adversary’s expert’s article was published by a MedCrave journal (home to the Seinfeld article).  Retraction Watch’s coverage of that article led to what I assume will be the only time in my career I had the chance to ask a Ph.D./M.D. whether he was familiar with Seinfeld and whether the show is, in fact, fiction, based on his publishing in − and in fact being listed as an editor of − another MedCrave journal.

There is also a list of suspected predatory journals archived at Beall’s List.  The appearance of a journal on that list is not conclusive evidence that it is predatory, but it is enough to raise questions.  The removal of a journal from the Directory of Open Access Journals for “editorial misconduct” or “not adhering to best practices” (see list, here) is another giveaway.  Loyola Law School’s “Journal Evaluation Tool” can also provide a useful rubric, accessible to non-scientifically-trained lawyers, for evaluating whether a journal is likely legitimate.  And your own experts can likely provide feedback to you about particular journals.

Most experts will not have published in predatory journals.  But it is still worth the time to explore the question, especially about pivotal articles on which the experts are relying − whether the expert is your adversary’s or your own.  Even if the publication offer was innocently accepted (i.e., even if the author did not realize she was publishing in a predatory journal), the publisher’s lack of rigor in evaluating the article should at a minimum eliminate any weight given to the peer review factor.  And if an author has intentionally published in such a journal, that should be the equivalent of an intentionally false statement in a C.V.

Not All Peer Review Is the Same

Of course, these relatively new faux journals are not the only way experts get published.  Consider the most traditional form of peer review, where the editors of a journal have outside reviewers − usually with their identities screened from the authors − evaluate the quality and originality of the work, confirming that the methodologies presented appear legitimate and that the conclusions reached are reasonable based on what’s described.  Given that those goals line up nicely with the goals of a Daubert analysis, it is sensible for a court to look at such review as a potential indicator of reliability − indeed, that’s why peer review is a factor in the first place.

But even if a proffered expert testifies to having followed a methodology that matches something in a peer-reviewed publication, it is often worth at least a few deposition questions about the review process and a line in your subpoena duces tecum requesting copies of any materials the author has received relating to the review, or even an attempt at some third-party discovery on the journals in question − though some courts may limit or refuse that discovery.  See, e.g., In re Bextra & Celebrex Mktg. Sales Practices & Prod. Liab. Litig., 2008 WL 859207 (D. Mass. Mar. 31, 2008) (granting protective order for non-party medical journal publisher, expressing concerns about a chilling effect).  The propriety of allowing such discovery is beyond the scope of this article, but I addressed it in more detail in The Overlapping Magisteria of Law and Science: When Litigation and Science Collide, 85 Neb. L. Rev. 643 (2007).

If you get peer review notes, it’s possible you’ll find that a reviewer recommended the removal of a conclusion that the expert is now presenting, or that the reviewer warned against a particular inference from what is in the article.  Making it even easier, some journals − traditional and, more often, “open access” − are now posting their reviewers’ comments online.  Even if you do not find anything relevant, most experts will readily concede that peer review reflects at most an “approval” of the overall approach and is not a guarantee of correctness as to conclusions.  And sometimes you’ll be able to establish that the study in question was based on flawed data or that the work done for litigation did not, in fact, use the same methodology as that in the publication.  See, e.g., In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig., ___ F. Supp. 3d ___, 2018 WL 5276431, at *11-13, *28, *34, *37-38, *50-51 (S.D.N.Y. Oct. 24, 2018) (rejecting expert’s reliance on “repudiated” open access journal article by an author who did not disclose retention as a plaintiff’s litigation expert); In re Viagra Prods. Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009) (reversing an initial denial of defendants’ Daubert motion after learning of flaws in underlying data and processing, noting that “Peer review and publication mean little if a study is not based on accurate underlying data.”); Palazzolo v. Hoffman La Roche, Inc., No. A-3789-07T3, 2010 WL 363834, at *5 (N.J. Super. Ct. App. Div. Feb. 3, 2010) (finding no abuse of discretion in excluding an expert’s conclusion based on a finding that the expert did not in fact use the methodology claimed to have been used in the underlying peer-reviewed study).

Sometimes, even in a more traditional context, the peer review that was performed was not what the Daubert court likely pictured, particularly when the work at issue is outside the so-called “hard sciences.”  In a publicized example, the review of a history-oriented book about the lead and vinyl chloride industries, authored by frequent plaintiffs’ experts and published by the University of California, involved reviewers known to − and in some cases recommended by − at least one of the authors.  See 85 Neb. L. Rev. at 660-63 (describing this situation; the original book website has since been removed).  Whether or not that review was adequate for the academic purpose, it was materially different from, say, the review of a double-blind clinical trial, and the facts surrounding it seem plainly relevant to how much weight a court should give it under Rule 702 and Daubert.  Without that discovery, the court may well not have learned what “peer review” meant in that context.

Consider also the scenario where an expert says that their methodology has gone through peer review but the article has not yet been published.  Again, it may be worth pursuing more details, especially if the expert seems likely to cite to that review in defending their position.  If it has not yet been accepted for publication, consider requesting a copy of the comments the expert received from the reviewers. If those comments are provided, they may be helpful; if their production is refused, the fact of that review should be rejected as a basis for admissibility.

What To Watch Out For

Fundamentally, the important thing is to look through your and your adversaries’ experts’ C.V.s with care, especially as to articles that are directly on point with the issue you’re addressing.  It is not enough to think about what the articles say, and it also is not enough to think to yourself, “Well, that sounds like a legitimate journal.”  Look at the publisher’s site; look for hints in the article itself; and do some searches.  Ask a few questions of the expert about author fees and what the peer review entailed, and throw in a document request to see if there is something worth exploring further.  And if you are dealing with what you think is a predatory journal, be ready to teach a judge what that means; as of this writing, no court has referenced “predatory journals” in a reported Daubert decision.