
We’ve been corresponding recently with long-time friend-of-the-blog Dr. Frank Woodside over the unfortunate fact that junk science these days doesn’t only mean stuff (in the Jeb! sense) that isn’t published in what passes for scientific journals – and what can be done about it.  Dr. Frank has just written a law review article about this problem:  F. Woodside & M. Gray, “Researchers’ Privilege:  Full Disclosure,” 32 Cooley L. Rev. 1 (2015), which is available online here.  Here’s the abstract:

An ever-growing chorus of academicians report that with the expanding number of academic journals there is a concomitant increase in the number of articles based on questionable methodology.  Many published studies contain improper statistical conclusions, flawed methodology, and results that cannot be replicated.  The recent controversy concerning the failure of parents to vaccinate their children because of the recommendations of flawed research exemplifies this crisis. This epidemic of faulty research has been exacerbated recently by the spread of low-quality academic journals and “pay-to-publish” journals, which will publish virtually anything for a fee.  This Article provides an analysis of a growing crisis of reliability in scientific research and how the so-called “researchers’ privilege” allows faulty research to go undetected.  This Article delineates the reasons why it is difficult, if not impossible, to evaluate published research findings without access to the underlying information that researchers have in their possession.  The Article then analyzes the state of the law regarding the ability of researchers to withhold records and data based on the so-called “researchers’ privilege.”  Finally, the Article explains why courts should favor the disclosure of research data and that confidentiality concerns should be addressed by a confidentiality order.

Id. at 1-2.  Here are the article’s subheadings, which describe its contents in more detail:

  • Misunderstanding and Misuse of Statistics and Research Methods
  • An Ever-Growing Number of Journals and “Pay to Play”
  • Fraud and Questionable Research Practices
  • Pre- and Post-Publication Peer Review Does Not Work

The article, and particularly its footnotes, is a treasure trove of source material about junk science, particularly involving retractions or exposés of published articles – starting with the infamous Wakefield autism fraud that we discussed here.  The article is worth reviewing for that reason alone.  We’ll give one example of what we’re talking about.  Ever heard of “p-hacking”?  It’s a great term for dealing with suspect scientific “research.” Here’s what the Woodside article has to say about that:

[Some statistical errors in research publications transcend mere misunderstanding and instead show that researchers manipulate statistical methods to obtain the desired results.  Psychologist Uri Simonsohn calls this phenomena “P-hacking,” and others refer to it as “data-dredging, snooping, fishing, significance-chasing and double-dipping.”  Statistics defines the P-value as “the probability that an observed positive association could result from random error even if no association were in fact present.”  For example, suppose that in an epidemiology study investigating the potential association between the administration of vaccines and the development of side effects, the P-value is 0.05. This P-value indicates that even if the vaccine had no effect, a positive association could be obtained in 5% of the studies due to random-sampling error.  Unfortunately, researchers can manipulate this relatively straightforward statistic.

Id. at 9-10 (numerous footnotes omitted).  Basically, if somebody applies enough different statistical methods to the same data (or subsets of it), that will increase the likelihood that − purely by random chance − one of those methods will generate a significant result.  A proper study sets out its protocol for statistical analysis in advance, so as to preclude after-the-fact massaging of the numbers. However, lousy journals don’t pay attention to this prospective/retrospective distinction.
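The multiple-comparisons arithmetic behind p-hacking is easy to see numerically.  Here is a minimal simulation – our own illustration, not from the Woodside article, and it simplifies by treating each extra analysis as an independent test – showing how the chance of at least one nominally “significant” result balloons far above 5% when null data get multiple looks:

```python
import math
import random

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means
    (Welch z-test; a normal approximation, adequate for largish samples)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(n_analyses, sims=1000, n=50):
    """Fraction of simulated 'null' studies (no true effect at all) in which
    at least one of n_analyses comparisons comes out 'significant' at p < 0.05."""
    hits = 0
    for _ in range(sims):
        if any(
            two_sample_p(
                [random.gauss(0, 1) for _ in range(n)],
                [random.gauss(0, 1) for _ in range(n)],
            ) < 0.05
            for _ in range(n_analyses)
        ):
            hits += 1
    return hits / sims

random.seed(1)
print(false_positive_rate(1))   # one pre-specified analysis: ~0.05
print(false_positive_rate(10))  # ten different analyses: ~1 - 0.95**10, ~0.40
```

This is exactly why a prospectively fixed statistical protocol matters: it commits the researcher to one analysis before the data can suggest which of many would “work.”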

If you’re into that kind of analysis, you’ll like this article.

The article’s legal recommendation – that attorneys faced with such expert opinions in litigation be able to obtain the data that supposedly supports the conclusions – is something of a two-edged sword.  If we can do it (subpoena underlying raw research data) to their experts and their experts’ sources, then the other side can do it to us.  Overall, however, since plaintiffs are already seeking, and often receiving, access to our clients’ internal clinical trial databases, we probably have more to gain from the demise of any “researcher’s privilege” than we would lose.  While all statistical studies have their flaws, we believe that those we rely upon are far more scientifically valid than what the other side has to offer.  Full access to the other side’s data (where they have any) should allow us to prove that.  In any event, for defense counsel attempting to examine independently the statistical basis of the studies the plaintiff is relying on, the article is also a thorough collection of the precedent in this area.

Even if the data remain confidential, at a minimum we need to get our hands on the protocol and IRB (“institutional review board”) documents.  Things like the timing of the protocol relative to when the research started (the p-hacking point already discussed), amendments to the protocol, the power calculations, and the prospectively identified analyses will often reveal whether the study results that plaintiffs like really exist or are the product of statistical legerdemain.  This, in particular, is less of a two-edged sword, since manufacturer-funded studies can’t (for FDA reasons, if nothing else) hide this kind of information.
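For readers who want to check a protocol’s power calculation, the standard two-proportion sample-size formula can be sketched as follows.  This is a hypothetical illustration with made-up event rates, not figures from any actual study; the z constants correspond to a two-sided alpha of 0.05 and 80% power:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate subjects needed per arm to detect a difference between
    event rates p1 and p2 (z_alpha: two-sided alpha = 0.05; z_beta: 80% power)."""
    effect = abs(p1 - p2)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# E.g., detecting a rise in an adverse-event rate from 5% to 10%:
print(n_per_group(0.05, 0.10))  # roughly 430+ subjects per arm
```

If a published study enrolled far fewer subjects than its own protocol’s power calculation required, its “significant” findings – especially subgroup findings – deserve extra scrutiny.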

Beyond his article, Dr. Frank just doesn’t quit.  In our recent correspondence, he sent us three recent pieces (too recent for his article) that also concern academic fraud in published scientific research:  (1) how false statements by research subjects seeking to participate in clinical trials can adversely affect the results, here; (2) a retraction in which the “fabricated results” are admitted, here; and (3) an article about complete fabrication of the peer review process – involving 64 articles in 10 journals – whereby authors wrote “peer reviews” of their own articles using spoofed email addresses, and false identities, here.

We agree that these all-too-frequent problems with the scientific literature, whether intentional or otherwise, justify courts in not permitting researchers to hide their data and methods from independent critique behind newly invented “privileges” of doubtful provenance.  The scientific method depends on constantly exposing all theories to rigorous tests of verification and falsification.