We’re proud of the anti-intellectual, Luddite image that we project through our blog. (We figure: If you’ve got it, flaunt it.)
But the truth is, we’re pretty curious guys. We’ve each written books and law review articles, and we regularly read an awful lot of stuff – not just in the course of client representations, but also in our spare time, to generate ideas to feed the beast that is this blog.
We’re now accustomed to being disappointed by short articles that appear in the legal trade press. A typical article makes these three points over the course of 2000 words: (1) a case came down, (2) the case said X, so (3) a case came down.
Thanks for the effort, guys, but we’ll just read the original for ourselves.
But what frosts us … well, okay – what frosts us today – are long articles with interesting titles that tempt us to read the whole thing with high expectations, only to learn that we’ve been conned.
The lead article in the most recent Food and Drug Law Journal did it to us today.
We should have known better. The article sports the names of four co-authors – one partner, one “senior associate,” and two mere “associates” from Fulbright & Jaworski. That’s a sure sign of big firm-itis. (We’re allowed to say that. We work at two of the biggest in the world.) The senior guy said, “I want to write an article” and enlisted help from the senior associate, who came up with a topic and then turned to someone else who turned to someone else who kicked the dog. The result was “Cause and Effect? Assessing Postmarketing Safety Studies as Evidence of Causation in Products Liability Cases.”
We waded through 26 law review pages and 166 footnotes to learn . . . nothing.
Apparently, there are companies that sell drugs. An outfit called the FDA gathers certain types of data. Plaintiffs must prove both factual and legal causation to prevail in product liability cases. Not all purported scientific evidence of causation is admissible in court. There was a case called Frye and a more recent one called Daubert that set some standards governing this type of evidence. Federal courts follow Daubert; some state courts do, too, but others don’t. And when courts and litigants are considering the admissibility of evidence, some studies are more powerful than others, with randomized clinical trials at the top of the heap.
That’s it.
Honest. We couldn’t make that up.
Well, we could, but we surely wouldn’t bother.
We at the Drug and Device Law Blog try not to be stingy with our praise. When folks say something smart, we acknowledge it. When defense lawyers at other firms – our competitors, in a sense – win important victories, we give them credit and congratulate them publicly. But what are we supposed to say about the recent ditty in the Food and Drug Law Journal?
Frankly, the article leaves us with three questions: First, what were the authors thinking? Lawyers presumably write articles either to educate their readers or to demonstrate their own expertise. This article did neither.
Second, where were the peer reviewers? The Food and Drug Law Journal is one of the few legal journals that are edited by real lawyers, rather than law students. What did the reviewers think this article added to the existing literature?
Third, we wasted an hour reading this thing, and we don’t have time to read something else to generate a blog post this weekend. How the heck are we going to write something about an article that should never have been written?
We guess we just answered that last question, anyway. We’ll simply join the chorus criticizing the state of legal scholarship these days, and hope that things get better in the future.