
Today we’re boldly going where lawyers usually fear to tread (at least alone) – into the realm of epidemiology, albeit perhaps loosely defined. Why? Well, we do have to come up with things to write about, and there’s only so much preemption theorizing that even we can do before it sounds like we’re babbling.
Also, this particular topic, epidemiology, basically forced itself upon us. We recently received, from two entirely unrelated sources, a couple of articles that illustrate the pitfalls of using the FDA’s adverse drug experience reporting system to prove anything about causation in drug product liability litigation. We’ve already posted more than once about most (but not all) courts refusing to permit experts to rely on ADE epidemiology as proof of causation in prescription drug cases. We won’t rehash those posts, since you can read them yourselves, but in a nutshell, the biggest problems courts have had with ADE-based epidemiology are: (1) the reports are wholly anecdotal, since the government does not require there to be any causation criteria; and (2) since reporting is voluntary, it’s highly influenced by outside events – chiefly adverse publicity concerning this or that drug.
That’s the legal side.
These two articles, on the other hand, are from scientific journals, and they illustrate these problems from the mathematical side. Ordinarily us talking about math without guidance from experts would be like us running with scissors. But we’re willing to leave our nannies behind for once because, first, we think we’re familiar enough with ADE-related issues to make sense of it, and, second, we have to feed the blog, and we find this stuff interesting.
We know. We’re kinda weird like that.
The most recent of the articles is in an (on-line?) journal called Pharmacoepidemiology (now that’s a mouthful). It compared ADE reporting of a particular adverse event, rhabdomyolysis (that’s another mouthful – it’s a serious muscle and kidney condition), among various statins (cholesterol lowering drugs) marketed in the United States.
A few years ago one of these statins was already the target of major mass tort litigation, and plaintiffs’ lawyers were hungrily eyeing the rest, given the classwide effect.
That didn’t happen, but we do remember how plaintiffs’ lawyers liked to make the argument, based simply on adding up ADEs, that “your statin had a higher number of adverse events than the rest of them.”
The defense reply was, “Well, duh – since it was the first of them to be hit with a major contraindication, you’d expect more reports even if injuries were no more common.”
But back then, the numbers weren’t really there to provide solid support for that response.
We defense lawyers just knew in our gut that publicity bias in ADE reporting was real.
Well, the Pharmacoepidemiology article goes a long way to providing the publicity bias data that we wished we’d had back then. In particular, it provides the previously unavailable denominator (total cases) numbers without which you can’t really calculate reporting frequency. The article has all the mathematical details on how that denominator was calculated – like any study it has strengths and weaknesses (which anyone interested can read about). But that stuff’s complicated; and we’re lawyers, not statisticians, so we’ll leave it and go directly to the bottom line.
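To make concrete what that denominator does, here’s a minimal sketch of the calculation (the numbers are ours and purely hypothetical, not from the article):

    # Reporting frequency = reports actually submitted / estimated total cases.
    # All numbers below are hypothetical, for illustration only.
    reports_submitted = 150          # ADE reports the FDA actually received
    estimated_total_cases = 1000     # the "denominator" the article had to estimate
    reporting_frequency = reports_submitted / estimated_total_cases
    print(f"Reporting frequency: {reporting_frequency:.0%}")   # prints "Reporting frequency: 15%"

Without that second number, all you have is a raw count of reports, which tells you nothing about how often the condition was actually reported when it occurred.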
Just as we defense lawyers have long suspected, the statin data show huge reporting differences for the same serious condition among drugs of the same class. The article concludes that there was between a 500% and 600% reporting difference between drugs of the same class – even though everybody now agrees the risk of this condition is a classwide effect of any statin drug.
Where do those dramatic numbers come from? The article calculates that, for the one statin that was ultimately recalled, 30% of its rhabdomyolysis cases were reported to the FDA. At the other end, the least-reported statin saw only 5% of identical cases reported. Another analysis, with somewhat different assumptions primarily concerning length of use, calculated the reporting difference at 20% versus 4%, respectively. The article points out that this bias might have been accentuated by promotional activity, since the drug with the fewest reports (according to the article) was “promoted. . .as safer than other statins.”
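For what it’s worth, here’s our own back-of-the-envelope arithmetic (ours, not the article’s) connecting those reporting percentages to the 500-600% figure:

    # The 500-600% reporting difference, as we read it, is just the ratio of the
    # highest to the lowest reporting percentage within the statin class.
    primary = 0.30 / 0.05          # 30% vs. 5%  -> 6.0, i.e. roughly 600%
    alternative = 0.20 / 0.04      # 20% vs. 4%  -> 5.0, i.e. roughly 500%
    print(f"Primary analysis: {primary:.0%}")          # 600%
    print(f"Alternative analysis: {alternative:.0%}")  # 500%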
Using statins as a model of reporting differences within a class of drugs, the article calculated the extent of the publicity bias in voluntary FDA adverse event reporting. Where there’s major adverse publicity, such as a product recall or a public health alert, the publicity bias gets huge – between 500 and 600 percent.
And even that 500-600% publicity bias probably understates what happens when mass tort litigation really gets going. That’s because the article deliberately excluded the post-recall time period from the statin data. In real litigation, once the complaints generated by all that plaintiff solicitation needed to create a mass tort start being served on the defendant manufacturer, it is obligated to treat each and every one of them as an adverse event report and submit it to the FDA.
The second interesting statistical point in the Pharmacoepidemiology article is its estimate of how much significant adverse publicity – such as the addition of a publicly announced major contraindication – biases the voluntary reporting frequency for the same drug (as opposed to across an overall class of drugs). Examining the relevant data, the article concludes that not quite 15% (14.8%) of hospitalizations for the relevant adverse effect were reported to the FDA before the publicity. Afterwards, with the medical community more attuned to the adverse effect, fully 35% of cases were reported.
“Fully” 35%? That the reporting rate was that low, even after that kind of publicity, is disturbing in its own right for other reasons – but those reasons must await some other day, at least on this blog.
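For those keeping score, the “more than 100%” uptick we refer to below follows from those before-and-after figures (again, our arithmetic, not the article’s):

    # Relative increase in the reporting rate after the adverse publicity.
    before, after = 0.148, 0.35     # 14.8% of cases reported before, 35% after
    relative_increase = (after - before) / before
    print(f"Increase in reporting: {relative_increase:.0%}")   # roughly 136%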
We know that we’ll be confronted with similar ADE-based allegations in the future. It’s inevitable, since plaintiffs’ experts know every bias in ADE data and how to manipulate the statistics to maximize or minimize that bias, as the opinion they wish to give requires. But now we at least have some reasonably firm scientific data from which to argue that ADE statistics in mass tort situations are so badly biased by adverse publicity (the very adverse publicity that spawned the mass tort) that they can’t validly be relied upon to prove anything. Specifically: (1) for the same drug, major adverse publicity can cause an uptick in voluntary reporting of more than 100%; and (2) as between drugs in the same class, involving an identical risk, the effect of publicity bias can reach fully 500-600%.
Hence the two main takeaways from the Pharmacoepidemiology article:

  • The extent of spontaneous adverse event reporting to FDA may not be uniform for all drugs, as demonstrated by the case study of statin-associated rhabdomyolysis.
  • Factors such as publicity surrounding a drug, the extent of use in the US population, and the marketing of a drug may affect the number of cases reported to the AERS [adverse event reporting system].

Pharmacoepidemiology Article, at “Key Points.”
There’s a second article of similar interest because it also deals with ADE statistics. This article was published in the Archives of Internal Medicine in September of last year.
We’re inclined to be a little (actually a lot) more skeptical of this one, since one of the authors, Dr. Curt Furberg, is a plaintiffs’ expert. He’s appeared for plaintiffs in the Rezulin mass tort. See In re Rezulin Products Liability Litigation, 309 F. Supp.2d 531, 545-46 (S.D.N.Y. 2004) (excluding Furberg testimony as “hav[ing] no basis in any relevant body of knowledge or expertise”). He was also the lead plaintiff expert who appeared in a Paxil case (Colacicco) as amicus curiae along with eleven other plaintiffs’ experts – none of whom ever disclosed their employment by the plaintiffs’ side to the Court. Looking at page 1758 of the article, we see that Dr. Furberg again makes no financial disclosure of his plaintiffs’-side work.
To be fair, though, there are two other authors on the article besides Dr. Furberg – both of whom work for or with an organization named the Institute for Safe Medication Practices.
One thing that the Furberg article has going for it is that the statistics are a lot less sophisticated, so it’s easier to understand. All the article really does is add up the number of deaths and other serious adverse events reported to the FDA over an eight-year period (1998-2005) – it finds an overall increase in yearly reported incidents of more than 250% – and then breaks down the ADE reports by drug.
That means the Furberg article contains a “top ten” list of drugs with the most “reported” deaths and serious injuries over this eight-year period. That list has predictably been picked up on the Internet, where it now forms the basis for a starkly titled “10 Deadliest Drugs” webpage.
Unlike the first article we discussed in this post, Dr. Furberg’s piece doesn’t attempt any fancy estimates of how many people have actually used any of the drugs that find their way onto these lists. Thus “estrogens” (oral contraceptives and hormone replacement) and “insulin” (diabetes), which are used by many millions, top the list of drugs with the “most” serious outcomes, ahead of various other drugs (including things called DMARDs – disease-modifying antirheumatic drugs) probably used by far fewer people.
In short, there’s no “denominator” at all for these top ten lists. It’s like selecting the 10 best hitters in baseball by looking at the number of hits they got while ignoring their batting averages – and we don’t think that anyone would really believe that, for example, Juan Pierre (221 hits) was a better hitter than Barry Bonds (.362 batting average) in 2004.
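A quick hypothetical (the drug names and numbers are invented by us, purely for illustration) shows why a raw count, standing alone, tells you nothing:

    # Raw event counts vs. per-user rates; all numbers are made up for illustration.
    drugs = {
        "widely used drug": (5000, 10_000_000),   # (reported serious events, estimated users)
        "rarely used drug": (1200, 200_000),
    }
    for name, (events, users) in drugs.items():
        print(f"{name}: {events} reported events, rate {events / users:.2%} per user")
    # The widely used drug tops a raw-count "top ten" (5000 vs. 1200 events)
    # even though its per-user rate (0.05%) is a tiny fraction of the other's (0.60%).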
Nor does the Furberg article distinguish between adverse events involving drug abuse and those associated with proper therapeutic uses. Thus, four of the six top slots on the “death” list are occupied by “opioid analgesics” – narcotics, in other words – which are frequently taken illegally to get high.
We do give the article credit for at least making an attempt to remove from consideration ADE reports that really are just computer-generated complaints from mass tort actions – so it at least recognizes that problem. Thus the Furberg study excludes all ADEs filed more than two weeks after a drug was withdrawn. That’s a decent start, but there are a lot of mass torts where the product is not withdrawn. Given that confounding factor, one of the article’s primary conclusions – that “The change and overall risks can be primarily attributed to a minority of important drugs” (p. 1757) – could well be litigation, rather than safety, driven. If some of these “important” drugs are the ones saddled with mass tort litigation (and we recognize a number of major litigation targets on the article’s top ten lists), then litigation could well be playing a larger role in the overall results than the authors acknowledge.
Thus, in a number of respects, the Furberg article is an example of how not to use ADE statistics. But that’s not to say that the article doesn’t have anything interesting or useful to say – because it does.
For one thing, a set of fascinating multi-year graphs (p. 1757) illustrates the dramatic effects of adverse publicity on ADE reporting.
Another point is that withdrawn drugs – those drugs that the plaintiffs’ lawyers and their political allies love most to harp about – didn’t really have that much effect on ADE reporting. “Contrary to our expectations, drugs related to safety withdrawals were a modest share of all reported events and declined in importance over time” (p. 1757). That’s a finding of significant importance, although we can’t really say it’s all that surprising to us. After all, with all ADEs for any withdrawn drug deliberately excluded after fourteen days, that such ADEs would “decline. . .over time” can’t really be all that hard to believe.
The Furberg article does acknowledge the “many known limitations” of ADE data that we’ve discussed in our prior posts: that the data are “voluntary” rather than “systematic”; that the reports “do[] not establish causality”; that reported “events” include a wide variety of disparate occurrences; and that reported events are only a “fraction” of actual events (pp. 1757-58).
But from our point of view, perhaps the most useful thing about the Furberg article is its rather frank acknowledgement of reporting bias related to both litigation and adverse publicity. It states:

We also explored whether the results were influenced by external factors such as. . .safety withdrawals, or legal claims. Our 8-year study period featured several such episodes. . . . [examples omitted]. It seemed possible that increased reporting could have been stimulated through media publicity and lawyers seeking injured clients through radio, television, and Internet advertising. We limited the impact of this phenomenon by excluding reports received more than 14 days after a drug was withdrawn for safety reasons. Even if all cases associated with withdrawn drugs involved legal claims, the subset data showed that such claims accounted for less than 10% of all events and declined since 1999. Nevertheless, the influence of publicity and legal claims can be seen in specific drugs. . . . [examples omitted] However, overall, the increased reporting effect from these events was partially adjusted for, was limited to relatively few drugs, and may have declined over time.

Furberg Article, at 1758 (emphasis added).
Given the nature of our practices, we tend to represent the manufacturers of those “relatively few drugs” for which ADE reporting is concededly affected by litigation and publicity. Thus we welcome ammunition, like “the influence of publicity and legal claims can be seen,” that we can use to argue that, because of such biases, ADEs cannot be used as evidence of causation. And we especially welcome ammunition tossed to us over the transom by somebody (like Dr. Furberg) who is normally associated with the other side of the “v.”