
Bexis spotted this one – because he’s compulsive and checks the Pennsylvania Supreme Court website every morning to see if any of his half-dozen pending matters have been decided. For however long it takes you to read this post, we want you to think of adverse drug or device experience reports (“ADEs”) as no different from truck wrecks. That’s instead of thinking of pharmaceutical mass torts as train wrecks, which is something we do all the time.
Trust us, there’s a reason for this. It’s all about the use and misuse of so-called “similar occurrence” evidence in the trial of product liability litigation.
Everybody who defends drug and device manufacturers in product liability cases knows all too well how much plaintiffs love to wave ADEs around in front of juries as “evidence” that the defendant knew all along that its drug/medical device/vaccine caused whatever injury they’re claiming this week. The biggest trouble with this line of attack is that the FDA itself, which requires collection and submission of ADEs in the first place and which decides what should be reported as an ADE, says they’re no good as evidence of causation.
The FDA requires essentially anything bad to be reported as an ADE if it occurs near the time a drug was used – “whether or not considered drug related.” 21 C.F.R. §314.80(a). For devices, the reporting standard is any injury that “may have been attributed to a medical device.” Id. §803.3(d). We’re pretty sure that there’s a similar standard for biologics, too, but we don’t know it off-hand. Please feel free to enlighten us.
Obviously, these regulations are intentionally broad, as the Agency wants to get as much information as possible into the system. Thus, another of FDA’s regulations makes clear that ADEs are not “admissions” of causation:

A report or information submitted by an applicant under this section (and any release by FDA of that report or information) does not necessarily reflect a conclusion by the applicant or FDA that the report or information constitutes an admission that the drug caused or contributed to an adverse effect.

21 C.F.R. §314.80(k). For its own regulatory reasons, the FDA wants more, rather than less, information and thus doesn’t restrict reporting to events “caused” by regulated products. The Agency has made this very clear for at least a decade now:

1. For any given ADE case, there is no certainty that the suspected drug caused the ADE. This is because physicians and consumers are encouraged to report all suspected ADEs, not just those that are already known to be caused by the drug. The adverse event may have been related to an underlying disease for which the drug was given, to other concomitant drugs, or may have occurred by chance at the same time the suspect drug was administered.
2. Accumulated ADE cases may not be used to calculate incidences or estimates of drug risk.

Annual Adverse Experience Drug Report: 1996 (emphasis added). You can read the whole thing yourself here. An FDA-commissioned evaluation of ADEs observed:

[T]he reporting rate is not a true measure of the rate or the risk. . . . An observed event may be due to the indication for therapy rather than the therapy itself (e.g. suicide after anti-depressant); therefore observed report associations should be viewed as signal detecting, and causal conclusions drawn with caution.

Pharmaceutical Safety Assessments Analysis (4/11/03) at second unnumbered page. Thus the FDA stopped making “causality assessments” based on ADEs because the results were “dubious,” “inefficient,” and “low quality”:

[C]ausality assessments for spontaneous ADR reports. . .should be used only when a signal has arisen. Much of the work in causality assessment goes to distinguishing between probable and possible caseness, a distinction of dubious value. Routine causality assessments of spontaneous reports were dropped at FDA in 1983 because of its limited value and because it was a major source of delay in entry and contributed to backlogging. Hopefully, regression to an inefficient and low utility activity can be avoided based on prior experience.

Indeed, anybody who asks the FDA for copies of ADEs gets a boilerplate cover page with Agency “caveats” stating that “accumulated” ADEs “cannot be used to calculate incidence or estimates of drug risk,” and that “[t]rue incidence rates cannot be determined” from ADEs.

[O]ne or even many reports of adverse reactions often do not provide sufficient information to confirm that a drug caused the reaction. A reaction may be caused by the suspect drug, another drug that a patient is taking, or the underlying diseases for which the drug was prescribed; it may also be entirely coincidental.

Gerald Faich, “Adverse Drug Reaction Monitoring,” 314 New Eng. J. Med. 1589, 1591 (1986).
So when the plaintiff’s lawyer hands his/her expert witness a stack of ADEs and asks for an opinion on whether they prove that the defendant’s FDA-regulated product “caused” plaintiff’s jungle rot, or whatever, that’s objectionable because it misuses ADEs for a purpose for which they’re neither intended nor competent. It’s happened a lot, and most courts will put a stop to it because ADEs “make little attempt to screen out alternative causes for a patient’s condition,” and “frequently lack analysis.” Glastetter v. Novartis Pharmaceuticals Corp., 252 F.3d 986, 989-90 (8th Cir. 2001) (affirming exclusion).
The weight of precedent for the proposition that ADEs cannot establish causation in personal injury cases is pretty impressive. Over one hundred pages into one of the longest (and best) opinions you’ll ever read, Soldo v. Sandoz Pharmaceuticals Corp., 244 F. Supp.2d 434 (W.D. Pa. 2003), held that “because the [FDA] case reports themselves say that causation has not been proven, reliance on the case reports is per se unscientific.”

ADEs do not demonstrate a causal link but instead represent coincidence. Case reports and ADEs are compilations of occurrences, and have been rejected as reliable scientific evidence supporting expert opinion. . . . Unlike epidemiological studies, they do not contain a testable and systemic inquiry into the mechanism of causation. As such, they reflect reported data, not scientific methodology.

Id. at 537-38. There’s plenty more where that came from. McClain v. Metabolife International, Inc., 401 F.3d 1233, 1250 (11th Cir. 2005) (“these FDA reports reflect complaints called in by product consumers without any medical controls or scientific assessment”); Rider v. Sandoz Pharmaceuticals Corp., 295 F.3d 1194, 1199 (11th Cir. 2002) (ADEs “are merely accounts of medical events” and “reflect only reported data, not scientific methodology”); Hollander v. Sandoz Pharmaceuticals Corp., 289 F.3d 1193, 1211 (10th Cir. 2002) (ADEs “contain only limited information” and are “unreliable evidence of causation”); In re: Accutane Products Liability Litigation, 2007 WL 1288354, at *3 (M.D. Fla. May 2, 2007) (ADEs “reflect[] nothing more than an assessment of a possible relationship, not an actual relationship”); In re Meridia Products Liability Litigation, 328 F. Supp.2d 791, 808 (N.D. Ohio 2004) (ADEs held “irrelevant to establish a material issue of fact”); Dunn v. Sandoz Pharmaceuticals Corp., 275 F. Supp. 2d 672, 682 (M.D.N.C. 2003) (ADEs “are not scientific proof of causation”); Cloud v. Pfizer, Inc., 198 F. Supp. 2d 1118, 1133-34 (D. Ariz. 2001) (ADEs “are merely compilations of occurrences, and have been rejected as reliable scientific evidence supporting an expert opinion”); Caraker v. Sandoz Pharmaceuticals Corp., 172 F. Supp. 2d 1046, 1050 (S.D. Ill. 2001) (ADEs “make little attempt to isolate or exclude possible alternative causes, lack adequate controls, and lack any real analysis”); Nelson v. American Home Products Corp., 92 F. Supp.2d 954, 969 (W.D. Mo. 2000) (ADEs “do not demonstrate a causal link sufficient for admission to a finder of fact in court”); Brumbaugh v. Sandoz Pharmaceuticals Corp., 77 F. Supp. 2d 1153, 1156-57 (D. Mont. 1999) (“most significant analytical defect [of ADEs] is that they don’t isolate and investigate the effects of alternative causation”); Saari v. Merck & Co., 961 F. Supp. 387, 394 (N.D.N.Y. 1997) (ADEs “neither confir[m] nor den[y] that there is any relationship” between alleged symptoms and a product); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996) (ADEs “contain raw information that has not been scientifically or otherwise verified as to cause and effect” and cannot be a basis for expert testimony), aff’d, 158 F.3d 588 (11th Cir. 1998); Wade-Greaux v. Whitehall Laboratories, Inc., 874 F. Supp. 1441, 1481 (D.V.I. 1994) (“such data represent anecdotal information of chance associations, do not purport to assess cause and effect and have no epidemiological significance”); Hagaman v. Merrell Dow Pharmaceuticals, 1987 WL 342949, at *8 (D. Kan. June 26, 1987) (ADEs “are often incomplete, in that they often do not indicate whether other drugs are being taken, or whether other drugs might have caused the reported health problem”). These aren’t even all the arguably relevant cases – there’s more.
State courts exclude use of ADEs too, just not as often. In Reynolds v. Warthan, 896 S.W.2d 823 (Tex. App. 1995), the court held that ADEs were irrelevant and misleading on issues of causation and risk. The court followed the FDA’s expressed views, id. at 827-28, and determined that ADEs failed to “establish a causal link” between the drug and alleged injuries but instead “created a suspicion without any medical proof.” Id. at 828. A New York court likewise held that “unverified listings and reporting of adverse reactions . . . are not generally accepted in the scientific community on questions of causation.” Heckstall v. Pincus, 797 N.Y.S.2d 445, 447 (N.Y.A.D. 2005). See Pauley v. Bayer Corp., 2006 WL 463866, at *2 (Pa. C.P. Jan. 26, 2006) (ADEs are “not the product of laboratory research or any type of controlled study,” but “merely the compilation of experiential reports”).
OK guys, you ask, but what about the truck wrecks? Just getting to that. Well, plaintiffs (or at least their lawyers) are pretty smart. Faced with all of the precedent we’ve just gone through, they don’t just stand still and take their lumps. Quite the contrary. What we’ve seen plaintiffs do is have their experts gin up some sort of statistical analysis – sometimes called a “causality assessment” (Accutane, supra) – and purport to rely upon that rather than on the ADEs directly. Their argument becomes: “oh no, we’re not trying to admit the ADEs per se, and we’re not even using the ADEs as a basis for expert testimony; rather, we’ve created a new scientific study that’s admissible because it uses valid statistical analysis.”
Being defense counsel, we respond that this is a distinction without a difference. It’s a case of garbage in, garbage out, because the ADE data that is being analyzed is invalid. You can’t make inherently bad numbers any better just by crunching them – the sow’s ear remains just that.
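For readers who like to see the arithmetic, here’s a minimal sketch of the garbage-in-garbage-out point. All of the numbers and the function are hypothetical – nothing here comes from any real FDA data. The idea is that a spontaneous report count is the product of three quantities (exposed patients, true incidence, and the fraction of events that actually get reported), only the first of which the reporting system even arguably records. Two of the three terms are unknown, so no downstream statistics can back out a true risk:

```python
# Hypothetical illustration: why spontaneous ADE report counts cannot
# yield incidence rates. The database records only `reports`; the
# exposure denominator and the reporting fraction are both unobserved,
# so the true incidence cannot be solved for.

def observed_reports(exposed: int, true_incidence: float,
                     reporting_fraction: float) -> int:
    """ADE reports generated under these (unobserved) inputs."""
    return round(exposed * true_incidence * reporting_fraction)

# Two hypothetical drugs with the SAME true risk (0.1%)...
drug_a = observed_reports(1_000_000, true_incidence=0.001, reporting_fraction=0.10)
drug_b = observed_reports(1_000_000, true_incidence=0.001, reporting_fraction=0.50)

# ...produce a five-fold difference in report counts, driven entirely by
# reporting behavior (publicity, litigation, time on market), not by risk.
print(drug_a, drug_b)  # 100 vs. 500
```

No amount of statistical massaging of `drug_a` and `drug_b` after the fact can recover the equal underlying risks, because the confounding happened before the numbers ever reached the analyst.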
Still, such nifty exercises in do-it-yourself statistics do give plaintiffs a fig leaf to hide behind – a degree of separation from the oft-excluded ADEs themselves. If the judge is sympathetic, it gives plaintiffs something to argue that’s different from the argument that they’ve lost in dozens of cases.
Here’s where the truck wrecks come in. Last week the Pennsylvania Supreme Court issued a one-line per curiam order affirming in all respects a case called Hutchinson v. Penske Truck Leasing. The order (which isn’t much to look at) is here. We’ve been following Hutchinson because Philadelphia is home to a lot of pharmaceutical mass tort litigation, and we’ve argued Hutchinson in the ADE context in cases being tried there. We’ve won with it, too, but with nothing worth posting in terms of any opinions.
Hutchinson is important because it cuts the heart out of the statistical dodge that plaintiffs have started using to keep ADEs in play as supposed evidence of causation – that is, if you think of ADEs as truck wrecks. Hutchinson involved a truck wreck, and a plaintiff’s expert seeking to rely upon statistics far more impressive than anybody’s home cooking of drug/device ADEs. The government itself (NHTSA) compiled stats about the causes/effects of truck wrecks nationwide for several years. Relying on those stats, the plaintiff’s expert in Hutchinson opined that a particular aspect of a truck’s design was defective and helped cause the accident that injured the plaintiff. The trial judge let this in, and the jury awarded the plaintiff several million dollars. Hutchinson v. Penske Truck Leasing Co., 876 A.2d 978, 983-83 (Pa. Super. 2005).
On appeal, the court said “nothing doing” and reversed. All the truck accidents (substitute “ADEs” throughout, if you’d like) in the government studies could only be relevant as “other similar incidents.” Like most states, Pennsylvania does not allow other incidents into evidence unless those incidents are similar to the one at suit – and the person offering such evidence bears the burden of proving similarity. The government truck wreck studies never revealed the circumstances of any of the wrecks. That was fatal, because the plaintiff, whose expert used the studies, had to show that the wrecks in the studies were similar to the accident being litigated:

[T]he “substantial similarity” test applies whether the evidence of other accidents is offered to prove the existence of a defect, the cause of the accident, or notice of a defect. . . . The burden to prove substantial similarity of other accidents lies with plaintiff [the proponent].

Id. at 984-85.
To meet this burden required affirmative proof that (1) the product, (2) how it was used, and (3) the other surrounding “circumstances” of those other truck wrecks all strongly resembled those in the case being litigated. Id. at 985.

[T]he court must give “thoughtful consideration” to the similarities – and to any differences – between the products involved in other accidents and the product at issue. Only if the court finds substantial similarity, not only with respect to the products but also with respect to the circumstances surrounding the accidents, is the evidence of other accidents properly admitted.

Id. at 986 n.4.
Critical to our story, the plaintiff responded “that the expert reports do not constitute ‘other accident’ evidence because [plaintiff] presented no single other accident to the jury but rather presented only the reports’ conclusions from studies of hundreds of other accidents.” Id. at 985. The court emphatically shot this argument down – calling it “frivolous and illogical.” Id. “To suggest. . .that the underlying nature of this evidence of other accidents was transformed, merely because it was compiled, analyzed, and summarized to generate conclusions, defies both logic and common sense.” Id. (emphasis added).
Hutchinson was the first case in Pennsylvania, and possibly in the nation (Accutane is a similar ruling, but not as explicit), to consider whether direct evidence of other incidents and statistical studies of those incidents are governed by the same evidentiary standard. That’s why we were watching the Pennsylvania Supreme Court’s further review of the case so closely. Hutchinson’s application of an identical standard strongly supports exclusion of analogous statistical analyses of ADEs, whether conducted by the plaintiff’s expert or by somebody more reputable.
Not unexpectedly, we think Hutchinson sets out a good rule. As the court said, it’s “frivolous and illogical” to think that purportedly similar other incidents – be they truck wrecks or ADEs – can have their inadmissibility whitewashed away simply by submitting them to statistical analysis. That should be even more true for ADEs than for truck wrecks, because the major basis for excluding ADEs is the substantive invalidity of this type of data, rather than just its dissimilar circumstances. Garbage in; garbage out. The moral of the story is to keep objecting.