We’ve always hated the concept of cy pres class action settlements.  A cy pres distribution is an admission that, even without opposition, the plaintiffs cannot prove who was injured and by how much.  Cy pres also takes money supposedly belonging to the injured class and gives it to charities not injured by anyone, so the charities can use the money to encourage more litigation.  To us, cy pres not only encourages abusive class action litigation, but violates all kinds of basic legal rules, such as Due Process, the Rules Enabling Act, and the First Amendment (not all absent class members necessarily agree with the views of the charities being allowed to take their money).

We won’t prolong this post with any detailed discussion of what “cy pres” means in this context.  Basically it’s a misapplication of an estates doctrine (legal French beginning with the words “cy pres”) invoked to allow class action settlement funds not to be given to the class at all, because it’s supposedly “too hard” to figure out damages, and the money purportedly owned by the class is given to charities instead.  It’s a gimmick to facilitate class actions that otherwise would fall of their own weight.

We lambasted the cy pres settlement at issue in the pending Supreme Court case, Frank v. Gaos, No. 17-961, as a “poster child” for class action abuse well before the Supreme Court granted certiorari.  It had everything going against it:  class members getting zero while the cy pres charities took everything (after a huge chunk of attorney fees was removed, of course); charities not selected by any set criteria and with connections to the parties or counsel (such as being class counsel’s law schools); use of cy pres money to “increase awareness” of cybersecurity issues – a euphemism for soliciting more litigation; and cy pres being used to bloat the settlement numbers for purposes of calculating attorney fees.

Thus, we thought Frank was the perfect case to cause the Supreme Court to recoil in disgust from a process with no foundation in statute, common law, or rule.

Now Frank has been argued, and we think we were half right.  From their statements, and how they framed their questions, it’s pretty clear that none of the justices liked cy pres – certainly not the 100% version at issue here.  Consider these excerpts from the oral argument transcript:

  • “[T]here may be a question about whether the trial court adequately determined feasibility.”  Tr. at 5-6 (Sotomayor, J.).
  • “[T]his is a full cy pres award, meaning there’s no direct benefit to the class.”  Id. at 10 (Sotomayor, again).
  • “[I]s any effort made — and would it even be possible — to determine whether every absent class member or even most of the absent class members regard the beneficiaries of the cy pres award as entities to which they would like to make a contribution?”  Id. at 13 (Alito, J.).
  • “So the parties and the lawyers get together and they choose beneficiaries that they personally would like to subsidize?”  Id. at 14 (Alito, again).
  • “I think you either decide the cy pres award provides relief or it doesn’t provide relief.  If it doesn’t provide relief, you don’t get a fee for it.”  Id. at 26 (Roberts, C.J.).
  • “[D]o you think that problem is going to be meaningfully redressed by giving money to AARP?”  Id. at 42 (Roberts, again).
  • “Including a group that engages in — engages in political activity, having nothing to do with the inability of elderly people to conduct searches?”  Id. at 43 (Roberts, again).
  • “[W]here they [the absent class members] get nothing, under those circumstances, . . . what’s happening in reality is the lawyers are getting paid and they’re making sometimes quite a lot of money for really transferring money from the defendant to people who have nothing to do with it.”  Id. at 46-47 (Breyer, J.).
  • “Isn’t it always better to at least have a lottery system, then, that one of the plaintiffs, one of the injured parties gets it [the settlement funds], rather than someone who’s not injured?”  Id. at 51 (Kavanaugh, J.).
  • “But there is the appearance, as the district court said in the hearing, the appearance of favoritism and alma maters of — of counsel.”  Id. at 55 (Kavanaugh, again).
  • “[D]on’t you think it’s just a little bit fishy that the money goes to a charity or a 501(c)(3) organization that [defendant] had contributed to in the past?”  Id. at 56 (Roberts, again).
  • “The appearance problem here . . . is symptomatic of a broader question, which is why is it not always reasonable . . . to try to get the money to injured parties.”  Id. at 58-59 (Kavanaugh, again).
  • “[T]he appearance of favoritism and collusion . . . is rife in these cases.”  Id. at 61 (Kavanaugh, yet again).
  • “And at the end of the day, what happens? The attorneys get money, and a lot of it.  The class members get no money whatsoever.  And money is given to organizations that they may or may not like and that may or may not ever do anything that is of even indirect benefit to them.”  Id. at 63 (Alito, again).

Ouch.  Those are scathing comments from five justices (Thomas, J. rarely says a word) on both sides of the political spectrum, and most of them commented more than once.  So we’re reasonably confident that, if the Court addresses the so-called “cy pres doctrine,” it will not fare well.

But that’s only half of the story – standing is the other major issue.  Two justices, Gorsuch and Kagan, did not ask any questions about cy pres at all.  They were totally focused on standing.  Standing, in this context, means that none of the plaintiffs, even assuming the privacy of their Internet searches was violated, had established that they had suffered any concrete or distinct harm from the claimed breach.  That may well be a good argument, leading to a standing decision that seriously clips the wings of class actions focused on various breaches of cybersecurity.  During the oral argument, more than half of the Justices were also interested in the standing question.  Indeed, at one point they talked over each other in their eagerness to refocus the argument on that issue:

JUSTICE KAGAN: Mr. Frank –

JUSTICE GORSUCH: We – I’m sorry.

JUSTICE KAGAN: Sorry. No, go ahead.

JUSTICE GORSUCH: Oh, please go ahead.

JUSTICE KAGAN: No.

CHIEF JUSTICE ROBERTS: Justice Kagan.

JUSTICE KAGAN: I was going to change the subject.

(Laughter.)

JUSTICE GORSUCH: So was I.

(Laughter.)

JUSTICE GORSUCH: Jurisdiction?

JUSTICE KAGAN: Yes.

JUSTICE GORSUCH: Go for it.

Tr. at 14-15.  Lack of standing, of course, would mean that “the whole class action is thrown out.”  Id. at 17 (Roberts, C.J.).  We’re not going into detail, but from the discussion, it appears that the trial judge’s standing ruling was required by a Ninth Circuit decision that has since been reversed by the Supreme Court, Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016).  See Tr. at 70 (the district court “believed its hands were tied by the Ninth Circuit precedent”).  We also note that, shortly after the oral argument, the Court issued an order requiring supplemental briefing on the standing issue, with such briefing to be completed by December 21, 2018.  Merry Christmas.

A decision dismissing the entire class for lack of standing would be good for privacy class action defendants – and bad for their opponents – but it would leave the doctrine of cy pres free to continue its reign of error in the lower courts.

Or maybe not.

While a decision that the entire class action fails for lack of standing would moot the original question that the Court accepted about cy pres distributions as a settlement tool, there is an exception to mootness where an issue is “capable of repetition yet evading review.”  E.g., Spencer v. Kemna, 523 U.S. 1, 17 (1998).  Since lousy, no-injury class actions and cy pres settlements go together like . . . death and taxes, perhaps that exception applies here, as invocation of cy pres is indicative of class actions that should never have been brought in the first place because no individualized injury can be proven.

The Court’s request for supplemental briefing is a sign that, rather than remanding, the Court is inclined to address the standing question itself – and if it does that, it could also smack down cy pres even if it concludes standing is absent.  At least we can hope.

We celebrated National Cybersecurity Awareness Month a few weeks ago by bringing you the FDA’s newly published Medical Device Cybersecurity Regional Incident Preparedness and Response Playbook, with a promise to cover the Agency’s planned update to its Guidance for Content of Premarket Submissions for Management of Cybersecurity in Medical Devices, which was first published in 2014.

Well, the Agency has now published the Draft Guidance (you can review it here), and it is really interesting for a few reasons.  First, the FDA continues to view medical device cybersecurity risks through the same lens as it views any other risk.  That is to say, treatment with any medical device presents potential risks, and premarket submissions for connected medical devices should permit analysis of cybersecurity risks weighed against the device’s benefits.  Second, the Draft Guidance generally follows the philosophy and framework set forth in the FDA’s current guidance, but puts considerably more flesh on the bones.  Third, the Draft Guidance places a much greater emphasis on medical device warnings, including suggesting the inclusion of a long list of detailed information—so much information that we wonder about its usefulness and feasibility.

So what does the Draft Guidance say?  The theme is that the increasing use of connected medical devices and portable media in medical devices makes effective cybersecurity more important than ever to ensure device functionality and safety.  The Draft Guidance’s mission is clear:  “Effective cybersecurity management is intended to decrease the risk of patient harm by reducing device exploitability which can result in intentional or unintentional compromise of device safety and essential performance.”  (Draft Guidance, at p.3)  “Intentional or unintentional.”  In other words, we are talking here not only about bad actors and malicious attacks, but also accidents and other situations where no harm to a device’s function was intended at all.

One new feature is the creation of two tiers of medical devices:  A device is “Tier 1, Higher Cybersecurity Risk” if (1) the device is capable of connecting to another medical or non-medical product, or to a network, or to the Internet; AND (2) a cybersecurity incident affecting the device could directly result in patient harm to multiple patients.  All other devices are “Tier 2, Standard Cybersecurity Risk.”  (Id. at 10)  The catch-all nature of Tier 2 seems odd at first blush because it would appear to include devices for which there is no conceivable cybersecurity risk, such as orthopedic implants.  Also note that these tiers cut across the FDA’s existing statutory device classifications, such that a Tier 1 device could be a Class II or a Class III device.  They are separate criteria.  (Id.)
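For the technically inclined, the two-part test boils down to a simple conjunction.  Here is a minimal sketch – ours, not the FDA’s – of how the tiering logic reads; the function and parameter names are invented for illustration.

```python
def cybersecurity_tier(connects_to_product_network_or_internet: bool,
                       incident_could_harm_multiple_patients: bool) -> str:
    """Paraphrase of the Draft Guidance's two-part tiering test (id. at 10).

    Tier 1 requires BOTH connectivity AND the potential for a cybersecurity
    incident to directly result in harm to multiple patients; every other
    device defaults to Tier 2.
    """
    if (connects_to_product_network_or_internet
            and incident_could_harm_multiple_patients):
        return "Tier 1, Higher Cybersecurity Risk"
    return "Tier 2, Standard Cybersecurity Risk"

# A standalone orthopedic implant with no connectivity lands in Tier 2:
print(cybersecurity_tier(False, False))
```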

The consequence of falling into Tier 1 is that the Draft Guidance calls for considerably more exacting information in premarket submissions.  More specifically, premarket submissions for Tier 1 devices should “include documentation demonstrating how the device design and risk assessment” incorporate certain design controls that accomplish the following:

  • Identify and Protect Device Assets and Functionality – The focus here is on the design of “trustworthy” devices and the presentation of documentation demonstrating “trustworthiness.” A trustworthy device should prevent unauthorized use through sufficient authentication and encryption; should ensure the trustworthiness of content by maintaining “code, data, and execution integrity” through such measures as software/firmware updates and enabling secure data transfer; and should maintain confidentiality.  (Id. at 11-16)
  • Detect, Respond, Recover – As the Draft Guidance puts it, “appropriate design should anticipate the need to detect and respond to dynamic cybersecurity risks.” This includes designing the device to detect cybersecurity events promptly.  It also includes designing the device to respond to and contain the impact of cybersecurity incidents and to recover its capabilities.  This would be through such measures as routine security updates and patches, systems to detect and log security compromises, features that protect critical functionality, and measures for retention and recovery of system configurations.  It also includes something called a “CBOM”—a Cybersecurity Bill of Materials, essentially a list of hardware and software components that are or could become susceptible to vulnerabilities (a hypothetical example appears in the sketch below).  (Id. at 16-18)
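The Draft Guidance does not, as far as we can tell, prescribe a CBOM format, so the following is only our illustrative guess at what a bare-bones CBOM entry might contain; the component names, versions, suppliers, and vulnerability identifiers are all invented.

```python
# Hypothetical, minimal CBOM represented as plain data.  Every value below is
# made up for illustration; a real submission would track actual components.
cbom = [
    {
        "component": "example-rtos",          # embedded operating system
        "type": "software",
        "version": "4.2.1",
        "supplier": "Example Vendor, Inc.",
        "known_vulnerabilities": ["CVE-XXXX-YYYY"],  # placeholder identifier
    },
    {
        "component": "example-wifi-module",   # wireless hardware component
        "type": "hardware",
        "version": "rev C",
        "supplier": "Example Components Ltd.",
        "known_vulnerabilities": [],
    },
]
```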

Perhaps the most interesting part of the Draft Guidance is the recommendation for device labeling.  As product liability litigators, we hold medical device labeling near and dear to our hearts because a manufacturer’s potential liability often depends on the adequacy of the risk information and instructions for use.

The FDA seems to agree that device labeling is important.  After citing the governing statutes and regulations, the Agency counsels that “when drafting labeling for inclusion in a premarket submission, a manufacturer should consider all applicable labeling requirements and how informing users through labeling may be an effective way to manage cybersecurity risks.”  (Id. at 18-19)  The Draft Guidance then lists 14 separate factors that it recommends for inclusion in the labeling.  We paraphrase them below not because we expect you to study them, but so that you can get a sense of how exacting these recommendations could be.  Here goes:

  • Device instructions and product specifications related to recommended cybersecurity controls appropriate for the intended use environment;
  • A description of the device features that protect critical functionality;
  • A description of backup and restore features;
  • Specific guidance to users regarding supporting infrastructure requirements;
  • A description of how the device is or can be hardened using secure configuration;
  • A list of network ports and other interfaces that are expected to receive and/or send data, and a description of port functionality and whether the ports are incoming or outgoing (a hypothetical example appears after this list);
  • A description of systematic procedures for authorized users to download version-identifiable software and firmware;
  • A description of how the design enables the device to announce when anomalous conditions are detected (e.g., security events);
  • A description of how forensic evidence is captured, including but not limited to any log files;
  • A description of the methods for retention and recovery of device configuration;
  • Sufficiently detailed system diagrams for end users;
  • “A CBOM including but not limited to a list of commercial, open source, and off-the-shelf software and hardware components to enable device users . . . to effectively manage their assets, to understand the potential impact of identified vulnerabilities to the device (and the connected system), and to deploy countermeasures to maintain the device’s essential performance”;
  • Where appropriate, technical instructions to permit secure network deployment and servicing, and instructions on how to respond upon detection of a cybersecurity vulnerability or incident; and
  • Information, if known, concerning device cybersecurity end of support, i.e., the time when the manufacturer may no longer be able to reasonably provide software patches and updates.
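To give a sense of how granular some of these items could get, here is the purely hypothetical port-and-interface disclosure promised above; every port number, protocol, and function is invented.

```python
# Hypothetical network-interface disclosure for an imaginary infusion pump.
# All ports, protocols, directions, and functions are made up for illustration.
network_interfaces = [
    {"port": 443,  "protocol": "TCP/TLS",  "direction": "outgoing",
     "function": "uploads therapy logs to the hospital server"},
    {"port": 5683, "protocol": "UDP/CoAP", "direction": "incoming",
     "function": "receives dosing-library updates from the pharmacy system"},
    {"port": 80,   "protocol": "TCP/HTTP", "direction": "disabled",
     "function": "legacy maintenance interface, closed in current firmware"},
]
```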

We support providing adequate information to device users, and we doubly support taking medical device cybersecurity seriously.  These recommendations, however, raise several questions.  For one thing, who is the intended audience?  The learned intermediary doctrine in most every state holds that medical device warnings are for the prescribing physicians—and no one else.  Is this information to be written for physicians, or IT professionals, or even patients?  We don’t know.

We also wonder whether it is feasible, or even useful, to provide all this information.  Maybe it would be both, or maybe neither.  But we think it is fair to ask whether providing “sufficiently detailed system diagrams” and lists of “commercial, open source, and off-the-shelf software and hardware components” is the most helpful information for protecting patient health and safety.  Would a physician even know what a “CBOM” is?  We also wonder how the adequacy of this information would be judged.  Unlike medical risk information, this information is beyond what most physicians (the learned intermediaries) would readily appreciate.  In the so-far-extremely-unlikely event that a cybersecurity incident results in harm to a patient, will we have a new category of experts to depose?

To round it out, the Draft Guidance recommends including design documentation and risk management documentation that demonstrates device trustworthiness and the design’s connection to “threat models, clinical hazards, mitigations, and testing.”  (Id. at 21-22)

The above questions and more can be presented to the regulators as they consider the Draft Guidance and put it in final form.  Comments and suggestions are currently due sometime next March, although these deadlines tend to slip.  We look forward to seeing what people have to say.  Stay tuned.

Today we have another guest post by long-time friend of the blog, Dick Dean, and his colleague at Tucker Ellis, Mike Ruttinger.  Regular readers will recall that right after Sikkelee v. Precision Airmotive Corp., ___ F.3d ___, 2018 WL 5289702 (3d Cir. Oct. 25, 2018), was decided, we blogged about the aspect of that decision that we thought was most directly relevant to drug/device litigation – the court’s rejection of tort claims based on failure to make reports to government agencies.  We briefly mentioned the remainder of the Third Circuit’s decision (which was actually by far the lengthier discussion), but didn’t spend much time on it.  In this post, Dick and Mike rectify that oversight.  As always, our guest bloggers deserve 100% of the credit (and any blame) for their discussion.

**********

The Third Circuit is having a bad year on preemption.  Its decision in In re Fosamax Products Liab. Lit., 852 F.3d 268 (3rd Cir. 2017), in which it held that it is for juries and not judges to determine whether there is “clear evidence” sufficient to meet the Wyeth v. Levine, 555 U.S. 555 (2009), standard for preemption in a failure-to-warn case, was accepted for review by the Supreme Court and is widely expected to be reversed.  [Editorial note – Fosamax ended up tied for the worst decision of 2017.  We hope to be rid of it in 2018.]  And now the Circuit has injected needless confusion into the test for impossibility preemption set forth in Levine’s follow-up case, PLIVA, Inc. v. Mensing, 564 U.S. 604 (2011).  Mensing is familiar to many as the case that clarified the rule for determining when an impossible-to-resolve conflict between federal and state law preempts plaintiffs’ claims.  If the change the plaintiff seeks is one that requires prior approval or federal permission, then the claim is preempted.  Id. at 620 (“The question for ‘impossibility’ is whether the private party could independently do under federal law what state law requires of it.”) (emphasis added).  [Editorial note:  We call that the “independence principle.”]

The recent Third Circuit decision, Sikkelee v. Precision Airmotive Corp., No. 17-3006, 2018 WL 5289702, at *8 (3rd Cir. Oct. 25, 2018), is a wrongful-death case that originated from a 2005 airplane crash.  As the date suggests, it has been around for quite a while; this is the litigants’ second trip to the Third Circuit after a 2016 appeal [blogged about here] culminated in denial of a petition for certiorari and a remand for the Middle District of Pennsylvania to consider conflict preemption issues.  Specifically, the focus of Sikkelee became the design of the airplane carburetor.  The district court initially found the plaintiff’s defect claim to be barred by field preemption under the Federal Aviation Act because federal regulation of aviation is so extensive as to preempt the entire field of airplane design-related tort law.  Sikkelee v. Precision Airmotive Corp., 45 F. Supp. 3d 431 (M.D. Pa. 2014).  But the Third Circuit reversed that decision, suggesting that while field preemption does not apply, “the case law of the Supreme Court and our sister Circuits supports the application of traditional conflict preemption principles.”  Sikkelee v. Precision Airmotive Corp., 822 F.3d 680, 699 (3rd Cir. 2016).  Accordingly, the Third Circuit remanded with an opinion directing that the district court should consider the conflict preemption principles set forth in Mensing.

Given the Third Circuit’s lengthy discussion of Mensing in its 2016 opinion, what came next was quite a surprise.  The panel that issued the 2016 decision acknowledged at length the roles that Mensing and a subsequent decision, Mutual Pharmaceutical Co. v. Bartlett, 570 U.S. 472 (2013), play in the conflict preemption analysis.  The court even homed in on the FAA’s “preapproval process for aircraft component part designs” as a key factor for the district court to consider in any conflict preemption analysis because the FAA would need to preapprove the alternate design that the plaintiff alleged as the basis for her lawsuit.  Sikkelee, 822 F.3d at 708 (“Thus, the reasoning of the Bartlett majority, 133 S. Ct. at 2473, 2480, and the consideration we must give to the FAA’s views under separation of powers principles, see Wyeth, 555 U.S. at 576-77, 129 S. Ct. 1187, lead us to conclude that the FAA’s preapproval process for aircraft component part designs must be accorded due weight under a conflict preemption analysis.”).  On remand, the district court followed the Third Circuit’s suggestion and found that the design-defect claim regarding the carburetor was indeed preempted because federal regulations required prior approval of the suggested design change.  Sikkelee v. AVCO Corporation, 268 F. Supp. 3d 660 (M.D. Pa. 2017).  But on October 25, that decision was reversed on appeal by a completely different panel of Third Circuit judges.  The new panel found no conflict preemption, applying the “clear evidence” test from Wyeth v. Levine rather than the Mensing prior approval test.  Specifically, the panel reasoned that “the nature of FAA regulations and Lycoming’s interactions with the FAA—including the changes it has made to its type certificate—demonstrate that Lycoming could have—indeed it had—adjusted its design.”  Sikkelee, 2018 WL 5289702 at *8.  For the defendant “to be entitled to an impossibility-preemption defense,” the court reasoned, “it must present ‘clear evidence that the [FAA] would not have approved a change.’”  Id.  Because it found evidence that the FAA would have permitted the change, the court held conflict preemption inapplicable.

The contrast between the Third Circuit’s two Sikkelee decisions is made only starker by the dissenting opinion filed by Judge Roth.  From the outset, Judge Roth notes that the majority erred by taking “a piecemeal approach to the Supreme Court’s impossibility preemption precedents.”  Id. at *13.  Put simply, Wyeth v. Levine cannot be read in a vacuum; for the Supreme Court’s trilogy of conflict preemption cases—Levine, Mensing, and Bartlett—to make sense, they must be read together.  This is not a novel position, but one spelled out by a wide variety of courts over the last five years.  Among the many to do so are Yates v. Ortho-McNeil-Janssen Pharmaceutical, Inc., 808 F.3d 281 (6th Cir. 2015), In re Celexa and Lexapro Marketing and Sales Practices Litigation, 779 F.3d 34 (1st Cir. 2015) (reading Wyeth and Mensing in combination), Utts v. Bristol-Myers Squibb Co., 226 F. Supp. 3d 166, 178-83 (S.D.N.Y. 2016), and—yes—the Third Circuit’s first Sikkelee decision, Sikkelee, 822 F.3d at 702-03 (reading Levine, Mensing, and Bartlett together to spell out different preemption rules for claims based on different regulatory scenarios).  The point, Judge Roth explained after reviewing the three decisions, is that:

When a manufacturer operating in a federally regulated industry has a means of altering its product independently and without prior agency approval . . . state-law claims against the manufacturer alleging a tortious failure to make those alterations ordinarily are not preempted; but, when federal regulations prohibit a manufacturer from altering its product without prior agency approval, state-law claims imposing a duty to make a different, safer product are preempted.

Sikkelee, 2018 WL 5289702 at *13.

Put another way, the fact that a defendant may have made changes in the past which were approved does not negate the fact that it still had to ask a federal agency for permission to make a change.  The fact that a party has to ask is dispositive, as the Supreme Court clarified in Mensing when it held that “[t]he question for ‘impossibility’ is whether the private party could independently do under federal law what state law requires of it.”  564 U.S. at 620.  Levine was simply the wrong framework because in Levine, it was undisputed that the brand drug-manufacturer could unilaterally do what the plaintiff alleged state law required.

In contrast with Levine, on which the majority relied, it was undisputed in Sikkelee that the design change would require prior approval.  The first Third Circuit panel expressly held that “the type certification process results in the FAA’s preapproval of particular specifications from which a manufacturer may not normally deviate without violating federal law.”  Sikkelee, 822 F.3d at 702.  The majority ignored that conclusion entirely.  Indeed, though parsing several of the applicable FAA regulations, the majority never addressed whether those regulations require prior approval of the requested change.  Instead, it short-circuited the inquiry by merely concluding that since the changes had been made subsequent to the accident, the prior approval element was meaningless.  Judge Roth, in dissent, was the only member of the panel to address the prior-approval issue and came to the same conclusion as the first Sikkelee panel—that prior approval was a necessary predicate to the design change.  And since prior approval was required, the case was more like Mensing than Levine, leaving no need to apply the clear-evidence standard.

The trilogy of Levine, Mensing, and Bartlett lay out a clear rule for conflict preemption—the same one summarized by Judge Roth.  If en banc review does not cure the Sikkelee opinion, the Third Circuit may find that Fosamax is not the last preemption decision it sends to the Supreme Court for review.

A couple of years ago we penned a paean to Indiana and its cultural and legal triumphs. Now that another chunk of our family has decided to relocate to that happy state, our thoughts returned to Indiana’s many virtues. Sure, there’s the Indy 500, the fabulous covered bridges of Parke County, the Benjamin Harrison home, and a couple of our favorite in-house lawyers. And now there’s In re Cook Medical, Inc., IVC Filters Mktng., Sales Practices & Prod. Liab. Lit., 2018 U.S. Dist. LEXIS 190177 (S.D. Ind. Nov. 7, 2018).

Maybe plaintiff-files-Daubert-motion isn’t quite man-bites-dog, but it’s still pretty rare. Plaintiffs are usually all about getting to the jury, no matter how raggedy the case. In fact, the more raggedy, the better. Consequently, plaintiffs devote considerably more time fending off Daubert challenges than mounting their own. Maybe there’s a reason for that. Maybe plaintiffs tend to put up hack experts, while defendants put up good ones. Maybe we’re biased. Okay, definitely we’re biased. But take a look at what happened in In re Cook.

The defendants in Cook offered the testimony of a mechanical and materials science engineer who opined that the IVC filter design was not defective and that its benefits outweighed its risks. The expert was well qualified. It’s not as if it was a close call. The defense expert had the appropriate degrees from Cal Berkeley. He also had been a general manager at a company that made IVC filters. Federal Rule of Evidence 702 requires that an expert be qualified by knowledge, skill, experience, training, or education. Note the “or.” This expert had it all. Not only was this expert qualified, he had done the work. He looked at MAUDE adverse event data, peer-reviewed literature, the company’s testing records, the design and engineering records, the opinions of other experts in the case, and fact depositions. That is, the defense expert in Cook did far more homework than virtually any plaintiff design expert we have encountered. We’re not sure we’ve ever deposed a plaintiff design expert who has actually read the design history file. Indeed, we’re pretty sure that most plaintiff experts do not know what a design history file is.

The plaintiff’s main beef with the defense design expert in In re Cook concerns the opinions regarding the device’s benefits and risks. The main benefit of an IVC filter is prevention of pulmonary embolisms. How can a mere engineer opine on medical issues? (Dear Engineering Nerds: Please do not write angry comments; we are using the “mere” word sarcastically. We have endless respect for engineers. We utter a prayer of thanks to them every time we drive across the Benjamin Franklin Bridge. At parties, we always get next to the engineers in case a game of Jenga breaks out.) The court has no problem answering this question: “a biomedical engineer … can testify about the benefits and ability of the Celect IVC filter to catch blood clots from a biomedical design and engineering perspective.” The plaintiffs were asking the wrong question. No surprise there.

Then the plaintiffs raised other wrong questions: (1) Why doesn’t the expert quantify the number of filters that actually prevented pulmonary embolisms? (2) Why does the engineer rely on adverse event data without knowing what percent of adverse events are reported? (3) How dare the expert rely on the defendant’s own studies? The Cook court is untroubled by these wrong questions, and supplies clear, easy, right answers: (1) Quantification goes to weight, not admissibility. (2) No one knows the true adverse event reporting rate, so it’s hard to fault the expert. Also, and again, this criticism might go to weight, but not admissibility. (3) The company’s data might not be perfect, but it looks like valid evidence. The data’s short-comings constitute yet another issue of weight, not admissibility. Finally, the expert relied on lots of other data besides the company’s. In short, tell it to the jury.

We’re still in favor of federal judges acting as stout gate-keepers when it comes to expert testimony. Junk science should be excluded. But when an expert is so well qualified and so well informed as the one in Cook, and when that expert applies reliable methods, there’s no reason to exclude anything. Rather, it’s time for Hoosier hospitality.

When it comes to medical device preemption, having Pre-Market Approval (“PMA”) is like being dealt pocket aces in Texas Hold’Em Poker.  It’s the strongest starting hand you can have – a 4:1 favorite over any other two-card combo.  It means you’re starting in the power position.  Since the Supreme Court’s decision in Riegel v. Medtronic, Inc., 552 U.S. 312 (2008), manufacturers of PMA medical devices are in the power position in products liability litigation.  Very little slips by the double-edged sword of express and implied preemption in PMA cases.  The same can, and should, be said for IDE cases as well.  And that’s what the Kentucky Court of Appeals said in Russell v. Johnson & Johnson, — S.W.3d —, 2018 WL 5851101 (Ky. Ct. App. Nov. 9, 2018).

Defendant manufactures medical catheters.  The catheter at issue was approved by the FDA via the PMA process in 2004.  Id. at *1.  In 2015, the FDA approved use of the catheter under the Investigational Device Exemption (“IDE”) to the MDA, which allowed the catheter to be used in a clinical trial to evaluate its safety in certain cardiac ablation procedures.  Plaintiff underwent a cardiac ablation procedure as part of the clinical trial in which defendant’s catheter was used.  Id.  After plaintiff’s procedure, the catheter did receive full pre-market approval.  Id. at *4.

Plaintiff suffered complications during the procedure and subsequently filed suit alleging defendant was liable for strict liability, negligence, lack of informed consent, failure to warn, breach of warranties, fraud, and unjust enrichment.  Id. at *2.  Defendant moved to dismiss all claims on the grounds of preemption and the trial court, relying on Riegel, granted the motion.  Id.  Plaintiff later asked the court to set aside its ruling based on defendant’s voluntary recall of other catheters, but not the one used on plaintiff.  The court denied that motion.  Plaintiff appealed both rulings.

Not surprisingly, plaintiff’s primary argument was that the court should discount Riegel because at the time of plaintiff’s surgery, the device had not yet received pre-market approval.  Id. at *4.  But the court found the argument contradicted by the numerous courts that have considered the issue.  Some courts find the timing of the grant of PMA to be immaterial, id., while others find IDE approval to be synonymous with PMA.  Id.  This certainly follows the logic of Riegel.  Riegel adopted a two-step test for preemption, and the first step is whether the FDA has established requirements applicable to the device.  Riegel concludes that a PMA does in fact establish such requirements.  Well, so does an IDE.

[b]ecause IDE devices are subject to a level of FDA oversight and control that is, for the purpose of a preemption analysis, identical to that governing PMA devices, the body of preemption law governing PMA devices applies equally to the IDE device at issue in this case.

Id. (citing Martin v. Telectronics Pacing Sys., Inc., 105 F.3d 1090 (6th Cir. 1997)).

Thwarted by authorities from other jurisdictions on the issue, plaintiff next urged the court to rely on a Kentucky Supreme Court case decided before Riegel, Niehoff v. Surgidev Corp., 950 S.W.2d 816 (Ky. 1997).  Id.  Niehoff rejected preemption in an IDE case relying on Medtronic, Inc. v. Lohr, 518 U.S. 470 (1996).  But as we all know, Lohr dealt with a device approved via the §510k “substantial equivalence” process.  As pointed out above, the IDE process is more analogous to the PMA process and therefore, in a post-Riegel world, Riegel is controlling.  In Niehoff, the manufacturer also stopped the clinical trial before the FDA considered its PMA application.  Id.  In Russell, by contrast, the device was granted PMA just over one year after plaintiff’s procedure.  Id. at *5.

In deciding the preemption question in the current case, the court started its analysis with the clear-cut statement that “there is no presumption against preemption” in an express preemption case.  Id.  After checking that box, the court looked at the device at issue and concluded that “approval after being subject to both the IDE and PMA processes, satisfies the first prong of Riegel.”  Id. at *6.  So, to survive preemption, plaintiff cannot be alleging a claim that is different or additional to FDA’s requirements regarding safety and effectiveness.  Id.  That means plaintiff’s complaint must allege three things: “violation of a federal requirement; violation of an identical state violation; and a link between the federal violation and [plaintiff’s] injury.”  Id.  Plaintiff went 0 for 3.

The court could find no allegations of federal violations, or even a cite to a federal regulation.  No factual support for any alleged violation.  No allegations that his injury was caused by a federal violation.  All plaintiff did was allege the device was defective – “in other words, the FDA should have imposed more stringent requirements – an attack precisely prohibited by the MDA.”  Id. at *7.

Failure to allege a parallel violation required dismissal of plaintiff’s strict liability, negligence, failure to warn, and fraud claims.  Id. at *7, *8.  Plaintiff’s informed consent claim failed because plaintiff signed a detailed consent form that was approved by the FDA.  Any claim that the consent was inadequate would impose a different or additional requirement on the defendant.  Id. at *7.  Claims that the device breached warranties regarding safety and effectiveness “directly contradict the FDA’s conclusion that the catheter was safe and effective.”  Id. at *8.  As would an unjust enrichment claim premised on a claim that plaintiff did not receive safe and effective medical care.  Id.  Finally, plaintiff failed to allege a parallel federal statute to the Kentucky Consumer Protection Act.  Id.  So, all of the claims were properly dismissed as preempted.  The appellate court also upheld the trial court’s decision that any attempt at amendment would be futile.  “Additional time would not have transformed [plaintiff’s] claims into parallel state claims.”  Id.

As for the motion to set aside the dismissal based on the recall, the court again upheld the trial court’s decision.  A final judgment can be set aside based on newly discovered evidence that could not have been learned via due diligence in time for a new trial.  Id. at *9.  But new evidence does not include events that occur after entry of a final judgment – such as the recall here.  Id.  Moreover, the new evidence needs to be relevant.  The recall was of different catheters, not the one used in plaintiff’s procedure.  Id.  Next, the voluntary recall “negated neither federal preemption nor FDA approval.”  Id.  The FDA was aware of adverse events and of the recall, but did not withdraw its approval of the device.  And a recall does not create a presumption that FDA regulations have been violated.  A recall doesn’t turn a “preempted claim into a parallel cause of action.”  Id.

No doubt defendant had pocket aces going into this appeal, but Jim Murdica and Kara Kapke from Barnes & Thornburg and Lori Hammond from Frost Brown Todd deserve a shout out for knowing when to go all in!

We’ve blogged several times about the Biomaterials Access Assurance Act of 1998, 21 U.S.C. §§1601-06.  In a nutshell, the BAAA provides suppliers of “raw materials and component parts” used in the manufacture of medical devices with a “Get Out of Litigation Free” card in most situations.  It allows manufacturers of “biomaterials” – defined as “a manufactured piece of an implant” or a “substance” that “has a generic use” and “may be used in an application other than an implant” – to remove themselves from product liability litigation before being forced to engage in expensive and time consuming discovery.  See 21 U.S.C. §1602(3, 8) (defining “raw material” and “component part”).

However, the BAAA is now twenty years old, and in light of the rapid technological advancement in the medical device field, could use some updating for the twenty-first century.

What Congress was trying to ensure in enacting the BAAA was that manufacturers of “raw materials and component parts [that] also are used in a variety of nonmedical products” remain willing to supply manufacturers of medical devices by removing the threat of litigation over the small quantity of those materials used by medical device manufacturers to make FDA-regulated products:

(5) because small quantities of the raw materials and component parts are used for medical devices, sales of raw materials and component parts for medical devices constitute an extremely small portion of the overall market for the raw materials and component parts;

(6) under the [FDCA] manufacturers of medical devices are required to demonstrate that the medical devices are safe and effective, including demonstrating that the products are properly designed and have adequate warnings or instructions;

(7) notwithstanding the fact that raw materials and component parts suppliers do not design, produce, or test a final medical device, the suppliers have been the subject of actions alleging inadequate–

(A) design and testing of medical devices manufactured with materials or parts supplied by the suppliers; or

(B) warnings related to the use of such medical devices;

(8) even though suppliers of raw materials and component parts have very rarely been held liable in such actions, such suppliers have ceased supplying certain raw materials and component parts for use in medical devices for a number of reasons, including concerns about the costs of such litigation;

(9) unless alternate sources of supply can be found, the unavailability of raw materials and component parts for medical devices will lead to unavailability of lifesaving and life-enhancing medical devices. . . .

Id. §1601(5)-(9).

Back in 1998, few if any medical devices utilized computer software, and cloud computing did not exist.  Further, to the extent that software was used in medical devices twenty years ago, it was not “agnostic,” but rather invariably custom made for the particular device – in stark contrast with the widespread use of interoperable imaging and 3D printing technology (to name two) in current medical devices.  Not surprisingly, the protection of the 1998 BAAA was limited to physical materials.  “Component parts” are “manufactured piece[s].”  Id. §1602(3)(a).  A “raw material” is “a substance or product.”  Id. §1602(8).

Thus, our proposal here is simple.  In order for the BAAA to provide the scope of protection for suppliers of database agnostic software and platform agnostic storage capacity (and other types of interoperable computer code that we mere bloggers don’t even comprehend – blockchain, anyone?) as Congress intended, the BAAA needs to be amended to include manufacturers of a third category of medical device-related inputs, “electronic applications” (or something like that), within its protections.  To that end we propose the following amendment, in the nature of an addition, to §1602.  First, to add a new definition:

(3A) Electronic applications

The term “electronic applications” means electronic software, data storage and other interoperable processing of electronic data used in connection with a medical device that has significant non-device-related uses.

Next, we advocate inclusion of “electronic applications” in the definition of “biomaterials supplier” provided in §1602(1).

These additions may require some tweaks elsewhere in the BAAA, but the gist of our proposal should be clear.  In the twenty-first century, the ability of a medical device manufacturer to incorporate multi-use “agnostic” electronic programming into its devices will be at least as important as access to specialized plastics and alloys was in the twentieth century.  To ensure that vendors of such data processing software, cloud storage, and other interoperable electronic coding will continue to deal with medical device manufacturers without fear of (or prohibitive price premiums for) involvement in protracted, multi-district mass tort litigation, the BAAA needs to be amended to include electronic software as twenty-first century biomaterials.

Kudos to the multifirm defense counsel team that brought home the decision on which we report today, a victory that may well end up on our “best” list for 2018.

In April 2017, we posted about Dr. Mahyar Etminan, then an expert in the Mirena MDL pending in the Southern District of New York.  Plaintiffs in the MDL claimed that the defendant’s product, an intrauterine contraceptive device containing the synthetic hormone levonorgestrel (“LNG”), caused them to develop idiopathic intracranial hypertension (“IIH”), also known as pseudotumor cerebri, a rare and potentially serious condition marked by increased cerebrospinal fluid pressure in the skull.  In 2015, Etminan had published a study designed to assess the risk of IIH.  Although the study did not definitively conclude that defendant’s product caused IIH, Etminan concluded that one of the two analyses, a “disproportionality analysis” of adverse events in the FDA’s FAERS database, identified an increased risk of IIH associated with LNG and that this result was statistically significant.  Etminan concluded that the results of the second analysis, a retrospective cohort study, did not find an increased risk but that this result was not statistically significant.  No other study has ever established a causal link between LNG and IIH.

Subsequently, a prominent scientist in the field attacked the methodology of Etminan’s disproportionality analysis because the study failed to control for age and gender, resulting in erroneous and misleading conclusions.  At the same time, it was revealed that Dr. Etminan was on the plaintiffs’ payroll at the time that he published his study, a conflict of interest he had not disclosed.  Ultimately, after defendants served Dr. Etminan with a notice of deposition in one of the cases in the MDL, Dr. Etminan repudiated much of his study’s analysis and withdrew as an expert.   When we reported this, we told you to “stay tuned,” commenting that plaintiffs’ other experts, all of whom relied on Etminan’s results, had not withdrawn.
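For readers curious about the mechanics, a disproportionality analysis boils down to comparing how often an event is reported with the drug of interest versus with everything else in the database, commonly expressed as a reporting odds ratio.  The toy calculation below is our own illustration with made-up numbers – it has nothing to do with the Etminan study’s actual data or code – but it shows how a crude “signal” can shrink once the comparison is restricted to a relevant stratum, which is why failing to control for age and gender matters.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting odds ratio (ROR) from a 2x2 table of spontaneous reports.

    a: reports of the event of interest with the drug of interest
    b: reports of other events with the drug of interest
    c: reports of the event of interest with all other drugs
    d: reports of other events with all other drugs
    """
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(ROR)
    ci = (math.exp(math.log(ror) - 1.96 * se),
          math.exp(math.log(ror) + 1.96 * se))
    return ror, ci

# Hypothetical crude counts: an apparent ten-fold signal . . .
print(reporting_odds_ratio(a=30, b=970, c=300, d=99700))

# . . . that nearly vanishes when the comparator reports are limited to the
# stratum (say, women of reproductive age) that actually uses the product,
# because the background reporting rate of the event changes.  Also hypothetical:
print(reporting_odds_ratio(a=30, b=970, c=290, d=9710))
```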

The other shoe dropped a couple of weeks ago.  In In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig., 2018 WL 5276431 (S.D.N.Y. Oct. 24, 2018), the court considered the defendants’ Daubert motions to exclude the plaintiffs’ seven remaining general causation experts.  And it granted them all.  The opinion is very long – seventy-two pages on Westlaw – and we commend it to your weekend reading, as we can’t begin to do justice to the court’s detailed analysis of each expert’s methodology.  But we wanted to bring this terrific decision to your attention and to focus on its most important takeaways.

The court began its analysis by emphasizing, “In the face of [the] historical record, with no medical organization or regulator or peer-reviewed scientific literature having found that Mirena or any contraceptive product using LNG is a cause of IIH, an expert witness who would so opine . . . necessarily would break new ground in this litigation.”  Mirena, 2018 WL 5276431 at *20.  All seven of the plaintiffs’ general causation experts “so opined.”  Four of these experts “arrived at this result largely by drawing upon existing sources.”  These included varying combinations of case reports regarding Mirena, case reports regarding other contraceptive products containing LNG, another product’s warning label, the repudiated portions of the Etminan study, and another study (the “Valenzuela study”) that reported a statistically significant association between LNG-containing devices and IIH but which, the authors emphasized, found only a correlation, not a causal link.  The remaining three experts were “mechanism” experts, each of whom postulated a supposed mechanism by which the defendant’s product could cause IIH.   In this post, we will focus on two of the experts in the first group, which included an epidemiologist, a toxicologist, an OB/GYN, and an ophthalmologist, but we urge you to read the court’s dissection of the second group as well.

The plaintiffs’ epidemiology expert was a professor of biostatistics with experience in conducting and analyzing large clinical trials.  He claimed that the nine Bradford Hill criteria supported his causation conclusion.  As many of you know, the criteria are “metrics that epidemiologists use to distinguish a causal connection from a mere association.” Id. at *23 (citation omitted).  They are:  statistical association (also known as “strength of association”), temporality, biological plausibility, coherence, dose-response effect, consistency, analogy, experimental evidence, and specificity.

The court first held that the epidemiologist’s opinion did not satisfy any of Daubert’s four reliability factors, because the expert “has not tested his theory.  He has not subjected it to peer review or had it published.   He has not identified an error rate for his application of the nine Bradford Hill factors. . . . And [his theory] has not been generally accepted by the scientific community.”  Id. at *27 (internal punctuation and citation omitted).  With respect to this last, the court again emphasized, “Outside of this litigation, there is a complete absence of scholarship opining that Mirena, or, for that matter, any LNG-based contraceptive, is a cause of IIH.”   Id.  As such, the court undertook to “take a hard look” at the expert’s methodology, scrutiny that was “particularly warranted” because:

 [I]t is imperative that experts who apply multi-criteria methodologies such as Bradford Hill . . . rigorously explain how they have weighted the criteria.  Otherwise, such methodologies are virtually standardless and their applications to a particular problem can prove unacceptably manipulable.  Rather than advancing the search for truth, these flexible methodologies may serve as vehicles to support a desired conclusion.

Id. (citations omitted).  Citing four examples of how the expert’s assessment of individual Bradford Hill factors “depart[ed] repeatedly from reliable methodology,” the court held, “Measured against these standards, [the epidemiologist’s] report falls short.”  Id. at *28-29.

First, the expert used the “analogy” factor, basing his causation conclusion in part on an analogy to another contraceptive product.  But, the court explained, this analogy was based on an “unestablished hypothesis” about the other contraceptive product, for which a causal relationship with IIH had never been substantiated.  Id. at *29.  With regard to the “specificity” factor, the court explained that the factor “inquires into the number of causes of a disease,” id., with the difficulty of demonstrating a causal association escalating along with the number of possible alternative causes.   “In finding the specificity factor satisfied,” the expert “devote[d] two sentences to his discussion.”  Id.   He relied on a conclusory statement to the effect that alternative causes could be ruled out.   And he relied on the Valenzuela study, which had actually disclaimed a finding of causation.   The court explained that the “consistency” factor required “similar findings generated by several epidemiological studies involving various investigators” reaching the same conclusion.  Id. at *30.    Again, the epidemiologist claimed that the Valenzuela study satisfied this criterion because it considered two separate populations.  But, as the court stated, both studies were conducted by the same investigators, and neither found a causal relationship.  Finally, as to the biological plausibility factor, the epidemiologist postulated a biological mechanism by which he said LNG could cause IIH.  The court stated, “ . . . [B]y any measure, [the expert] is unqualified to give an expert opinion as to a biological mechanism of causation of IIH.”   Id. at *30.   This lack of qualifications compromised the expert’s assessment of the biological plausibility factor as well as of related factors.   The court concluded,

Each of [the expert’s] departures from settled and rigorous methodology favors the same outcome.  Each enables him to find that the Bradford Hill factor at issue support[s] concluding that Mirena is a cause of IIH. . . . [His] unidirectional misapplication of a series of Bradford Hill factors is concerning – it is a red flag.  Rather than suggesting a scholar’s considered neutral engagement with the general causation question at hand, it suggests motivated, result-driven, reasoning. . . . Methodology aimed at achieving one result is unreliable.

Id. (internal punctuation and citation omitted).  The court went on to further eviscerate the epidemiologist’s methodology, criticizing his reliance on the Valenzuela study, his nearly exclusive use of case reports to support three of nine Bradford Hill factors, his failure to consider evidence that undercut his opinions, and his cherry-picking of case reports that supported his desired conclusion.  The court concluded that the expert’s testimony was “compromised by a range of serious methodological flaws,” and failed to satisfy Daubert’s reliability standard.

The court voiced similar criticisms of the methodology of the plaintiffs’ toxicology expert.  Like the epidemiologist, the toxicologist failed to meet any of the four Daubert reliability standards.  In applying the Bradford Hill factors, she failed to identify support for her conclusions, distorted or disregarded evidence that undercut her opinions, failed to articulate a plausible biological mechanism to support her causation conclusion, and drew an inapposite analogy to another contraceptive product.  And her opinions were plagued by additional methodological flaws.  She relied on the portion of the Etminan study that was discredited and that Etminan himself repudiated.  And she cited the Valenzuela study as her sole support for finding several Bradford Hill criteria satisfied without acknowledging the study’s methodological limitations and failure to find causation.  The court concluded, “[The toxicologist’s] proposed testimony is beset by methodological deficiencies.  It falls far short of satisfying Daubert’s standard of reliability.  Her testimony, too, must be excluded.”  Id. at *40.

And so it went with the court’s discussion of the rest of the plaintiffs’ experts.   The opinion does the best job we’ve ever seen of demonstrating how an expert can attempt to create the illusion of reliability by paying lip service to the Bradford Hill criteria and how those criteria can be manipulated to mask wholly result-driven ipse dixit opinions plagued by fatal methodological flaws.   In this case, a committed and rigorous judge stemmed the tide.  But we all know that this is not always the case.

We love this decision.  There is a lot more to say about it, and we look forward to telling you more in an upcoming post.

The 21st Century Cures Act is noteworthy as the first legislative attempt at regulating artificial intelligence (“AI”) in the medical field. The Act added this provision to the FDCA:

(o) Regulation of medical and certain decisions support software: (1) The term device . . . shall not include a software function that is intended

*          *          *          *

(E) unless the function is intended . . . for the purpose of −

*          *          *          *

(ii) supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and

(iii) enabling such health care professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.

21 U.S.C. §360j(o)(1)(E). Note:  This same provision is also called “FDCA §520” – by those with an interest (either financial or regulatory) in keeping this area as arcane as possible.

The FDA has also responded with “draft guidance” (also an arcane term – meaning “we can change our minds about this at any time, and you can’t sue us”) about what the Agency considers to be a regulated “device” after the 21st Century Cures Act. “However, software functions that analyze or interpret medical device data in addition to transferring, storing, converting formats, or displaying clinical laboratory test or other device data and results remain subject to FDA’s regulatory oversight.” Id. at 12. Thus, the FDA now also has a definition of “artificial intelligence”:

A device or product that can imitate intelligent behavior or mimics human learning and reasoning. Artificial intelligence includes machine learning, neural networks, and natural language processing. Some terms used to describe artificial intelligence include: computer-aided detection/diagnosis, statistical learning, deep learning, or smart algorithms.

Emphasis added by us throughout.

We have emphasized the selected text because it identifies the underlying tension being created as AI enters the medical field.  What’s going to happen – indeed, what is already happening in areas such as the analysis of medical images like x-rays and MRIs – is that AI is going to generate diagnoses (such as tumor, or no tumor) and treatment output for physicians (so-called “computer-aided detection/diagnosis”) in numerous and expanding areas of medical practice.

The AI rubber is really going to hit the road when these “functions that analyze or interpret medical device data” begin to “provid[e] recommendations to a health care professional,” that said professional can no longer “independently review,” so that our health care providers will find it necessary to “rely primarily on . . . such recommendations.”  To put it bluntly, at some point in the not-too-distant future, AI will be able to diagnose a disease and to propose how to treat it as well as or better than human physicians.  Moreover, since AI means that the machines can “teach” themselves through experience, they will evolve into something of a “black box,” running processes that humans will no longer be able to follow or to analyze independently.
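To make the “black box” point concrete, here is a deliberately toy illustration – synthetic data, a bare-bones model, and no claim whatsoever about how real diagnostic AI is built.  The point is only that what the machine “learns” is a pile of numeric weights, not a chain of reasoning that a physician could independently review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image features" for 200 scans, plus a made-up tumor/no-tumor label.
X = rng.normal(size=(200, 50))
true_w = rng.normal(size=50)
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Train a bare-bones logistic regression classifier by gradient descent.
w = np.zeros(50)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))          # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)       # gradient step on the log-loss

# The "diagnosis" for a new scan is a probability produced by 50 learned
# weights.  Nothing in those numbers resembles a rationale that a clinician
# could second-guess the way a textbook diagnostic criterion can be.
new_scan = rng.normal(size=50)
print("model output (probability of 'tumor'):", 1 / (1 + np.exp(-(new_scan @ w))))
```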

Just as computers can now beat any and all humans at complex logic games such as chess and go, they will eventually be able to out-diagnose and out-treat any and all doctors.

What then?

Consider the tort system.  That’s what we do here on the Blog.

The diagnosis and treatment of disease have heretofore been considered the province of medical malpractice, with its traditions of the medical “standard of care” and its professional requirements of “informed consent.”  At the same time, the medical malpractice area is governed, in most jurisdictions, by a variety of “tort reform” statutes that do things such as impose damages caps, create medical screening panels, and require early expert opinions.

Already, without considering AI, such statutory restrictions lead medical malpractice plaintiffs, whenever possible, to reframe what should be medical malpractice cases as product liability claims.  Take Riegel v. Medtronic, Inc., 552 U.S. 312 (2008), for example.  In Riegel, the plaintiff was injured when his physician put the medical device (a balloon catheter) to a contraindicated use (in a heavily calcified artery).  Id. at 320.  What happens when you push balloons against hard things with sharp edges?  They go “pop.”  That’s what happened in Riegel.  Likewise, consider Wyeth v. Levine, 555 U.S. 555 (2009).  In Levine, the plaintiff was injured when a drug that was supposed to be injected intravenously was mistakenly injected into an artery instead.  Id. at 559-60.  Since the medical lobby has been much more successful in passing “tort reform” than have FDA-regulated product manufacturers, the plaintiffs have taken the path of least resistance, meaning product liability.

The same thing is sure to happen with AI – plaintiffs will attempt to impose what amounts to absolute liability on the manufacturers of AI-enhanced medical “devices.”  Just look at the experience with AI in the automotive field.  Every accident involving a self-driving car becomes a big deal, with the AI systems presumed to be somehow at fault.  However, the negligence standard, for both auto accidents and medical malpractice, has always been that of a “reasonable man.”  That’s the crux of what will be the struggle over AI – when machines take over the functions once performed by human operators (whether drivers or doctors), are they also to be judged by a “reasonable man” negligence standard?  Or will strict (really strict) liability be imposed?

Then there’s preemption.  Under the statutory provisions we quoted at the beginning of this post, the FDA will be regulating AI that makes diagnostic and treatment “recommendations” to physicians as “devices.”  Perhaps the regulatory lawyers will figure out how to pitch AI as “substantially equivalent” to one or more already-marketed predicate devices, but if they don’t, then AI medical devices will require pre-market approval as “new” devices.  AI seems so different from prior devices – as indicated by its special treatment in the 21st Century Cures Act – that we wouldn’t be at all surprised if PMA preemption turns out to be available to protect many, if not all, AI device manufacturers from state tort liability.

But putting preemption aside for the moment, the typical AI medical injury suit will involve one of two patterns – the AI equipment generates a diagnosis and proposal for treatment, and the patient’s attending physician either (1) follows the machine’s output, or (2) does not.  In either scenario, we assume some sort of injury.

If the physician went with the machine, then the plaintiff is going to pursue a product liability action targeting AI.  Given AI’s function, and the black box nature of its operation, in such cases it will be very tempting for the physician also to seek to avoid liability by, say, blaming the machine for a misdiagnosis due to unspecified errors in the algorithm.  If that happens, both the plaintiff and the defendant physician would presumably advocate some sort of “malfunction theory” of liability that would relieve the party asserting a “defect” of the usual product liability obligation to prove what that defect was.  Again, the black box nature of machine learning will force reliance on this type of theory.

So, are lawsuits targeting AI medical devices going to be allowed to proceed under strict liability?  In most jurisdictions, the “malfunction theory” is only available in strict liability.  There are two problems with strict liability in AI situations, which the law will have to sort out over the coming years.  First, computer software has not been considered a “product” for product liability purposes, under either the Second or Third Restatements of Torts.  Second, individualized medical diagnosis and treatment have always been considered a “service” rather than a “product,” which is the reason that doctors and hospitals have not previously been subjected to strict liability, even when part of their medical practice involves the prescription of drugs and/or the implanting of medical devices.  We discussed both of these issues in detail in our earlier post on AI.  So, while we can expect plaintiffs to assert strict liability in AI diagnosis and treatment cases, defendants have good grounds for relying on the negligence “reasonable man” standard.

In case two, where injury occurs after the physician elects not to follow the machine’s output, the plaintiff likely will not have a product liability action.  For one thing, once the results of AI are ignored, it’s pretty hard to argue causation.  The plaintiffs know that, so in this situation they will rely on the AI output to point a finger at the doctor.  Unlike in case one, there is unlikely to be much intra-defendant finger-pointing, first because any AI defendant will win on causation, and second because doctors will remain the AI manufacturers’ customers, and it is not good business to tick off one’s customers.

So in case two, the malpractice question will be whether it is the “standard of care” for a doctor (or other relevant health care provider) to follow an AI-generated diagnosis or treatment output, even when the doctor personally disagrees with that diagnosis/course of treatment based on his or her own independent medical judgment.  As posed, this question is close to the learned intermediary rule, only to some extent in reverse.  As with the learned intermediary rule, the basic proposition would be that the physician remains an independent actor, and that the job of the manufacturer (in this case, of AI equipment) is solely to provide the best possible information for the physician to evaluate and use as s/he sees fit.  Only, in this instance, the physician is the one protected by this rule, rather than the manufacturer.  The other alternative – forcing physicians to accept the output of AI as automatically being the medical “standard of care” – would effectively deprive physicians of professional independence, and we see little chance of the medical lobby allowing that alternative to be chosen.

Case two is thus a situation where “tort reform” could be relevant.  As AI catches on, and physicians become aware of the prospect of having their therapeutic decisions second-guessed by plaintiffs relying on AI outputs, we would not be surprised to see statutory proposals declaring AI results inadmissible in medical malpractice actions.  We think that AI manufacturers should view such efforts as opportunities, rather than threats.  An alliance between physician and AI interests could produce a comprehensive AI tort reform bill that all potential defendants could get behind:  a guarantee that (in case one) AI is judged by a negligence “reasonable man” standard, rather than strict liability, if and when preemption doesn’t apply, combined with an evidentiary exclusion provision (in case two).

Indeed, such legislation could be critically important in ensuring a successful future for AI in the medical space.  Even assuming that the FDA straightens out the regulatory side, the other side’s already existing impetus to impose strict liability on AI could hinder its acceptance or prohibitively raise the cost of acquiring and using AI technology.  States that wish to encourage the use of medical AI would be well advised to pass such statutes.  Alternatively, so might Congress, should product liability litigation hinder the use of this life- and (secondarily) cost-saving technology.

A federal judge in one of our non-drug or device cases recently informed the parties that he was so busy with his criminal docket that it might be better to let the magistrate judge take over our case, including trial. We fretted over the prospect of losing a judge who had thus far been very attentive, careful, and rigorous, but the issue was mooted when the plaintiff swiftly said “No thanks.”

Over the years, we have had many good experiences with magistrate judges. Indeed, our last trial in Philly was in front of a magistrate judge, and it would be hard to imagine how it could have been any more fair or efficient. The case we are discussing today, Lynch v. Olympus Am., Inc., 2018 U.S. Dist. LEXIS 185595 (D. Colo. Oct. 30, 2018), was handled by a magistrate judge pursuant to the parties’ consent under 28 U.S.C. §636(c). The plaintiff might now regret such consent, in light of the magistrate judge’s decision, but the clarity and logic of that decision are undeniable.

In Lynch, the plaintiff sued three companies for making endoscopes that were allegedly too hard to clean, thereby causing the plaintiff to suffer an infection and illness. One of the companies was a Japanese parent, and the other two were affiliates that did business in the United States. The Japanese parent moved to dismiss the complaint for want of personal jurisdiction. The affiliates moved to dismiss the strict liability and negligence claims on substantive grounds, and to dismiss the misrepresentation claims for failure to plead with specificity.

The magistrate judge’s opinion is exquisitely organized, so we’ll follow its outline faithfully.

    Personal Jurisdiction

The Lynch court considered whether the Japanese parent could be haled into Colorado based on the stream of commerce theory. That theory is not one of our favorites, and the Lynch court correctly described its fuzziness. SCOTUS has not exactly been very exacting in setting forth the contours of the stream of commerce theory. The Tenth Circuit has been better in erecting this key guardrail: “specific jurisdiction must be based on actions by the defendant and not on events that are the result of actions taken by someone else.” Monge v. RG Petro-Machinery (Group) Co., 701 F.3d 598, 618 (10th Cir. 2012). The Tenth Circuit has also been unambiguous that mere corporate affiliation is insufficient to impute one company’s contacts to its parent corporation. Accordingly, the Lynch court did not feel “free to disregard corporate formalities when assessing whether it may properly exercise personal jurisdiction over a defendant.” What we end up with is another successful example of using marketing subsidiaries to insulate the foreign parent from suit.

Are we done with personal jurisdiction? Not yet. The plaintiff tried to invoke non-registration-related consent by the parent corporation, based on things that happened in other cases. Nice try. Not really. The Lynch court reasonably pointed out that other cases are other cases, and what happened in those other cases has no bearing on or similarity with this case. “A consent in one case does not affect the propriety of a court’s exercise of personal jurisdiction in another case, even if related and even if in the same forum.” The court agreed that the plaintiff might be entitled to some jurisdictional discovery, but only if she could successfully plead the substantive claims. As we will see in a moment, that is not so clear.

    Design Defect

Beyond bare conclusions, the plaintiff did not manage to allege that the risks of the endoscopes outweigh their benefits for all patients. Potential problems with reuse do not affect all patients on whom new devices are used. “[M]erely being difficult to clean does not render the device unsafe for all patients.” Moreover, causation requires more than the fact that the plaintiff fell ill after encountering the product. The allegations were too conclusory to state a claim for strict liability under a design defect theory.

    Failure to Warn

Lynch is the first case applying Colorado law to extend the learned intermediary rule to a medical device. No prior Colorado court had explicitly done so, though there was at least one case that generally treated prescription drugs and medical devices together. The plaintiff in Lynch contended that the endoscope was a nonprescription device. Even if that contention was “technically correct,” it did not matter. “Clearly, patients are not buying [endoscopes] over the counter” to administer the procedure to themselves at home. It is necessarily a doctor who “decides the procedure is required and performs it.” To our mind, the Lynch court’s analysis makes good sense. There are a variety of medical devices that might not technically require a prescription, but there is still no doubt that a doctor is a necessary (and learned) intermediary. Think of scalpels. The doctor, not the patient, selects the particular scalpels employed in a procedure. Would it make any sense to permit personal injury plaintiffs to complain that the scalpel manufacturer supplied inadequate warnings? Scalpels are sharp, sure. But maybe the manufacturer should have warned that they are very, very sharp. Or maybe there is a material safety data sheet for one of the scalpel’s raw materials that might give the dense or paranoid pause. The mind reels. For now, let’s content ourselves with the Lynch court’s application of the learned intermediary rule to endoscopes, and the court’s conclusion that the plaintiff did “not allege the failure to warn as applied to her physician.”

The Lynch court made another ruling that is pertinent to many drug and device cases: is the failure to warn claim really about warning at all? What exactly is the warning, and what would be its effect? In Lynch, the allegedly missing warning seems to be nothing more than telling doctors that the product is defective. “Plaintiff’s claim is, in essence, that the product was defectively designed and therefore the warning was inadequate; as pled, the failure to warn claim cannot be distinguished from the design defect claim.” Thus, the warning claim boils down to, or circles back to, the design defect claim. And we know how that turned out.

    Negligence and Products Liability

The complaint contained “inadequate factual allegations establishing causation. Absent plausible allegations linking the Defendants’ actions to the Plaintiff’s harms, there is no plausible claim for relief.” Well, okay then.

    Misrepresentation

The plaintiff alleged both fraud and negligent misrepresentation, but the Lynch court correctly pointed out that there was no real difference between those claims. The negligent misrepresentation claim was replete with allegations of intentional conduct. The court concluded that both claims needed specificity and both claims lacked it. The acts of the various defendants were jumbled together, and specifics on who, what, where, and when were wholly absent.

*********************

The plaintiff will get a chance to try again. The magistrate judge was perhaps a bit charitable. But she was also smart and careful, so the plaintiff had better do a much better job on the next go-round.

Back in May, 3M won the first MDL bellwether trial in In re: Bair Hugger Forced Air Warming Devices Prods. Liab. Litig. (D. Minn.). The case was Gareis v. 3M Company, and at the time of trial, the only claim remaining in the case was for strict liability design defect under South Carolina law. 2018 WL 5307824 at *1 (D. Minn. Oct. 26, 2018). Plaintiff’s negligence, failure to warn, unfair and deceptive trade practices, misrepresentation, and unjust enrichment claims had all been dismissed on summary judgment. Following the defense verdict, plaintiff moved for a new trial based on dangerously thin grounds. The kind of grounds that simply crumble when even the slightest force is brought to bear. And that’s pretty much what happened. Plaintiff’s arguments just fell apart.

Plaintiff’s first argument was that the court improperly excluded evidence that would have helped him prove a design defect. Id. at *2. And the court’s first conclusion was that plaintiff “identif[ied] no prejudice” from the exclusion of the evidence. With no discussion of how the trial result would have differed had the evidence been admitted, plaintiff’s motion had to be denied. Id. But the court also went on to explain why the evidence was properly excluded.

Plaintiff wanted to admit evidence of defendant’s “knowledge of risk-utility.” Id. But, “the manufacturer’s mental state is not an element of a strict liability claim for design defect.” Id. Under South Carolina law, the focus in a strict liability claim “centers upon the alleged defectively designed product,” not the manufacturer’s conduct. Id. Since the evidence wouldn’t go to an element of plaintiff’s claim, it was irrelevant and inadmissible. Plaintiff also argued that certain alternative design evidence was improperly excluded, asserting only that he should not have been limited to alternative designs that “achieve the same function by the same mechanism.” Id. at *3. In other words, plaintiff wanted to introduce “alternatives” that were actually “different” products. That bare-bones assertion didn’t move the court to go back and revisit its already correct in limine ruling on the issue.

Plaintiff then moved on to evidence he argued was improperly admitted: testimony by defense experts that plaintiff alleged was not disclosed in the experts’ Rule 26 disclosures and therefore was inadmissible at trial. But if you’re going to argue surprise, it really needs to be a surprise. For example, plaintiff argued that defendant’s expert did not disclose that he intended to use a video of a study used to validate his experiment, and therefore plaintiff couldn’t effectively cross-examine the witness regarding the details of the experiment. Id. at *4. But plaintiff saw the video during the MDL Science Day, referenced it in his motion to exclude defendant’s expert, knew that it was available on defendant’s website, and even questioned the expert about it at his deposition. Id. Not really a sneak attack. The court found no violation of Rule 26’s disclosure requirements, but even if there had been, “the admission of the video was harmless.” Id. Plaintiff tried to make the same argument about the defense expert’s testimony concerning a study cited by plaintiff’s expert. While defendant’s expert didn’t cite the study, his report did opine on the substance of the study. Id. Again, hard to be surprised by a study your own expert relied on. There are a few other similar examples in the opinion.

Finally, plaintiff tried to argue that testimony that never happened should also have been excluded. Go ahead, you can re-read that sentence. It’s accurate. Evidence of FDA clearance of the device was excluded in limine. At trial, defense counsel started to ask a question about the FDA’s examination of the device to which plaintiff’s counsel immediately objected. The objection was sustained. The question was not answered. At sidebar, plaintiff asked that the testimony be stricken. The court denied the request because there was no testimony to strike. Plaintiff argued that the failure to strike testimony that didn’t exist was grounds for a new trial. The court didn’t agree. No new trial. Id. at *6. Bair Hugger score remains 1-0 defendant.