Some of us are participating in beta testing of generative artificial intelligence (“AI”) for legal applications in the law firm environment. So far, the verdict is: associates can breathe easy, at least for now. Nothing we’ve seen is capable of replicating legal research even at a first-year level of quality.
But that doesn’t mean that AI won’t impact prescription medical product liability litigation. In particular, we’re not surprised to learn that AI is being used in the context of FDA-required adverse event reporting, purported problems with which have become one of the other side’s go-to preemption dodges. Just a few examples from a simple Google search:
Adverse event cases undergo clinical assessment. Case evaluation includes assessing the possibility of a causal relationship between the drug and adverse event, as well as assessing the outcome of the case. An AI model was developed based on relevant features used in causality assessments; it was trained, validated, and tested to classify cases by the probability of a causal relationship between the drug and adverse event. AI/ML has also been applied to determine seriousness of the outcome of ICSRs [Individual Case Safety Reports], which not only supports case evaluation, but also the timeliness of individual case submissions that require expedited reporting.
FDA, “Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products,” at 10 (2022). “We conclude that AI can usefully be applied to some aspects of ICSR processing and evaluation, but the performance of current AI algorithms requires a ‘human-in-the-loop’ to ensure good quality.” Ball & Dal Pan, “‘Artificial Intelligence’ for Pharmacovigilance: Ready for Prime Time?,” 45 Drug Safety 429, at abstract (2022).
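To make the FDA’s description concrete, here is a minimal sketch of what an ICSR causality classifier might look like, assuming a generic supervised-learning setup in Python with scikit-learn. The features, training data, and case values are our own inventions for illustration; neither FDA nor any sponsor has published an actual model.

# Hypothetical sketch only: the features and training data are invented
# for illustration; this is not any regulator's or sponsor's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented ICSR features a causality model might consume:
# [time_to_onset_days, positive_dechallenge, positive_rechallenge,
#  concomitant_drug_count, known_class_effect]
train_X = np.array([
    [2,   1, 1, 0, 1],
    [90,  0, 0, 4, 0],
    [5,   1, 0, 1, 1],
    [365, 0, 0, 6, 0],
])
train_y = np.array([1, 0, 1, 0])  # 1 = causal relationship assessed as probable

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(train_X, train_y)

# Scoring a new case yields a discrete output that the system stores
# at a point in time.
new_case = np.array([[3, 1, 1, 0, 1]])
p_causal = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of a causal relationship: {p_causal:.2f}")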
Early detection of ADRs and drug-induced toxicity is an essential indicator of a drug’s viability and safety profile. The introduction of artificial intelligence (AI) and machine learning (ML) approaches has resulted in a paradigm shift in the field of early ADR and toxicity detection. The application of these modern computational methods allows for the rapid, thorough, and precise prediction of probable ADRs and toxicity.
Yang & Kar, “Application of Artificial Intelligence & Machine Learning in Early Detection of Adverse Drug Reactions (ADRs) & Drug-Induced Toxicity,” 1 Artificial Intelligence Chemistry, at abstract (2023).
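Much of this “early detection” literature builds on classical disproportionality screening of spontaneous-report databases. Here is a minimal sketch of one such screen, the proportional reporting ratio; the statistic itself is standard, but the report counts below are invented, and this is not code from the Yang & Kar paper.

# Minimal sketch of a proportional reporting ratio (PRR), a classical
# disproportionality screen that underlies much ADR signal detection.
# The report counts below are invented for illustration.

def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 contingency table of spontaneous reports.

    a: reports of the drug of interest WITH the event
    b: reports of the drug of interest WITHOUT the event
    c: reports of all other drugs WITH the event
    d: reports of all other drugs WITHOUT the event
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 40 of 1,000 reports for the drug mention the event,
# versus 200 of 50,000 reports for all other drugs.
print(f"PRR = {prr(a=40, b=960, c=200, d=49_800):.1f}")
# A PRR above 2 is a common rule-of-thumb signal threshold.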
What these examples tell us, as litigators in MDLs and other mass torts, is that plaintiffs’ efforts to take “discovery” of AI algorithms employed in FDA-mandated adverse event reporting won’t be far behind. With AI in particular, however, there is a fine line between what has already been created and what the system can be made to create going forward. The key is to limit such discovery to what “discovery” is intended to be, as defined by Fed. R. Civ. P. 34. For electronic information, Rule 34(a)(1) allows a requesting party “to inspect, copy, test, or sample . . . electronically stored information” (emphasis added). Thus, requestors are limited to discovering “data . . . stored in any medium.” Id.
The 2006 Advisory Committee notes specify that “Rule 34 applies to information that is fixed in a tangible form and to information that is stored in a medium from which it can be retrieved and examined.” Other key language from the notes:
The addition of testing and sampling to Rule 34(a) with regard to documents and electronically stored information is not meant to create a routine right of direct access to a party’s electronic information system, although such access might be justified in some circumstances.
(Emphasis added).
We emphasize these points because we don’t want the other side to go beyond the access to “stored” information that Rule 34 allows and instead manipulate AI programs to create new outputs that, plaintiffs will contend, demonstrate hypothetical inaccuracies or shortcomings that may never have occurred in the real-world operation of such AI.
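For the technically inclined, the distinction can be made concrete in code. Retrieving what an AI system already generated and stored is a different operation from running the model against counsel-crafted hypothetical inputs. In this sketch the database schema, table, and function names are all invented for illustration:

# Sketch contrasting retrieval of stored outputs with generation of new
# ones. The schema, table, and function names are all invented.
import sqlite3

def retrieve_stored_reports(db_path: str, drug: str) -> list:
    """Reads outputs the AI system already generated and stored:
    the 'electronically stored information' that Rule 34 contemplates."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT case_id, causality_score, created_at "
            "FROM adverse_event_reports WHERE drug = ?",
            (drug,),
        ).fetchall()

def generate_new_output(model, hypothetical_case) -> float:
    """Runs the model on inputs counsel invents, creating a new
    output that did not exist when discovery was served."""
    return model.predict_proba([hypothetical_case])[0, 1]

The first function reads existing records; the second manufactures a brand-new output that exists only because the requestor asked for it.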
The legal proposition is simply this: “Plaintiff may not require Defendants to create evidence that does not currently exist.” Brown v. Clark, 2013 WL 1087499, at *5 (E.D. Cal. March 14, 2013). “Defendants have no obligation under the discovery rules to create evidence to support Plaintiff’s claims.” Warner v. Cate, 2016 WL 7210111, at *9 (E.D. Cal. Dec. 12, 2016).
While Plaintiff is entitled to seek relevant evidence from the Defendants in discovery and to file a motion to compel if necessary, Plaintiff may only seek evidence that already exists. The rules of discovery do not allow Plaintiff to compel Defendants to conduct an investigation to create evidence for Plaintiff.
Rider v. Yates, 2010 WL 503061, at *1 (E.D. Cal. Feb. 5, 2010). Parties “are not required to create evidence that does not currently exist in order to comply with their discovery obligations.” Bratton v. Shinette, 2018 WL 4929736, at *5 (E.D. Cal. Oct. 11, 2018). “If no such [evidence] exists, as [the producer] purports, [requestors] cannot rely on Rule 34 to require [them] to create a document meeting their request.” Abouelenein v. Kansas City Kansas Community College, 2020 WL 1124396, at *4 (D. Kan. March 6, 2020). A “[p]laintiff is not entitled to play-by-plays of ever-changing data.” Moriarty v. American General Life Insurance Co., 2021 WL 6197289, at *4 (S.D. Cal. Dec. 31, 2021).
That is what allowing plaintiffs to manipulate a defendant’s AI reporting system amounts to. They would be going beyond merely accessing “stored” information and instead would be demanding to make something new, such as a deliberately incomplete adverse event report, that did not exist when such “discovery” was sought. We need to anticipate plaintiffs attempting this interference with our clients’ AI systems, with adverse event reporting representing a particularly likely early pressure point.