Bexis recently attended the Spring Conference of the Product Liability Advisory Council (“PLAC”).  PLAC meetings are usually good for new blogpost ideas, and this one was no exception.  Today’s idea comes from an unusual source, though – the final day’s ethics presentation.  That presentation was about artificial intelligence, mostly in the mass tort context.  One segment pointed out how the other side is using AI for, among other things, new plaintiff intake.  That prompted us to take a look at who’s advertising what to the other side.  We don’t particularly want to promote their business, but an on-line journal article we found stated:

AI tools can be used in mass tort management to streamline case intake by automating data collection from online forms, emails, and calls.  It can drastically reduce manual entry errors and time.  They analyze plaintiff information in real-time to assess eligibility and match individuals to appropriate tort categories.

We didn’t have to look far.  An ad masquerading as a blog promised that it “excels in lead screening” and could “pre-qualify callers by asking about drug usage, medical diagnoses, and side effects.” Another article promised “intake” that would “engage every caller, capture and analyze every detail, and prioritize the highest-value cases.”  We found advertisements pushing prospective plaintiff “identification and enrichment” that promised to create MDL-specific matrices and to populate them with “scored” data that included familiar information:  product use, qualifying injury, and statute of limitations.  Another promoted “AI-based automated intake forms” that could “restore client information and verify it with official databases at the same time as it detects missing documentation.”  Still another was offering AI “fillable PDFs” with “tools to help structure the data such as checkboxes and drop-down lists.”  These would collect information about would-be plaintiffs’ “general demographics,” “medical background,” “proof of use or exposure,” and “proof of the injury being alleged.”  We didn’t have to dig past the second page of Google results.

Maybe these plaintiff AI tools are as good as advertised, maybe not.  But the fundamental problem remains – mass tort incentives have never (at least prior to settlement) favored careful early vetting of plaintiffs.  Instead, the other side’s modus operandi has always been “the more, the merrier.”  They get names and file claims, and then they leave it to the defense to spend the time and effort to separate the good claims (if any) from those that would never have been filed in one-off litigation.  We have serious doubts that even the most efficient AI changes those incentives much.

However, plaintiffs’ use of AI plaintiff intake tools provides the defense with an opportunity to put new Fed. R. Civ. P. 16.1 to work.  Specifically, Rule 16.1(b) requires (unless an MDL court prohibits it) that the parties address “how and when” they “will exchange information about the factual bases for their claims” (emphasis original).  Note the use of “will,” rather than “whether,” in this section.  That means the exchange is mandatory (unlike most of the rest of Rule 16.1).  We should seek to obtain the factual (not privileged) plaintiff “information” that their AI has collected.  If these systems work as well as advertised, that should be easy for the other side to produce, and maybe the cost of vetting the bodies they dig up can be more equitably apportioned in MDLs.

But that’s not all.  Equally important is what “information” we might still have to collect because plaintiffs failed to utilize all the advertised bells and whistles that these AI systems claim to possess.  We’re not talking about particular AI prompts − especially not anything that may be subject to work product privilege.  What we are suggesting is determining whether the other side is affirmatively disabling their own AI systems so they can continue to file factually unsupported lawsuits.  That would seem to be a rather blatant violation of Rule 11, and we hope the other side does not do this.  However, their track record on vetting their own supposed clients is not very good.

Our side’s Rule 16.1(b) statements thus should make an effort to ensure that the other side is actually using the early vetting capacity of their own AI programs.  Given the “file and forget” incentives that the plaintiffs’ side has in mass torts, it would not surprise us at all if some plaintiff firms did not use much of the informational functionality of their own AI.  Knowing that would be useful, both in determining potential apportionment of discovery expenses for information plaintiffs could have collected, but did not, and in calling the other side’s credibility into question early in the MDL process.

So that is our take-home idea from PLAC.  Since the other side claims to be using sophisticated AI tools in mass tort plaintiff intake, we should ensure that they are actually using their available AI capacity to do what Rule 11(b)(3) supposedly required them to be doing all along:  pre-complaint verification that their clients’ “factual contentions have evidentiary support.”