December is both a festive and frantic month. Along with all the caroling, wassailing, and gift-buying, the last month of the year invariably sees us squeezing in continuing legal education (CLE) credits, reconnecting with old friends at the ACI drug and device conference in New York City, and wrapping up the Fall/Winter semester class we teach at Penn Law. This year, all three of those projects forced us to confront the increasingly busy intersection of artificial intelligence and the law.
First, there was a CLE webinar put on by Reed Smith’s vaunted e-discovery team. Like most lawyers planted in a large firm, we think of our own practice group (Life Sciences and Health Industry Litigation) as the crown jewel of the place. But as the breadth and complexity of our cases force us to collide with adjacent practice groups, we learn that there are plenty of other top-drawer specialists lurking in our offices. We recently worked with False Claims Act lawyers who have mastered a difficult area replete with traps for the unwary. We long ago found out that our Insurance Recovery Group sits comfortably at the top of the league tables. And our near-constant discovery battles have sent us again and again to seek guidance and assistance from our e-discovery folks, who are simply the best at what they do. They know all the latest e-discovery developments and are quick to come up with practical advice. They shake solutions out of their cuffs. They even put on CLE programs better than anyone else does.

In the most recent class, they staged a series of debates between lawyers on hot discovery topics of the moment. Two of those topics involved AI. If these were issue-spotting exams, we would have flunked. One debate was over whether using AI to sift through the materials the other side produced in response to discovery implicates ethics and confidentiality concerns. Our initial reaction was that it surely was none of the other side’s business how we were reviewing discovery materials. But if the AI tool is a large language model (LLM) that will learn from the information and then make that information available to the model’s owners and other users, you might have a very real confidentiality issue. You would certainly want to know if the other side was using an AI tool that would place your client’s materials in the public sphere. The answer might be to make sure the AI tool is a closed/enterprise model.
Another debate topic was whether the client’s use of AI in business activities such as product development would end up being discoverable. You can easily imagine how a corporate defendant’s AI queries might look like admissions, or might look like evidence of who knew what when. Whether such AI queries and answers would be discoverable would likely turn on whether they could be characterized as “ephemeral.” The answers to these questions were by no means obvious, but before the CLE we would not even have been aware of the questions.
Second, there was a really excellent ACI panel on how AI is being used to analyze and brief legal issues. A group of in-house and outside lawyers, along with some AI consultants, brought us up to speed on AI capabilities. The panel showed examples of case analyses generated by law firm associates and AI tools. Which was which? In truth, it seemed reasonably clear that the longer, more comprehensive write-ups were the products of the AI tool, while the pithier summaries, which evinced a keener sense of prioritization, were the handiwork of human lawyers. That is not to say that either the software or the people were the better performers. Our takeaway was that a combination of the two would end up producing the best work product. Lawyer experience cannot be fully replaced by AI (not yet, anyway), but can certainly be enhanced.
Third, we shuttled back to Philly to oversee the final session of the litigation strategy class we have taught at Penn for the past 14 years. The students keep getting younger. And they keep getting smarter and more techno-savvy. We always end the semester with exercises we call “120 Seconds.” Each student (there are typically 15-16 in the class) selects one of the cases we’ve used throughout the semester and then delivers a two-minute ‘clopening’ for both the plaintiff and defense sides. After all the analyses, case assessments, and deployment of discovery devices, the culmination is whether the students can synthesize compelling stories and persuasive themes. After each student declaims both the plaintiff and defense clopenings, the question for the other class members is which side the speaker appeared to favor. Then the speaker confesses his or her preference. Most of the students did such good jobs for both sides that it was tough to identify their preferences. One of the LLM students (that’s LLM as in Master of Laws, not large language model) told us that he had run his presentations through ChatGPT to test whether they mounted the strongest possible cases for each side. Now the use of AI in schools is hardly free from controversy. There is worry that students will ‘cheat’ by using AI to do their work for them. Some teachers use AI programs to detect the use of AI by students. But in this situation, we did not see anything wrong with using AI to stress-test the 120 Seconds presentations. Indeed, as with our sense at the end of the ACI panel, we saw AI as a collaborative tool. It could be used at the outset of an assignment, to generate a preliminary outline, or at the end, as a kind of second set of eyes. Or somewhere in between. Let’s face it – we’re not in the best position to dream up all the ways in which AI can add value to legal services.
Our students will be – and are already – much better than we are at taking AI and its advantages on board. AI is coming whether we like it or not. We do not view its arrival with dread. We are not quite ready to kneel down and welcome our software overlords. It’s a tool, just as Shepardizing was, and just as Westlaw, Lexis, and spell-check are. We’re not worried about AI replacing lawyers. But lawyers who know how to use AI might very well end up replacing lawyers who don’t.