Today’s guest post is another tech-related discussion from Reed Smith’s Jamie Lanphear. Given the increasing ubiquity of artificial intelligence (“AI”) in legal practice, the notion of AI prompts and output becoming yet another front in the never-ending ediscovery wars is concerning. Here are Jamie’s thoughts on the latest pertinent caselaw in this area. As always our guest posters deserve 100% of the credit (and any blame) for their efforts.
**********
The Blog has recently covered both the discoverability of AI prompts (here) and the Heppner decision out of the Southern District of New York (here). Since those posts, courts have continued to develop the law in this area — and some of the developments are encouraging. This post picks up where those earlier discussions left off, surveying the latest case law on privilege, work product, and waiver as they apply to AI prompts and outputs, and offering practical guidance for lawyers and in-house teams navigating an area where the rules are still being written.
Heppner: A Quick Recap and What It Leaves Open
As the Blog discussed here, in United States v. Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), Judge Rakoff held that a criminal defendant’s communications with a generative AI platform were protected by neither attorney-client privilege nor the work product doctrine. The holding rested on several narrow facts: the defendant used a public AI tool on his own, without direction from counsel; the platform’s terms of service destroyed any expectation of confidentiality; and the materials were seized pursuant to a search warrant in a criminal case rather than sought through civil discovery. The court did not apply or analyze work product protection under Rule 26(b)(3) of the Federal Rules of Civil Procedure. The decision generated significant concern, but its narrow facts left a number of important questions unanswered.
Open Questions: Privilege in the Corporate AI Context
Heppner is the principal authority we have on privilege in this area, but it does not tell us much about how privilege will apply in the corporate context. The case involved a party acting on his own, not for the purpose of seeking legal advice from counsel. The in-house world presents meaningfully different scenarios.
Start with the threshold problem. Attorney-client privilege requires a communication between an attorney and a client, and when a corporate employee communicates with an AI chatbot, the communication is between the employee and a machine — not between the employee and counsel.
However, there are more than plausible arguments that some of these interactions should remain protected. If an attorney directs an employee to use an in-house AI tool as part of the process of providing legal advice, the tool looks more like an agent than a third party — functionally similar to a paralegal or other intermediary acting at counsel’s direction. No court has endorsed that theory yet, but it is not an unreasonable application of existing doctrine to novel facts. Until one does, however, attorneys should think long and hard before directing clients (rather than their own offices) to use AI tools in this way.
The same logic applies in more routine scenarios. If an employee communicates with a colleague to gather information at counsel’s request for the purpose of providing legal advice to the company, those communications are privileged. What if the employee consults an AI tool instead of a colleague? The substance of the interaction is the same — the employee is gathering information counsel needs to provide legal advice. The only difference is the medium.
A related question arises with enterprise AI tools. If a company deploys an internal AI system and counsel interacts with that system to obtain information necessary to provide legal advice, there is a reasonable argument that the tool functions as the client’s agent. That scenario is not meaningfully different from an attorney communicating with a client’s accountant or other third-party agent to obtain information needed to render legal advice. Enterprise tools are deployed by the company, access-controlled, and integrated into the legal workflow. They are not consumer-grade chatbots with permissive terms of service. That factual distinction was critical to the outcome in Heppner.
None of these theories has been tested. But courts are likely to find ways to adapt privilege doctrine to accommodate the realities of AI use. The law may lag science, but it does not ignore technology. The analogies are there, and courts have adapted privilege to new contexts before. The question is not whether privilege will evolve, but how, and with what nuances.
Morgan: A Counter to Heppner
A recent decision out of the District of Colorado offers a notably different perspective on whether AI interactions are protected — at least under the work product doctrine. In Morgan v. V2X, Inc., No. 25-cv-01991-SKC-MDB, 2026 WL 864223 (D. Colo. Mar. 30, 2026), an employment discrimination case, a pro se plaintiff used AI in connection with litigation preparation. The defendant moved to compel disclosure of the AI tool the plaintiff was using and sought to amend the protective order to restrict AI use with confidential information. The discoverability of the plaintiff’s actual prompts and outputs was not directly at issue, but in working through the questions before it, the court engaged in an analysis that pushes back on several of the premises underlying Heppner—and the result is some useful support for the proposition that AI-assisted litigation materials are protected work product.
First, the court held that Rule 26(b)(3) broadly protects materials prepared in anticipation of litigation by a party, not merely by counsel. In civil cases, the rule’s plain language extends work product protection to parties and their representatives—a point that distinguished Morgan from Heppner, which was a criminal case governed by a different procedural rule.
Second, the court rejected the argument that using a third-party platform destroys confidentiality or waives protection. Drawing on Fourth Amendment case law, the court made a point that is both memorable and important: does anyone with a Gmail account forfeit all rights to confidentiality because Gmail has access to their emails? The court emphasized that in today’s world, nearly all electronic interactions pass through third-party systems, and courts are increasingly pushing back on the notion that using modern technology automatically destroys privilege.
Third, the court emphasized that waiver of work product protection (in contrast to attorney-client privilege) requires disclosure to an adversary or under circumstances that substantially increase the likelihood that an adversary would obtain the information. Disclosure to an AI platform provider does not meet this standard—you are not disclosing information to your adversary, and there is no meaningful indication that the information will end up in an adversary’s hands simply because it was entered into an AI system.
An important caveat: because the pro se litigant in Morgan was simultaneously the party and the advocate, Morgan does not resolve what happens when a non-attorney party — say, a corporate employee — uses AI independently, without attorney direction. In that scenario, the Heppner “gap” between party and counsel would exist even in the civil context. But the textual foundation of Rule 26(b)(3), as the Morgan court discussed it, supports an argument that such use would still be protected.
The bottom line is that while Morgan did not directly rule on the discoverability of prompts and outputs, it supports the position that AI-assisted litigation materials can fall within work product — and that the mere fact that information is processed or stored by an AI platform does not automatically result in waiver. This is a practical and sensible position that hopefully other courts will follow.
Tremblay: Prompts and Outputs as Opinion Work Product
As the Blog has noted (here), Tremblay v. OpenAI Inc., No. 23-cv-03223-AMO, 2024 WL 3748003 (N.D. Cal. Aug. 8, 2024), established that when counsel crafts AI prompts, both the prompts and the resulting outputs constitute opinion work product — the highest tier of protection — because the prompts reflect counsel’s mental impressions and opinions about how to interrogate the AI tool. Opinion work product is virtually undiscoverable absent a showing that counsel’s mental impressions are at issue and the need for the material is compelling.
This means that, at least until the case law further develops, having counsel create or direct the creation of AI prompts is not just a best practice. It has doctrinal consequences for the level of protection those materials receive. As a general principle, the closer an attorney is to the creation of the prompts, the stronger the argument that the materials are protected work product.
Waiver: Where Protection Breaks Down
Work product protection can be lost, and several recent decisions illustrate how that can happen in the context of AI use.
In Concord Music Group, Inc. v. Anthropic PBC, the court addressed waiver across multiple discovery orders over the course of 2025. See, e.g., 2025 WL 1482734 (N.D. Cal. May 23, 2025); 2025 WL 3677935 (N.D. Cal. Dec. 18, 2025). Consistent with Tremblay, the court held that the prompts and related settings used in the publishers’ investigations were opinion work product. The publishers voluntarily produced the prompts and outputs they relied on in their complaint and filings — roughly 5,000 prompt-output pairs. The defendant then sought broadly all of the remaining prompts and outputs, including those that did not support the publishers’ claims. The court denied that request as overbroad, holding that waiver must be closely tailored to the needs of the opposing party and limited to what is necessary to rectify any unfair advantage gained.
Later in the litigation, the publishers conducted an investigation into the effectiveness of the defendant’s AI guardrails — essentially testing whether the guardrails could be circumvented using various prompts. When they indicated they intended to put a witness on the stand to testify about that investigation — including the prompts the witness used and what he learned using them — the court found at-issue waiver under the sword-and-shield doctrine. In deciding where to draw the line, the court held that protection had been waived for the prompts that had been provided to the witness to conduct the investigation—including those provided to him by counsel—and the associated outputs, but those that remained with the attorney (and therefore were not placed at issue) remained protected. So here we see waiver of opinion work product, but a narrow tailoring of that waiver by the court.
A similar result played out in T.B. v. Big Brothers Big Sisters of N.Y.C., No. 452864/2021, 2025 WL 2443502 (N.Y. Sup. Ct. Aug. 21, 2025), where plaintiff’s counsel disclosed a portion of a ChatGPT transcript during a deposition. The court held that this placed the transcript at issue and waived work product protection, and ordered production of the full AI exchange.
While these waiver principles are not new (or unique to AI), what AI potentially changes is the frequency with which the issue might arise. Tools that make it easy to generate analytical work product—evaluating complaint data, synthesizing the medical literature, identifying patterns in adverse event reports—also make it easy to rely on that work product in ways that place it at issue. The discipline and forethought required are the same as they have always been. AI may just create more opportunities to get it wrong.
Where This Is Headed
As Judge Rakoff observed, AI “presents a new frontier in the ongoing dialogue between technology and the law.” That much is obvious. What is increasingly clear from the case law, however, is that courts are finding ways to apply existing doctrines to new technology. Work product protection extends to AI interactions. Attorney-crafted prompts are opinion work product and therefore receive the highest level of protection. And the mere use of a third-party AI platform does not automatically waive that protection.
The attorney-client privilege question is less settled, but the analogies are there. And there is no reason to think that courts will not adapt privilege doctrine to accommodate AI—just as they have adapted it to accommodate other evolutions in the provision of legal services. The waiver landscape requires attention, but the principles exist and the framework for applying them is well established. What remains is for courts to work through the application in this particular context.