We’ve become aware that some clients are using artificial intelligence (AI) to summarize or analyze things like complaints, briefs, internal documents, or even – horror of horrors! – law firm bills. If the client performing these tasks is an in-house lawyer, such work might be protected by the attorney-client privilege or work product doctrine (note that we say “might”). But what if the AI project was undertaken by a non-lawyer?
Judge Jed Rakoff of the Southern District of New York has some bad news for you. In United States v. Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), he held that a criminal defendant’s written exchanges with a generative AI platform were not protected from government inspection by either the attorney-client privilege or the work product doctrine.

We specify the name of the judge because it matters. When Judge Rakoff speaks, the legal world listens. Not long ago, we wrote occasional blogposts with the title of “There’ll Always be Posner.” That title was inspired by The New Yorker, which often ran funny little reportorial pieces called “There’ll Always be England” documenting doings in the land of Dickens, Tolkien, and Led Zeppelin. Some places or people are so reliably interesting or eccentric that they can always be counted on to supply good copy. So it was with Judge Posner, a brilliant judge and writer. His words brimmed with confidence. He eschewed footnotes. After all, what authority was more authoritative than Posner’s own noggin? In the taxonomy of judges, it was hard to place Posner. He did not seem a textualist or originalist so much as a very sophisticated consequentialist. In a way, he was the sort of Philosopher King that earlier legal scholars (think of Learned Hand or Herbert Wechsler) decried. When Posner retired from the bench, we lost a splendid source for our blatherings. (Posner’s post-retirement career has been … interesting.)

Similarly, Judge Rakoff is lightning smart. His rulings are often bold and his prose shimmers. Our first encounter with him was at a CLE long ago. Rakoff was then a partner at the Mudge Rose law firm. The subject matter was RICO, and Rakoff was incapable of saying anything uninteresting or uninsightful. Like Posner, Rakoff exudes originality and confidence. Like Posner, he is delightful to read (and probably terrifying to appear before).
Rakoff’s analysis in the Heppner case is easy to follow and hard to fault, but that does not soften the blow to the solar plexus from his ruling that answers “a question of first impression nationwide: whether, when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user’s communications protected by attorney-client privilege or the work product doctrine?” Judge Rakoff’s answer is a resounding No. So be careful out there.
Why is Judge Rakoff’s answer No? Facts matter, and some of them here are particularly important in accounting for the result. Most important is that the criminal defendant, “without any suggestion from counsel that he do so,” used the generative AI platform Claude to prepare “reports that outlined what he might argue with respect to the facts and the law” in anticipation of government charges. The AI communications were seized from the defendant at the time of his arrest pursuant to a search warrant. The defendant had not challenged the validity of the search warrant. Thus, in parsing Judge Rakoff’s ruling, you need to ask whether things would have turned out differently if the lawyer had told the defendant to run a specific AI inquiry, and whether a motion to suppress might have gained some traction.
Judge Rakoff held that the defendant’s communications with the AI platform were not embraced by the attorney-client privilege because:
- The communications were not between the defendant and his counsel. The privilege requires a “trusting human relationship,” and that was absent here.
- The communications were not confidential. The written privacy policy for the Claude program used by the defendant disclosed that user inputs were being used to “train” the program. Thus, the defendant’s communications with Claude constituted a disclosure to a third party.
- The defendant did not communicate with Claude for the purpose of obtaining legal advice. Well, maybe the defendant thought that is what he was doing, but Claude itself disclaims that role. Cleverly, the government asked Claude whether it could give legal advice, and the AI platform responded that “I’m not a lawyer and I can’t provide formal legal advice or recommendations.” Checkmate.
Similarly, the work product doctrine did not shield the AI communications from disclosure. Those communications might have been prepared in anticipation of litigation, but they were not “prepared by or at the behest of counsel,” nor did they reflect the defense counsel’s strategy. Maybe you can conjure up some tweaks to the fact scenario that could have brought the work product doctrine within range, but how risk averse are you?
Judge Rakoff rightly concludes that generative AI “presents a new frontier in the ongoing dialogue between technology and the law.” The Heppner case adds to that ongoing dialogue, and adds to our worries about whether AI offers more peril than promise.