Artificial intelligence isn’t going anywhere. Experts use it. Opposing counsel use it. Clients use it, and want their lawyers to use it too. It is becoming a standard tool for legal research, drafting, and case strategy. But as a couple of our recent posts (here and here) have pointed out, AI is far from perfect. And modest fines and court-imposed sanctions are proving wholly insufficient to combat the rising frequency of AI hallucinations in litigation filings. We’ve suggested some harsher sanctions (if anyone is listening). But the trend also made us curious to see whether anything else was being done. It turns out that courts across the country are moving beyond reactive sanctions imposed after problematic filings toward a proactive enforcement posture: some courts now impose disclosure obligations and verification standards as threshold filing requirements.
The standing orders vary in scope and consequence, and we have not undertaken a 50-state or all-jurisdiction review. So this should not be considered comprehensive, but rather an FYI and a heads-up to check your local rules and judge-specific standing orders.
For example, in the Southern District of New York, Judge Broderick’s Civil Rules provide that any party must disclose its use of GenAI whenever a GenAI tool is used to prepare any filing with the Court. Rule 4(J); see Bonsignore v. N.Y. Dep’t. Tax’n & Fin., No. 25-CV-6324, 2025 WL 3468041 (S.D.N.Y. Dec. 3, 2025); see also Brian NG v. Amguard Ins. Co., 25 CIV. 806 (VSB) (GS), 2025 WL 3754555, at *7 (S.D.N.Y. Dec. 29, 2025) (warning a party against submitting fake citations and noting that Judge Broderick’s certification rules apply to all parties, regardless of pro se status). More importantly, the party must certify that it has independently reviewed and verified the accuracy of any portion of the filing generated by GenAI, and that the filing complies with Rule 11 obligations, or else face penalties. In the Southern District of Texas, Judge Olvera’s Local Rules similarly require all parties at the outset of a case to file a certificate attesting
either that no portion of any filing will be drafted by [GenAI] or that any language drafted by [GenAI] will be checked for accuracy, using print reporters or traditional legal databases, by a person.
Rule 8(C)(1) (emphasis added).
It’s a little scary that courts need to have “check your work or else” orders. But as Judge Matthew J. Kacsmaryk (N.D. Tex.) explains in his Mandatory Certification Regarding Generative Artificial Intelligence:
[GenAI] platforms are incredibly powerful and have many uses in the law . . . But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, [GenAI] is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.
In other words, the use of GenAI in court filings raises real ethical and practical concerns. And judges should not accept “sorry, ChatGPT threw that cite in there” any more than they accept “I’m just here covering for another attorney.”
Here are additional orders we found requiring certification that a human being has reviewed AI generated material: IAS Rules (Judge Peter Weinmann, NY Sup. Ct., Erie Cnty.) (requiring certification that the filing has been verified and reviewed by a human being); Use of Generative AI (Magistrate Judge Phillip Caraballo, M.D. Pa.) (requiring parties to certify that they have “checked the accuracy of any portion of the document generated by AI, including all citations and legal authority”); Standing Order, Rule 5(c) (Judge Blumenfeld, C.D. Cal.) (requiring certification that “filer has reviewed the source material and verified the [AI] generated content is accurate”); Standing Order, Rule E(5) (Judge Hwang, C.D. Cal.) (same); Standing Order, Rule 10 (Magistrate Judge Susan van Keulen, N.D. Cal.) (imputing any GenAI related hallucinations and corresponding sanctions to the signature of the attorney or party on the filing, because the signature indicates that counsel has personally confirmed the accuracy of the content generated by GenAI tools); Individual Rules and Practices, Rule 2(E) (Judge Cronan, S.D.N.Y.) (requiring certification that “litigant personally reviewed the filing for accuracy of cited legal authorities and factual assertions and . . . describing in detail the steps taken to verify the accuracy of all legal authorities and factual assertions generated by the AI tool” or else face sanctions).
One of the most comprehensive standing orders we found on AI use comes from the Northern District of California’s Magistrate Judge Kang. His Standing Order issues guidelines in three broad categories: filings with the Court, evidence, and confidentiality. Rule VII(C). For filings with the Court, any party must identify the GenAI tools used for drafting the text in its title, caption, a table preceding the body text, or by a separate contemporaneous notice, and must also maintain records of these portions of text should the Court request them. Any GenAI-generated evidentiary material must be identified in discovery by a notice and declaration verifying authenticity, and may not contain “uncorroboratable” or “fictitious” statements of fact or evidence. Regarding confidentiality, counsel must comply with all protective order obligations while interacting with such tools and must maintain records of all prompts or inquiries submitted to these tools to establish compliance with the standing order. Further, Judge Kang repeatedly admonishes that parties should only use AI tools “with competent training, knowledge, and understanding of the limitations and risks of such automated tools.” While the order stops short of requiring certification of human proofreading, it makes clear that the court expects as much in accordance with all ethical and professional standards.
While the above are all judge-specific orders, the Northern District of Texas has adopted a Local Civil Rule mandating that a brief must disclose GenAI use on “the first page under the heading ‘Use of Generative Artificial Intelligence’ [and] [i]f the presiding judge so directs, the party filing the brief must disclose the specific parts prepared using [GenAI].” Rule 7.2(f)(1).
Finally, some courts outright prohibit the use of artificial intelligence in court filings. The Western District of North Carolina, for instance, based on a “concern regarding the reliability and accuracy of filings” using AI, has issued an Order requiring that all court filings be accompanied by a certification stipulating that no artificial intelligence was used in research for the preparation of the document, except for AI embedded in traditional legal research tools, and verifying that every statement and citation has been checked by an attorney or paralegal. Judge Newman’s Standing Order, in the Southern District of Ohio, prohibits the use of any AI in the preparation of any filing to the court, with exceptions for information gathered from legal search engines, internet search engines, or Microsoft suite products. Rule VI. A violation of the AI ban may result in “sanctions including, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.” See also Standing Order (Judge Boyko, N.D. Ohio) (same). In the Northern District of Illinois, Judge Coleman’s Case Procedures prohibit the use of AI to draft memoranda or as authority to support a party’s motion.
Paralleling the growth of AI and the uncertainty it generates, the proliferation of judge-specific standing orders rather than uniform district- or circuit-wide rules has created a patchwork that is increasingly difficult to navigate. But one thing is clear: courts are not waiting to deal with AI problems after the fact. They are shifting the burden to counsel at the outset. Disclose it, verify it, stand behind it.
And that shift has consequences. These disclosure, certification, and outright ban orders are not just compliance hurdles; they are litigation tools. They provide a roadmap to challenge opposing counsel’s filings, probe the reliability of their submissions, and tee up Rule 11 motions. So yes, check your local rules. But also read your opponent’s certifications carefully. Because in a court where AI disclosure is mandatory, what is said about how a filing was created may matter just as much as what the filing says.
Much thanks to Dechert law clerk, Nimisha Noronha, for digging in on this research.