New York Court System Issues First AI-Use Policy for Judges and Court Staff
The New York State Unified Court System has established interim guidelines for artificial intelligence use, including required training and strict confidentiality protections for all court personnel.
The New York State Unified Court System has released its first policy governing the use of artificial intelligence by judges and court employees. The interim policy, effective October 2025, establishes mandatory training requirements, strict confidentiality safeguards, and a list of pre-approved AI tools for use within the nation’s largest state court system.
The policy applies to all judges and nonjudicial employees across the UCS and covers any AI-related work performed on court-owned devices or in connection with court duties. According to the policy document, the guidelines are designed to “promote the responsible and ethical use of AI technology” while ensuring “fairness, accountability, and security,” particularly when using generative AI tools.
Mandatory Training and Approved Tools
Under the new policy, no court personnel may use generative AI products on any UCS device or for court-related work until completing an initial training course. The court system will also require ongoing education in the use of AI technology.
The UCS has approved a select list of AI tools for employee use, including Microsoft 365 Copilot Chat, Microsoft Azure AI Services, and the free version of OpenAI’s ChatGPT. Notably, paid subscriptions to public AI platforms are prohibited. The policy distinguishes between “private model” AI tools hosted within the court system’s secure environment and “public model” tools like the free version of ChatGPT, which carry greater restrictions due to data security concerns.
GitHub Copilot is approved for developers and data scientists within the organization, while Trados Studio translation software is available for the Office of Language Access.
Strict Confidentiality Requirements
The policy’s most stringent provisions address confidentiality and data security. Court personnel are flatly prohibited from entering any confidential, private, or privileged information into public AI platforms. This includes docket numbers, party names, addresses, dates of birth, and other personally identifiable information.
The policy goes further than many similar guidelines by also prohibiting users from uploading into public AI systems any document that has been filed, or submitted for filing, in any court, even if the document is classified as public. The rationale is that court records could be sealed in the future or may contain inadequately redacted sensitive information, and entering such information into public AI platforms “makes the exposure of the information permanent.”
Addressing AI’s Limitations
The UCS policy provides detailed education about generative AI’s known problems, including what the technology industry calls “hallucinations”: instances where AI systems fabricate facts or legal citations. The document explains that generative AI tools “do not operate like traditional search engines” and are “designed to generate content, not to locate information or provide authoritative answers to factual inquiries.”
The policy explicitly warns that “general-purpose AI programs (whether operating on a public model or on a private model) are not suitable for legal writing and legal research, as they may produce incorrect or fabricated citations and analysis.” Even when using AI-enhanced features built into established legal research platforms, the policy requires that “any content generated by AI should be independently verified for accuracy.”
Court personnel are also required to review all AI-generated content for bias, stereotypes, or prejudice, acknowledging that AI training datasets “include material that reflects cultural, economic, and social biases and expressions of prejudice against protected classes of people.”
Ethical Obligations Remain Paramount
The policy emphasizes that existing ethical rules for judges and court employees remain fully applicable when using AI tools. It cites the Rules Governing Judicial Conduct (22 NYCRR Part 100) and Rules Governing Conduct of Nonjudicial Court Employees (22 NYCRR Part 50), noting that judges “may not delegate their judicial decision-making responsibilities to any other person or entity.”
The policy states that “while AI tools can be used to assist with a judge’s work, judges and court staff must ensure that such tools are never actually engaged in the decision-making tasks a judge is ethically obligated to perform.” Questions about potential ethical concerns should be directed to the Advisory Committee on Judicial Ethics.
Permitted Uses
Despite the restrictions, the policy identifies several appropriate applications for AI within the court system. These include drafting policy memos, letters, speeches, and job descriptions; simplifying complex language for public communications; and summarizing lengthy documents or large datasets for administrative reports.
The policy notes that AI’s ability to “scan and process vast amounts of data in just a few minutes, or even seconds” makes summarization “among its most valuable uses.” However, if AI-generated summaries are to be submitted to other court personnel or released to the public, “the contents of the AI-generated product must be checked against the original material to ensure accuracy.”
A Living Document
The UCS characterizes this as an “interim policy” that is “intended to evolve with technological advancements, operational necessities, and future iterations of relevant legislation, regulation, and public policy.” The appendix listing approved AI products includes a note that “new AI tools are released daily, and AI components are regularly added to existing products,” with the list expected to “change and grow over time.”
The policy also preserves supervisory discretion, noting that approval of an AI product “does not necessarily mean that, for a particular task, the use of that product is suitable or appropriate” and does not prevent judges or supervisors from prohibiting use of approved tools for specific tasks.
The New York policy follows similar AI guidance issued by other state courts and bar associations nationwide, as the legal profession grapples with integrating powerful new technologies while maintaining ethical standards and protecting confidential information.
My Take
This is more than a policy; it’s a cultural turning point. One of the largest court systems in the world is acknowledging that AI has a place inside the justice system. At this stage of the technology curve, pretending it can be banned is delusional. People will use it. The question is whether institutions will shape that use or let it happen in the shadows.
By formalizing guardrails, New York’s courts are legitimizing AI for every other player in the ecosystem: firms, regulators, and even vendors. It sends a simple message: AI isn’t a threat to integrity if handled with discipline. That’s a profound signal from the judiciary.
This also redefines what “access to justice” could mean. If used carefully, AI can help courts draft clearer orders, translate materials faster, and communicate in plain language. It can make the system more human, not less. But it also raises new questions about automation bias and the risk of unequal access to the tools themselves.
What stands out most is how governance is being used here as a catalyst, not a constraint. The best policies don’t freeze progress; they make experimentation safe. That’s what this does.
What the policy does not address is how its restrictions will be monitored and enforced.
Looking ahead, it’s only a matter of time before we see Chief AI Officers embedded in court systems, advising on model selection, bias audits, and safe expansion. The use of AI in law isn’t just inevitable; it’s now institutional.