Oregon Becomes Testing Ground for AI Ethics Rules as Fabricated Case Law Spreads
A motion arrives in a Portland courtroom so polished it gleams. The prose is crisp, the citations confident, yet half of them do not exist. Oregon’s judges, seeing national headlines about fabricated case law, decide to act before the problem reaches their docket. In 2025 the state becomes an unlikely test lab for a legal system learning to read machine-made filings without losing faith in human authorship.
Why Oregon Moved Quickly
The spark came from New York. When lawyers in Mata v. Avianca, Inc. were sanctioned in June 2023 for submitting a brief written partly with ChatGPT that cited six imaginary precedents, the episode exposed a blind spot in the profession’s trust model. The assumption that a lawyer’s signature guaranteed authenticity suddenly looked naïve. Judge P. Kevin Castel imposed a $5,000 fine and ordered the attorneys to notify the judges who had been falsely identified as authors of the fabricated opinions.
Within months, judges across the United States began issuing standing orders requiring lawyers to disclose any use of generative AI. The American Bar Association’s Formal Opinion 512, released in July 2024, echoed that theme, reminding lawyers that competence now includes understanding the “capabilities and limitations” of AI systems. Oregon’s judiciary and bar took that cue seriously. In February 2025, the Oregon State Bar issued Formal Opinion No. 2025-205, becoming one of the first jurisdictions to codify expectations for AI-assisted filings.
What the Guidance Requires
Oregon’s opinion does not ban AI. It insists on supervision. Lawyers may use generative models, but they must review and verify every citation, quotation, and factual statement the tool produces before filing. The opinion warns that failure to verify could breach duties under Oregon Rule of Professional Conduct 3.3, which prohibits knowingly making false statements of fact or law to a tribunal, and Rule 4.1, which requires truthfulness in statements to others. Both duties run throughout a proceeding, and lawyers must correct any false information they have provided, even inadvertently.
Confidentiality also receives unusual emphasis. The bar cautions that open-model tools, meaning those that retain user prompts for training, may expose client information without consent. Attorneys are expected to evaluate whether a platform’s terms of service allow data reuse, and if so, to secure informed consent or switch to closed systems. As the opinion notes, “the duty of confidentiality applies regardless of the sophistication of the technology employed.”
The guidance further requires transparency with clients. If an AI tool materially affects the representation, for example by changing cost structures or introducing new risks, the client must be told. Even billing is covered: time spent learning how to use an AI platform cannot be charged to a client absent prior agreement, and any AI-related fee must be disclosed in advance.
The Challenge of Digital Authenticity
Legal authenticity used to be simple: a lawyer signed a document, the sources checked out, and responsibility stopped there. Generative AI complicates that foundation. When a model drafts a motion that looks and sounds legitimate but rests on statistical prediction rather than reasoning, the signature alone cannot carry the weight of authorship.
The problem has surfaced nationwide, and courts in multiple jurisdictions have imposed sanctions over AI-generated hallucinations. In Kohls v. Ellison, a January 2025 Minnesota case, Judge Laura Provinzino excluded the expert declaration of Stanford University professor Jeff Hancock after discovering that it contained AI-generated fabricated citations. Hancock admitted using GPT-4o to help draft the declaration; the tool inserted citations to two non-existent academic articles. The judge noted the irony that an expert on AI misinformation fell victim to the very technology he critiques, ruling that his credibility had been “shattered.”
In Colorado, attorney Zachariah C. Crabill was suspended for a year and a day, with all but 90 days stayed pending successful completion of a two-year probation, after filing a motion containing fabricated citations generated by ChatGPT and then lying to a judge about their origin. These incidents underscore a single point: machines may assist, but lawyers must still vouch for every line.
Disclosure and Supervision
Oregon’s opinion leans heavily on Rules 5.1 and 5.3, which impose supervisory duties on lawyers over the work of other lawyers and of nonlawyer assistants. Under Rule 5.3, a lawyer with direct supervisory authority over nonlawyer assistance, a category the opinion reads to include AI tools, must make reasonable efforts to ensure that the assistance is compatible with the lawyer’s professional obligations. Firms are urged to create internal policies specifying which models are approved, who may use them, and how output is checked. The duty extends to training staff in prompt-crafting and verification to avoid inadvertent breaches of confidentiality or accuracy.
In practice, that means law offices must treat AI like a junior associate who never sleeps but sometimes lies. Review everything. Keep audit trails. Document when and how a tool was used. If the product of an AI system goes into a filing, the lawyer must be able to explain, if challenged, what steps were taken to ensure its reliability. As one Oregon ethics commentator noted, “Delegating to a model is not the same as delegating to a paralegal—you cannot interview the algorithm after the fact.”
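What that documentation might look like is not spelled out in the opinion, and it need not be elaborate. The sketch below is a minimal, hypothetical Python example of a per-filing audit record; the field names, file format, and workflow are illustrative assumptions, not anything Oregon prescribes.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    """One audit-trail entry for an AI-assisted filing (hypothetical schema)."""
    matter_id: str            # internal matter or case number
    filing_name: str          # document the AI output went into
    tool: str                 # which approved model or platform was used
    purpose: str              # e.g., "first draft", "summarize deposition"
    reviewed_by: str          # lawyer who verified the output
    citations_verified: bool  # every citation checked against primary sources
    client_consented: bool    # consent obtained if confidential data was input
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one JSON line per record so the trail is easy to produce if challenged."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: document a motion drafted with an approved tool and verified by counsel.
append_record(AIUsageRecord(
    matter_id="2025-0142",
    filing_name="Motion to Compel",
    tool="Approved closed-system research platform",
    purpose="First-draft statement of facts",
    reviewed_by="J. Associate",
    citations_verified=True,
    client_consented=True,
))
```

The point of keeping something like this, whether in a spreadsheet or a document-management system, is that the record exists before the filing goes out, not after a judge asks.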
The Court’s Interest: Integrity and Efficiency
Judges have practical reasons to care. Every hallucinated citation consumes judicial resources. Oregon’s guidance is a pre-emptive strike: clarify the rules before the filings multiply. The Ropes & Gray AI Court Order Tracker documents hundreds of court orders nationwide addressing AI use, with requirements ranging from simple disclosure to outright prohibition in some courtrooms.
In the federal system, the Judicial Conference has debated uniform disclosure policies for AI-assisted submissions, but state courts remain the front line. Oregon’s approach—a blend of ethics opinion and judicial practice memo—may become a template for others balancing innovation with evidentiary integrity. The state’s December 2024 Attorney General guidance on AI applications reinforces this framework by reminding businesses that Oregon’s existing consumer protection, data privacy, and anti-discrimination laws fully apply to AI systems.
Technological Competence Redefined
Oregon treats technological competence as part of Rule 1.1’s duty of professional competence. The state joins more than forty other jurisdictions that now explicitly fold technology into the definition of legal skill. Lawyers are not expected to code, but they are expected to know what the code does.
That means understanding how large language models generate text, when they hallucinate, and how to verify output. Research from Stanford RegLab and the Institute for Human-Centered AI demonstrates that legal hallucinations are pervasive and disturbing: hallucination rates for state-of-the-art language models range from 69% to 88% on specific legal queries. Even specialized legal AI tools from major providers hallucinate more than 17% of the time. The Oregon State Bar Bulletin summed it up: “Lawyers must supervise all work product, including that produced by AI.” The emphasis is on discernment, not fear—learning to interrogate a model’s reasoning rather than accept its eloquence.
Practical Realities for Practitioners
In daily practice, Oregon lawyers are responding with measured adoption. Several firms have instituted disclosure templates for pleadings prepared with AI assistance, requiring partners to certify human review. Public defenders and small-firm practitioners, often without dedicated tech staff, rely on vendor-vetted tools such as Lexis+ AI or Thomson Reuters CoCounsel to minimize risk.
The cost factor is significant. Legal-specific AI tools range from $80 to $180 per user per month for research platforms, with specialized contract review tools costing more. For solo practitioners and small firms operating on tight margins, these expenses must be weighed against efficiency gains. Many are starting with general-purpose tools like ChatGPT Plus ($20/month) for non-confidential work, then graduating to closed enterprise systems as budgets allow. The Oregon State Bar’s 2025 guidance emphasizes that lawyers who charge clients for AI tool costs must clearly disclose these expenses in advance and cannot bill for time spent learning the technology absent prior agreement.
Bar associations are hosting continuing-education sessions on AI verification. Insurance carriers are beginning to draft new coverage clauses addressing AI-related risks, with some policies excluding claims arising from “unverified machine-generated content” from professional liability coverage. Law schools are adjusting curricula: students are now trained to audit model outputs for accuracy, echoing the bar’s message that oversight is the new literacy.
Beyond Compliance: The Human Element
The deeper question is philosophical. If a model can mimic legal reasoning convincingly, what remains uniquely human about advocacy? Oregon’s answer is accountability. Machines cannot owe duties, express remorse, or sign under penalty of perjury. Lawyers can. The new rules preserve that moral distinction in procedural form.
Oregon’s approach treats authenticity not as a technical feature but as an ethical one. The human must still stand behind the words, even if the machine wrote the first draft. As Judge Provinzino emphasized in the Minnesota deepfakes case, “At a minimum, expert testimony is supposed to be reliable. The Court should be able to trust the ‘indicia of truthfulness’ that declarations made under penalty of perjury carry, but that trust was broken here.” The idea is less about distrust of technology than defense of agency and professional responsibility.
National Ripples
Oregon’s guidance arrives as multiple states grapple with similar questions. California, Florida, and New York have all issued ethics opinions on AI use in legal practice. The State Bar of California’s practical guidance and Florida Bar’s ethics guidelines echo Oregon’s themes of verification, confidentiality, and competence. The pattern is clear: jurisdictions are adapting existing professional responsibility frameworks rather than creating new regulatory structures from scratch.
Looking Ahead
Oregon’s courts may soon adopt formal filing rules requiring a disclosure line such as “Prepared with AI assistance and reviewed by counsel.” Several jurisdictions are already experimenting with metadata tags in e-filing systems that flag AI-generated text for clerk review. Federal judges in districts across the country have issued standing orders addressing AI disclosure, with some requiring affirmative certification that counsel has verified all citations and legal authority.
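No such schema exists yet in Oregon’s e-filing system, so any example is necessarily speculative. The snippet below is a hypothetical Python sketch of the kind of metadata tag described above, with invented field names, pairing an AI-assistance flag with counsel’s certification of review.

```python
import json

# Hypothetical metadata block an e-filing system might attach to a submission.
# These field names are invented for illustration; no court schema currently
# defines them.
filing_metadata = {
    "document": "Defendant's Motion for Summary Judgment",
    "ai_assistance": {
        "used": True,
        "disclosure": "Prepared with AI assistance and reviewed by counsel.",
        "counsel_certification": {
            "citations_verified": True,
            "certified_by": "Attorney of record",
            "date": "2025-06-01",
        },
    },
}

print(json.dumps(filing_metadata, indent=2))
```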
Insurance markets and malpractice carriers are watching closely. As AI-related citation errors and ethical violations accumulate in case law, carriers are reassessing risk profiles and coverage terms. Some insurers are beginning to require firms to document their AI usage policies and verification procedures as a condition of coverage. Meanwhile, clients are increasingly pushing back against traditional billable-hour models, demanding that law firms pass on efficiency gains from AI tools rather than charging for work that took minutes instead of hours.
For now, Oregon’s experiment stands as a practical model: adopt the machine’s speed, keep the human conscience. The filing must still answer to the person who signs it.
Key Takeaways for Legal Professionals
- Verify everything: AI output must be independently checked against primary sources
- Protect confidentiality: Understand data handling policies before inputting client information
- Document your process: Maintain records of which AI tools were used and how
- Bill ethically: Disclose AI costs and don’t charge for efficiency gains as if they were human hours
- Train your team: Everyone using AI needs to understand its limitations and your firm’s policies
- Stay informed: Ethics rules are evolving rapidly across jurisdictions
Sources
- American Bar Association: Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024)
- American Bar Association News: “ABA Issues First Ethics Guidance on a Lawyer’s Use of AI Tools” (July 29, 2024)
- Colorado Politics: “Disciplinary judge approves lawyer’s suspension for using ChatGPT to generate fake cases” (2023)
- Dahl, M., Magesh, V., Suzgun, M., and Ho, D.E.: “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models,” Journal of Legal Analysis 16, no. 1 (2024): 64-93
- Florida Bar News: “Board of Governors Adopts Ethics Guidelines for Generative AI Use” (February 2, 2024)
- Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514 (D. Minn. Jan. 10, 2025)
- Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C.D., and Ho, D.E.: “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” Journal of Empirical Legal Studies (2025)
- Mata v. Avianca, Inc., Opinion and Order on Sanctions (S.D.N.Y. June 22, 2023)
- NWSidebar: “Oregon Issues Ethics Opinion on AI in Law Practice” (March 24, 2025)
- Oregon Department of Justice: “What you should know about how Oregon’s laws may affect your company’s use of Artificial Intelligence” (December 24, 2024)
- Oregon State Bar Bulletin (Various Issues 2024-2025)
- Oregon State Bar: Formal Opinion No. 2025-205, Artificial Intelligence Tools (February 2025)
- Oregon State Bar: Oregon Rules of Professional Conduct (2025)
- People v. Zachariah C. Crabill, case number 23PDJ067, Colorado Supreme Court Office of the Presiding Disciplinary Judge (November 27, 2023)
- Ropes & Gray: AI Court Order Tracker (2025)
- Stanford RegLab: “Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive” (January 11, 2024)
- State Bar of California: Practical Guidance for the Use of Generative Artificial Intelligence (2024)
- U.S. Courts: Judicial Conference
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: The Two-Tier AI Justice System: Premium Tools for Lawyers, Free Chatbots for Everyone Else
