Courts Are Losing Patience With Fake Case Law and the Lawyers Behind It
A growing list of courtroom blunders shows how unchecked AI use is redefining legal responsibility.
Artificial intelligence has gone from buzzword to brief writer. It can draft contracts, summarize case law, and even spit out litigation strategies in seconds. For many law firms, that’s the future made tangible: faster research, lower costs, cleaner documents, limitless scale. But hidden inside that promise is a trapdoor. Hallucinated citations, phantom rulings, biased algorithms, and privacy breaches aren’t hypotheticals anymore. They’re turning up in real courtrooms with real consequences.
When AI stumbles in law, it doesn’t just embarrass a lawyer. It can mislead a judge, waste judicial time, or damage a client’s case. The line between efficiency and malpractice grows thinner by the day. What began as an experiment is now a stress test for the profession itself. How far can automation go before it erodes the judgment it was built to serve?
The First Warning Shot: Mata v. Avianca
It began in 2023 with Mata v. Avianca (S.D.N.Y.), where lawyers filed a brief citing six entirely fabricated cases generated by ChatGPT. Judge P. Kevin Castel fined the lawyers and their firm $5,000 and called the circumstance “unprecedented.” Reuters, June 22, 2023.
A Pattern Emerges
By 2024, the pattern had begun to spread. In United States v. Michael Cohen, Judge Jesse Furman flagged “AI-generated” citations in a motion to end supervised release but stopped short of sanctions. Reuters, Mar. 20, 2024. That February, a Massachusetts lawyer was fined $2,000 for filing a motion with AI-invented case law, one of the first state-level rulings to explicitly address generative AI misuse. Massachusetts Lawyers Weekly, Feb. 13, 2024; see also Boston Globe, Mar. 7, 2024. And in December 2024, the Colorado Court of Appeals rejected a pro se brief after confirming that several cited cases did not exist. Denver Post, Dec. 26, 2024.
Walmart Sanctions: The Stakes Get Real
In February 2025, a federal judge in Wyoming fined three lawyers a combined $5,000 for citing fake AI-generated cases in a personal-injury suit against Walmart. One attorney was removed from the case after admitting that an internal AI tool had “hallucinated” citations. Reuters, Feb. 25, 2025. Two weeks earlier, the same lawyers had voluntarily withdrawn a filing after the court flagged nine questionable citations. Reuters, Feb. 10, 2025.
Across the Atlantic and Beyond
In June 2025, the England & Wales High Court found that 18 of 45 citations in a filing against Qatar National Bank were fabricated by AI. Dame Victoria Sharp issued a public warning against “uncritical reliance on generative software.” The Guardian, June 6, 2025.
In Australia, a Victorian lawyer apologized after AI-generated quotes and bogus citations delayed a ruling; regulators then restricted his practice and imposed two years of supervision. ABC News/AP, Aug. 15, 2025; Regulator statement, Sept. 2, 2025.
When Sanctions Get Serious
In ByoPlanet International v. Johansson (S.D. Fla.), multiple AI-hallucinated filings prompted repeated show-cause orders and sanctions. Relativity Blog, Aug. 7, 2025 (see also Order, July 17, 2025).
Earlier that July, in Coomer v. Lindell (D. Colo.), lawyers for MyPillow’s CEO were fined $3,000 each after nearly 30 defective citations, including cases that did not exist, surfaced in their brief. Colorado Sun, July 7, 2025.
Later in July, three attorneys from the firm Butler Snow were disqualified from a case in Alabama after including AI-fabricated citations, with the matter referred to the state bar. Reuters, July 24, 2025.
By September, a federal judge in Puerto Rico issued $24,400 in fines after finding 55 bogus citations in Puerto Rico Soccer League v. Puerto Rico Football Federation, concluding they were AI-generated. Reuters, Sept. 24, 2025.
In California, a Court of Appeal fined a lawyer $10,000 after discovering that 21 of the 23 quotations from cited cases in a filing were fabricated by AI. CalMatters, Sept. 22, 2025; Opinion, Sept. 12, 2025.
Bankruptcy and Other Sanctions
Through mid-2025, bankruptcy courts in Illinois, South Carolina, and Alabama issued sanctions over AI-generated case law in filings, typically citing violations of Bankruptcy Rule 9011, the bankruptcy analog of Civil Rule 11. Bloomberg Law, July 2025.
In Gauthier v. Goodyear (E.D. Tex. 2024), a federal judge sanctioned an attorney after finding that several cited cases were nonexistent and quotes had been fabricated by an AI tool, a Rule 11 breach the court treated as a cautionary lesson for the bar. ABA Litigation News, Mar. 13, 2025; Reuters, Nov. 26, 2024.
Redefining Accountability in the Age of AI
In late September 2025, the Alberta Court of Appeal ruled that a lawyer who had banned AI tools at their firm was still responsible for an outside contractor’s submission containing citations fabricated by AI. The court held that ultimate accountability for filings rests with counsel of record, regardless of internal AI policies; disclaimers and bans do not shield lawyers from responsibility for AI misuse. Canadian Lawyer, Sept. 27, 2025.
Now the Judges Are Under the Microscope
On Oct. 6, 2025, Sen. Chuck Grassley sent letters to federal judges asking whether withdrawn or revised rulings had been influenced by AI-generated text, an inquiry that widened the spotlight from lawyers to the bench itself. Reuters, Oct. 6, 2025.
The Lesson: Trust, but Verify
Every incident shares a common thread: overreliance on unverified machine output. AI can speed research and drafting, but it still hallucinates. Courts from London to Melbourne to Miami are clear: failing to check the work is not an excuse. Technology can amplify brilliance or negligence with equal efficiency.
As one Florida order put it, in substance: there is no substitute for reading the cited cases yourself.
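None of these orders prescribes a tool, but part of that verification can be automated. Below is a minimal sketch in Python of a pre-filing citation check against the Free Law Project’s CourtListener citation-lookup API, a service reportedly launched in 2024 with hallucinated citations in mind. The endpoint URL and the response fields used here (citation, status) are assumptions drawn from the project’s public documentation, not from any of the filings above; confirm both before relying on this in practice.

```python
import requests

# Sketch only: endpoint and response fields follow the Free Law Project's
# published Citation Lookup API docs as of this writing; verify both
# before relying on this for real work.
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def check_citations(brief_text: str, api_token: str | None = None) -> list[dict]:
    """POST brief text and return each detected citation with a found/not-found flag."""
    headers = {"Authorization": f"Token {api_token}"} if api_token else {}
    resp = requests.post(LOOKUP_URL, headers=headers, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    # Each item carries a per-citation "status": 200 means the cite resolved
    # to a real opinion; 404 means it matched nothing in the database.
    return [
        {"citation": item.get("citation"), "found": item.get("status") == 200}
        for item in resp.json()
    ]

if __name__ == "__main__":
    # "Varghese v. China Southern Airlines" is one of the fabricated cases
    # from Mata v. Avianca; a lookup like this should fail to resolve it.
    sample = "See Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)."
    for result in check_citations(sample):
        flag = "resolved" if result["found"] else "NOT FOUND -- read and verify manually"
        print(f'{result["citation"]}: {flag}')
```

Even a clean run is only a first filter: a lookup can confirm that a citation resolves to a real opinion, not that the quoted language or the holding is accurate. The Florida order’s advice still governs: read the cases.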
My Take
The lesson for lawyers from these AI blunders is obvious, so instead I’ll attempt to put a positive spin on these early stumbles:
As embarrassing as these early cases were, they forced the legal industry to confront AI’s limits almost overnight. In a strange way, the lawyers behind those filings were the first real testers of generative AI in law. Their mistakes, though public, accelerated the profession’s understanding of what responsible AI use actually looks like. Every sanction since has been part of the same learning curve: fast, uncomfortable, but necessary.
But that grace period is over. Going forward, courts and governing bodies are unlikely to show patience for the same mistakes; what began as a harsh lesson in technological growing pains is now, plainly, professional negligence.
Sources
Reuters | Law360 | The Guardian | ABC News | Denver Post | Massachusetts Lawyers Weekly | Boston Globe | Relativity Blog | Colorado Sun | CalMatters | Bloomberg Law | ABA Litigation News | Canadian Lawyer
Author’s Note
This story is part of an ongoing project examining how artificial intelligence is reshaping legal practice, from research tools and automation software to the ethics of machine-assisted judgment. As AI adoption accelerates, we aim to track the real-world consequences, regulatory shifts, and professional standards emerging in response.
Disclosure
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.