When Legal AI Gets It Wrong, Who Pays the Price?
From ChatGPT hallucinations to vendor failures, lawyers are discovering that outsourcing to artificial intelligence platforms doesn’t shield them from professional liability.
In June 2023, a federal judge in Manhattan imposed a $5,000 sanction on two lawyers for submitting legal briefs containing fake case citations generated by ChatGPT. The attorneys had relied on the AI tool to research precedents, never verifying that cases like Varghese v. China Southern Airlines and Shaboon v. Egyptair actually existed. They didn’t; ChatGPT had hallucinated them.
The case, Mata v. Avianca, Inc., became a watershed moment for the legal profession. Judge P. Kevin Castel made clear that technological innovation doesn’t absolve lawyers of their fundamental duties. “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” he wrote. But the lawyers had “abandoned their responsibilities” by failing to verify the AI’s output.
Outsourcing legal work to AI platforms and external vendors has become standard practice in modern firms. It saves time, reduces overhead, and can scale research or document review overnight. Yet one principle remains unchanged: when the machine or vendor makes a costly error, the lawyer’s name is still on the filing. Courts and regulators have made clear that delegating work does not delegate responsibility.
Beyond Mata: A pattern emerges nationwide
Mata was not an isolated incident. By December 2023, Michael Cohen, Donald Trump’s former attorney, admitted he had unwittingly passed AI-generated fake citations from Google Bard to his lawyer, David Schwartz, who then submitted them to a federal judge. Cohen claimed he thought Bard was “a super-charged search engine” rather than a generative AI tool prone to hallucinations.
In January 2024, the U.S. Court of Appeals for the Second Circuit referred attorney Jae Lee to a grievance panel for citing a nonexistent state court decision generated by ChatGPT. The appellate court found her conduct fell “well below the basic obligations of counsel.” In February 2025, a Wyoming federal judge discovered fabricated cases in filings that appeared to originate from ChatGPT, ordering lawyers to show cause why they shouldn’t face sanctions.
The pattern is clear: courts across jurisdictions are encountering AI-generated fabrications with increasing frequency. Judges now routinely cite Mata as precedent when sanctioning attorneys who submit hallucinated citations, establishing a body of case law that holds lawyers strictly accountable for verifying AI outputs.
Ethical rules written for humans now govern algorithms
The foundation for accountability lies in the ABA Model Rules of Professional Conduct, particularly Rule 5.3, which governs supervision of nonlawyers. The rule requires lawyers to make reasonable efforts to ensure that anyone assisting them, including AI systems, acts in accordance with the lawyer’s professional duties.
Rule 1.1 on competence also applies. In July 2024, the ABA issued Formal Opinion 512, its first comprehensive ethics guidance on generative AI. The 15-page opinion stresses that lawyers must understand the capabilities, limitations, and risks of AI tools they use. “Lawyers need not become GAI experts,” the opinion states, but they must maintain “a reasonable understanding” of the technology and “remain vigilant about the tools’ benefits and risks.”
Competence demands more than understanding; it requires verification. According to Opinion 512, lawyers cannot rely uncritically on AI-generated content. The appropriate level of review depends on the task and the tool’s track record, but in all cases, “the lawyer is fully responsible for the work on behalf of the client.” A June 2024 Stanford study found that leading legal AI systems hallucinated between 17% and 33% of the time when conducting legal research, underscoring why verification remains essential.
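What does verification look like in practice? As a purely illustrative sketch, not anything prescribed by Opinion 512 or offered by a particular vendor, a firm could run a first-pass script that flags every case citation in an AI-assisted draft that has not yet been confirmed in a licensed research database, leaving the actual confirmation to a lawyer. The confirmed-citation list and the rough pattern matching below are hypothetical stand-ins for whatever tooling and databases a firm actually uses.

```python
import re

# Illustrative first-pass check only: flag AI-suggested case citations that have
# not yet been confirmed in a licensed research database, so a lawyer reviews
# them before anything is filed. The confirmed set is toy, stand-in data, not a
# real integration with any research service.

CONFIRMED_CITATIONS = {
    "Mata v. Avianca",  # shortened toy form; assume a human already confirmed it
}

CASE_NAME = re.compile(r"(?:[A-Z][\w.'&-]+ )+v\. (?:[A-Z][\w.'&-]+ ?)+")

def extract_case_names(draft: str) -> list[str]:
    """Rough 'X v. Y' extraction; production tooling would be far stricter."""
    return [match.group().strip(" .,") for match in CASE_NAME.finditer(draft)]

def flag_for_review(draft: str) -> list[str]:
    """Return case names not in the confirmed set -- these require human verification."""
    return [name for name in extract_case_names(draft) if name not in CONFIRMED_CITATIONS]

if __name__ == "__main__":
    draft = "Plaintiff relies on Varghese v. China Southern Airlines and on Mata v. Avianca."
    for name in flag_for_review(draft):
        print(f"UNVERIFIED - confirm in a licensed research database before filing: {name}")
```

The point is not the code but the workflow it enforces: nothing the model produces reaches a filing until a human has checked it against an authoritative source.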
Confidentiality under Rule 1.6 adds another layer of complexity. When client data is processed by external systems, firms must ensure vendors cannot retain or train on that data without informed consent. Multiple bar associations, including the New York City Bar in its August 2024 Formal Opinion 2024-5, urge firms to vet privacy policies and obtain explicit client consent before using generative tools. The ABA’s guidance emphasizes that “general, boiler-plate provisions” in engagement letters are insufficient. For analysis of the requirements, see coverage in the National Law Review and explanatory guidance from the Illinois Commission on Professionalism at 2Civility.
International divergence: How other jurisdictions handle AI liability
While U.S. courts have taken a strict accountability approach, regulatory frameworks vary significantly across jurisdictions. The European Union’s AI Act, which entered into force in August 2024, establishes the world’s first comprehensive AI regulatory framework. Under the Act, AI systems used in the administration of justice are classified as “high-risk” and subject to strict transparency, human oversight, and risk management requirements. The Act’s extraterritorial reach means any lawyer or firm serving EU clients may fall within its scope, mirroring the GDPR’s global impact.
In the United Kingdom, the Solicitors Regulation Authority (SRA) has taken a principles-based approach rather than prescriptive rules. The Law Society emphasizes that existing professional duties apply regardless of technology used, but the SRA’s regulatory silence has drawn criticism. An SRA survey found that three-quarters of the largest solicitors’ firms were using AI, nearly twice the number from three years earlier. In May 2025, the SRA authorized the first AI-driven law firm, Garfield.Law, which uses large language models to provide debt recovery services, but made clear that named solicitors remain “ultimately accountable” for all AI outputs and anything that goes wrong.
Canada and Australia have taken intermediate positions, requiring lawyers to maintain competence in the technologies they deploy while leaving specific standards to develop through case law and bar guidance. The divergent approaches create challenges for multinational firms and raise questions about regulatory arbitrage: whether firms might structure operations to take advantage of more permissive jurisdictions.
Insurance markets scramble to keep pace
Professional liability insurers are grappling with how to price and structure coverage for AI-related claims. Some carriers have begun adding AI-specific exclusions to standard malpractice policies, while others require detailed disclosures about AI tool usage. Recent reports indicate that AI exclusions are “creeping into insurance” across directors and officers (D&O) and errors and omissions (E&O) policies, with some insurers like Berkley introducing so-called “Absolute” AI exclusions. Premium increases for firms using AI without robust governance frameworks have been reported, though comprehensive industry data remains limited.
The coverage gaps are significant. Traditional malpractice insurance covers negligent errors by lawyers, but policies may not clearly address liability for AI vendor failures, data breaches through AI platforms, or claims arising from algorithmic bias. Cyber liability policies typically cover data breaches but may exclude losses from professional services rendered using compromised systems, a gap many firms have yet to address systematically. The concept of “silent AI” coverage, similar to the earlier “silent cyber” problem, has emerged as insurers confront AI-driven risks that are neither explicitly included nor excluded in their policies.
Some insurers now offer AI-specific endorsements or standalone policies, but uptake remains limited. Companies like Munich Re and Armilla, in partnership with Lloyd’s, have introduced dedicated AI insurance products, though obtaining coverage typically requires extensive due diligence on the AI systems in use. The challenge is actuarial: with limited claims history for AI-related legal malpractice, insurers struggle to price the risk accurately. As claims accumulate, coverage terms and pricing will likely shift dramatically.
When opposing counsel uses AI: The detection and discovery problem
A growing concern among litigators is how to detect and respond when opposing counsel uses AI-generated work without disclosure. Unlike fabricated citations, which become obvious when researched, subtle AI-generated arguments or analyses may go unnoticed. This creates asymmetric information problems and potential discovery obligations.
Some practitioners are developing protocols for detecting AI-generated content, looking for telltale signs such as unusual phrasing patterns, overconfident assertions without proper hedging, or analysis that lacks the nuance typical of human legal reasoning. But detection remains unreliable, and no court has yet established clear rules about whether parties must disclose AI use in adversarial proceedings.
The discovery implications are complex. If AI use becomes a material fact in malpractice or sanctions proceedings, opposing parties may seek discovery of prompts, training data, version histories, and vendor agreements. Courts have not yet addressed whether such materials are protected by work product doctrine or whether privacy concerns about other clients’ data limit discoverability.
State bars draw firmer boundaries
Bar associations across the United States are establishing clearer guidelines for AI use. The New York City Bar’s Formal Opinion 2024-5, released in August 2024, requires lawyers to maintain supervision, obtain client consent for information-sharing tools, and ensure conflicts and confidentiality checks when AI is deployed. California, Florida, Pennsylvania, New Jersey, Kentucky, Michigan, and Missouri have issued similar guidance, with variations reflecting each state’s own rules.
The common thread: lawyers must understand what the technology does, verify its output, protect client confidences, and communicate transparently about its use. The ABA’s Formal Opinion 512 also addresses billing practices. Under Rule 1.5, lawyers who bill hourly must charge only for actual time expended. If AI completes in 15 minutes what would have taken an hour, the lawyer may only bill for 15 minutes plus review time. For flat or contingent fees, the same principle applies: efficiency gains should benefit clients, not create windfall profits.
Client perspectives: Demands for disclosure and lower costs
Corporate legal departments increasingly demand transparency about AI use by outside counsel. Some have incorporated AI disclosure requirements into outside counsel guidelines, specifying which tools are acceptable, what consent processes must be followed, and how efficiency gains should be reflected in billing. The Association of Corporate Counsel has signaled that AI adoption should lead to measurably lower legal costs, not just faster delivery at the same price.
Consumer clients present different challenges. Many lack the sophistication to meaningfully consent to AI use or to evaluate its implications. Bar ethics opinions emphasize that consent must be truly informed, requiring lawyers to explain the risks in plain language, a higher bar than standard engagement-letter boilerplate.
Some clients explicitly prohibit AI use due to confidentiality concerns or philosophical objections. Others enthusiastically embrace it, expecting cost savings and faster turnarounds. Managing these divergent expectations while maintaining ethical compliance requires sophisticated client communication and intake processes.
Building a governance framework for AI-enabled practice
Legal ethics experts and bar associations recommend that firms implement structured approaches to AI adoption. Key components include:
Vendor due diligence. Require confidentiality clauses prohibiting training on client data. Specify audit rights, uptime guarantees, logging capabilities, and prompt error reporting. Review terms of service and privacy policies in detail, understanding where data is stored and processed.
Mandatory human review. A licensed lawyer must review and approve every AI-generated output reaching a client or tribunal. Test AI tools on sample work before full deployment to establish baseline accuracy rates and identify common failure modes.
Informed client consent. Disclose which tasks will be AI-assisted, what oversight is in place, and what risks exist. Document consent in engagement agreements beyond generic boilerplate language. The ABA Formal Opinion 512 and NYC Bar opinion detail these expectations.
Staff training programs. Provide scenario-based training on verification procedures, citation checking, and handling sensitive data. Train staff to recognize AI limitations and common failure modes like hallucinations, bias amplification, and context misunderstanding.
Insurance review. Confirm that malpractice and cyber liability policies address AI-assisted work and vendor incidents. Document AI use for insurers and understand coverage limitations and exclusions.
Incident response plans. Define procedures for correcting AI errors, notifying clients, and remediating harm. Document what went wrong, how it was discovered, corrective actions taken, and who was responsible. Report serious incidents to insurers promptly.
Data governance protocols. Establish clear rules about what data can be input into AI systems, how long it’s retained, and who has access. Monitor for unauthorized data exfiltration or use in model training. (A minimal illustrative pre-submission check is sketched below.)
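To make that last item concrete, here is a minimal sketch, not drawn from any bar opinion or vendor product, of the kind of pre-submission gate a firm might place in front of an external AI tool: text containing obvious markers of sensitive or privileged material is blocked until it has been reviewed or redacted. The patterns and the placeholder submit_to_ai_tool function are hypothetical stand-ins; a real firm would rely on its own data-loss-prevention tooling and policies.

```python
import re

# Minimal illustration of a data-governance gate: block obviously sensitive text
# from leaving the firm for an external AI service until a human reviews or
# redacts it. The patterns below are hypothetical placeholders, not a complete
# (or sufficient) screening policy.

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # US Social Security numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),          # email addresses
    "privilege_marker": re.compile(r"attorney[- ]client privilege", re.IGNORECASE),
    "client_matter_no": re.compile(r"\bMatter\s+No\.\s*\d{4,}\b", re.IGNORECASE),  # assumed internal numbering format
}

def screen_for_ai_submission(text: str) -> list[str]:
    """Return the names of any sensitive patterns found; an empty list means cleared."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def submit_to_ai_tool(text: str) -> None:
    """Placeholder for the real vendor call; reached only if screening passes."""
    print("Submitted to external AI tool (placeholder).")

if __name__ == "__main__":
    draft = "Client SSN 123-45-6789, Matter No. 20471 -- please summarize for the brief."
    hits = screen_for_ai_submission(draft)
    if hits:
        print(f"Blocked pending human review; sensitive content detected: {', '.join(hits)}")
    else:
        submit_to_ai_tool(draft)
```

Even a crude gate like this enforces the human checkpoint that the ethics opinions keep returning to.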
The economics driving adoption despite risks
Market forces are accelerating AI adoption even as liability concerns mount. Large corporate clients increasingly demand AI-enabled efficiency, threatening to move work to providers who can deliver faster, cheaper services. Alternative Legal Service Providers (ALSPs), the staffing companies and legal process outsourcers that handle routine legal work, are aggressively deploying AI to undercut traditional firm pricing. Some operate under different regulatory frameworks than law firms, adding to the competitive pressure.
The economics are compelling: by some industry estimates, AI can reduce document review time by 80%, cut legal research costs by 60%, and dramatically accelerate contract analysis. For firms operating on thin margins or competing for price-sensitive work, the pressure to adopt is intense. But rushing implementation without robust governance infrastructure has proven costly, as the sanctions cases demonstrate.
Academic researchers and industry groups are working to establish reliability benchmarks for legal AI. A December 2024 framework published on arXiv proposes combining specialized expert systems with adaptive refinement techniques, using retrieval-augmented generation, knowledge graphs, and reinforcement learning from human feedback to reduce hallucination rates. These technical advances may eventually reduce error rates, but they don’t eliminate the fundamental requirement for human judgment.
When the AI itself practices law
An emerging question is whether AI tools that provide legal advice directly to consumers constitute unauthorized practice of law (UPL). Several state bars are investigating AI platforms that offer legal document preparation, contract analysis, or case evaluation without lawyer supervision. If such tools are deemed to be practicing law, their operation violates UPL statutes in most jurisdictions.
This intersects with lawyer liability in complex ways. If a lawyer uses an AI tool that’s engaged in UPL, does that implicate the lawyer in the unauthorized practice? If the tool provides advice that proves incorrect, can injured parties sue both the AI provider and any lawyers who facilitated its use?
Courts have not yet resolved these questions, but the implications are significant for the business models of legal tech companies and the lawyers who use their products. Some jurisdictions are considering regulatory sandboxes, controlled environments where AI legal tools can operate experimentally under supervision, to explore these issues without immediately triggering UPL prohibitions. Individual judges are also setting their own ground rules: Judge Brantley Starr of the Northern District of Texas issued a standing order in May 2023 requiring attorneys to certify either that no AI was used in their filings or that all AI-generated content was verified by a human against traditional legal databases.
A shifting landscape requires ongoing vigilance
The integration of AI into legal practice is accelerating, but the accountability framework remains grounded in traditional ethical principles. Delegation may be efficient, but it is not absolution. Court decisions from Mata forward, combined with evolving bar guidance and international regulatory frameworks, establish that lawyers, not tools or vendors, carry ultimate responsibility for legal work product.
Firms that treat AI vendors like any other supervised assistant, build robust review protocols, obtain genuine informed consent, invest in training and governance, and maintain adequate insurance coverage will be best positioned to withstand scrutiny from courts, clients, and regulators. As Opinion 512 concludes, lawyers must be “vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected.”
The technology will continue to evolve rapidly, but the core principle will not: when AI fails in legal work, it’s the lawyer who answers for it.
My Take
The day lawyers are no longer held responsible for work submitted under their own names is the day machines control the legal system. That is not the direction we should go. I’m pro-AI and support its use in many facets of law and other areas of life, but at the end of the day, the buck stops with the lawyer.
In short, if AI screws up, the lawyer whose name is on the filing, or who is counsel of record, must remain responsible. That accountability is the foundation of trust in the legal system, and it cannot be delegated to a machine.
What do you think? Leave a comment below.
Sources
Above the Law | American Bar Association | ABA Formal Opinion 512 | arXiv | Artificial Intelligence Act (EU) | Bloomberg Law | Hunton Andrews Kurth | K&L Gates | Kennedys Law | Law Society (UK) | Legal Futures (UK) | National Law Review | NYC Bar Association | NPR | Solicitors Regulation Authority (UK) | Stanford Human-Centered Artificial Intelligence | 2Civility | Wikipedia
Disclosure: This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, ethics opinions, and sources cited are publicly available through court filings, bar association publications, regulatory bodies, and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.