Does Your Law Firm’s E&O Insurance Cover AI Mistakes? Don’t Be So Sure

Traditional professional liability policies weren’t written for algorithmic errors. When a New York lawyer was sanctioned for ChatGPT’s fabricated citations, the coverage gap became clear: many E&O policies exclude “software defects” and “mechanical errors,” categories some insurers now interpret to include AI.

In 2023, New York lawyer Steven Schwartz used ChatGPT to research cases for a brief in Mata v. Avianca. The AI fabricated six case citations, complete with fake judicial opinions and nonexistent legal precedents. Opposing counsel caught the fabrications. Federal Judge Kevin Castel imposed a $5,000 sanction jointly and severally on Schwartz and his firm, and ordered letters of apology to the court and the judges identified in the fake citations.

But here’s the question no one asked in that courtroom: would the firm’s Errors and Omissions insurer have covered that $5,000 sanction, plus the costs of defending against a potential malpractice claim?

Based on the typical language of many E&O policies, which exclude “mechanical errors” or “software defects or failures,” the answer might be no. Policy wording varies significantly by carrier and form: some policies remain silent on AI, others add explicit exclusions, and a few offer affirmative AI coverage through endorsements.

A further wrinkle: in many jurisdictions, fines and penalties are uninsurable or excluded under professional liability forms, which can independently bar coverage for court-imposed sanctions. Welcome to the AI coverage gap.

Errors and Omissions (E&O) insurance has long existed to protect professionals from human misjudgment: the missed clause, the wrong forecast, the bad call. But now that artificial intelligence writes memos, performs due diligence, and reviews discovery files, the question is becoming urgent: does E&O insurance still cover the mistakes when the “professional” is partly a machine?

According to a recent analysis by the American Bar Association Journal, the answer is often “no,” or at least “it depends.”

What Counts as an “AI Mistake”?

AI errors defy tidy categorization. A model can hallucinate citations, generate biased language, or misclassify data because of a skewed training set. The distinction between tool malfunction and user negligence is often blurred. Did the human rely too heavily on the model, or did the algorithm malfunction? Insurers care about that line because coverage often turns on whether the act was a “professional service” or a “mechanical failure.”

The NIST AI Risk Management Framework identifies several categories of AI risk, from training data bias to lack of transparency. However, these AI risk categories do not map neatly onto traditional insurance policy concepts, which can create coverage disputes.

Inside a Typical E&O Policy

Most professional liability policies hinge on several core phrases: “wrongful act,” “professional service,” and “damages.” None anticipated probabilistic models. A lawyer who files an erroneous motion or an accountant who miscalculates a balance sheet fits the traditional mold. But when a model trained on large datasets suggests a course of action, it’s unclear whether that suggestion counts as professional advice or mere software output.

Technology E&O and media liability hybrids offer broader coverage, but many policies exclude data processing or software defects. That gap leaves a gray zone where algorithmic misfires may fall between product liability and professional negligence. Insurance Journal noted in a November 2024 viewpoint that professional liability policies typically provide coverage for claims arising from professional services, but whether AI-assisted work qualifies remains unresolved in many policy forms, with coverage often depending on specific wording and available endorsements.

Where AI Slips Through the Cracks

Traditional E&O coverage often excludes losses arising from mechanical or electronic errors or software malfunction. Those carve-outs were designed for broken hardware, not self-learning systems. When a generative model creates a defamatory passage or outputs fabricated facts, the insurer may argue that no “professional judgment” occurred, only a data-processing event. That distinction can mean the difference between a covered loss and a costly denial.

Even intent-based exclusions raise new problems. If an algorithm “learns” to produce discriminatory results, does that qualify as intentional bias? The answer remains unsettled. The Harvard Law School Forum on Corporate Governance warns that E&O policies may not respond as expected when AI contributes to a negligent outcome, particularly where the insured cannot show documented human oversight and review of AI-assisted work.

When Cyber Insurance Enters the Picture

The boundary between E&O and cyber insurance becomes murky when AI is involved. If an AI system is hacked and produces erroneous outputs, is that a professional error or a cybersecurity incident? If an AI tool mishandles personally identifiable information and triggers a data breach notification, which policy responds?

Most cyber policies cover data breaches and privacy violations but typically exclude professional services. Conversely, E&O policies cover negligent advice but exclude cyber events. When AI sits at the intersection, providing professional services while processing data, firms can face disputes over which policy responds, with each carrier scrutinizing whether the matter sounds in cyber or professional services. Marsh notes that generative AI touches multiple lines of insurance, including Technology E&O, Cyber, Professional Liability, and Media Liability, compounding the potential for such disputes.

Consider an AI-powered contract review tool that inadvertently exposes confidential client information to other users due to a prompt-injection attack. Is this a data breach (cyber), professional negligence in tool selection (E&O), or both? The answer may depend on policy wording, the sequence of events, and which insurer has better coverage counsel. Firms should consider both policy types and understand how they interact, or more often, how they create gaps.

Emerging Insurer Responses

Insurers are beginning to adapt. Some carriers have added explicit AI exclusions, citing the unpredictable nature of generative systems. Others are piloting affirmative endorsements that carve back limited coverage for algorithmic errors. For example, Munich Re has introduced aiSure, a product that aims to backstop AI model performance in defined settings.

Brokers such as Marsh and Aon report growing interest in bespoke policies covering AI model governance, data bias, and training-set integrity. Aon’s 2025 risk capital trends analysis identified AI cyber threats as one of the key challenges organizations face, noting that the interconnected nature of AI risks requires new insurance approaches. But these policies remain niche and are priced cautiously, often with sublimits and tighter reporting obligations than traditional E&O coverage.

Specialized AI liability products are evolving rapidly, with a growing number of carriers offering standalone or endorsement coverage as of early 2025.

Coverage Scenarios: Where Disputes May Arise

Courts have yet to rule definitively on E&O coverage for AI-related errors, but coverage attorneys and insurance brokers warn that traditional policy language creates significant uncertainty when AI enters the professional service chain. The following hypothetical scenarios illustrate the kinds of disputes that could arise, based on typical E&O exclusions, insurance industry analysis, and the types of AI errors already occurring in professional settings.

Law firm scenario: An associate uses an AI brief generator that fabricates citations. The client is sanctioned, sues the firm for malpractice, and the insurer denies coverage, arguing that failure to supervise a software tool does not constitute a covered “professional service.”

Marketing agency scenario: A generative model posts promotional text containing a false medical claim that could run afoul of U.S. Food and Drug Administration rules. The insurer disputes whether the act involved “editorial control” (covered) or constituted an autonomous system failure (excluded).

Healthcare technology scenario: A predictive diagnostic tool misclassifies patient data, leading to delayed treatment and adverse outcomes. Coverage may depend on whether the insured was the software’s developer (product liability) or its user (professional liability).

Financial services scenario: An AI-powered loan underwriting system systematically denies applications from protected classes due to biased training data. The resulting discrimination lawsuit triggers both regulatory fines and civil liability, but the E&O carrier argues the loss stems from “system design” rather than professional judgment.

Copyright infringement scenario: A law firm uses an AI document generator that produces text substantially similar to copyrighted training materials. The copyright holder sues for infringement. The E&O insurer argues that intellectual property violations fall outside professional liability coverage and may be excluded under “advertising injury” or “intellectual property” carve-outs.

Mass harm scenario: An accounting firm’s AI-powered tax software contains a coding error that affects 500 client returns, triggering IRS penalties and audit costs. The insurer invokes aggregate limits, meaning one mistake exhausts the entire policy even though it harmed hundreds of clients. The firm faces uncovered losses exceeding $2 million.

Jurisdictional Differences

In the United States, bar associations and insurance industry groups are still debating how to classify algorithmic errors. Canada’s financial and professional regulators have issued advisories but no formal guidance binding on insurers.

The European Union’s AI Act entered into force on August 1, 2024. Some obligations phase in during 2025, and most high-risk system obligations apply from August 2026. The Act may push carriers to create “high-risk system” insurance categories similar to cyber coverage mandates. Its risk-based classification system will likely inform how EU-based insurers price and structure coverage.

Meanwhile, intellectual property disputes are creating another layer of insurance complexity. The New York Times lawsuit against OpenAI and Microsoft, filed in December 2023 and currently proceeding in federal court, and similar copyright claims raise questions about whether E&O policies cover infringement allegations stemming from AI training data or outputs. Most E&O policies exclude intellectual property violations or limit coverage for advertising injury, leaving professional firms that use generative AI exposed to uncovered copyright claims.

For now, courts have yet to decide a definitive test case directly linking E&O coverage to an AI-generated mistake, though several disputes are reportedly in early litigation or settlement negotiations.

Directors and Officers Liability

Beyond E&O exposure, corporate boards face potential Directors and Officers liability for inadequate AI governance. Shareholders and regulators are beginning to scrutinize whether boards exercised adequate oversight when authorizing AI deployment in high-stakes environments. As AI adoption accelerates, legal and insurance professionals anticipate that derivative lawsuits alleging failure to implement AI risk controls could emerge, particularly after public AI failures that harm company reputation or trigger regulatory action. Disclosure risk is also rising. In March 2024, the SEC brought enforcement actions against two advisers for alleged “AI-washing” in marketing materials, a reminder that misstatements about AI capabilities can create securities exposure. See coverage in The Wall Street Journal, Financial Times, and Barron’s.

What Law Firms and Businesses Should Do Now

Review policy definitions carefully, especially “professional service,” “data processing,” and “technology wrongful act.” Request specimen policy language from your broker before renewal and compare definitions across carriers. Pay particular attention to exclusions for mechanical errors, software defects, and intellectual property violations.

Disclose all AI use to your insurer in writing and document how those tools are supervised, tested, and audited. Implement written AI governance policies that specify which tasks may use AI assistance, required human review protocols, validation procedures for AI-generated outputs, and training requirements for staff using AI tools. Create an AI inventory listing every tool, its vendor, use case, and supervision protocols.
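As a rough sketch of what such an inventory could look like in practice, the Python below models one entry per tool. The field names, the tool name, and the review threshold are all hypothetical illustrations, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a firm-wide AI inventory (illustrative fields only)."""
    name: str                  # product name of the tool
    vendor: str                # who supplies and supports it
    use_case: str              # the task the firm has approved it for
    reviewer_role: str         # who must verify outputs before they leave the firm
    validation_procedure: str  # how outputs are checked (cite-check, audit, etc.)
    last_reviewed: date        # when supervision protocols were last audited

inventory = [
    AIToolRecord(
        name="DraftAssist",  # hypothetical brief-drafting tool
        vendor="Example AI Vendor",
        use_case="First-draft research memos, internal use only",
        reviewer_role="Supervising attorney",
        validation_procedure="Manual cite-check of every cited authority",
        last_reviewed=date(2025, 1, 15),
    ),
]

def stale_entries(records, as_of, max_age_days=180):
    """Flag inventory entries whose supervision review is overdue."""
    return [r for r in records if (as_of - r.last_reviewed).days > max_age_days]
```

Even a lightweight record like this gives a broker or underwriter something concrete to evaluate at renewal.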

Add contractual disclaimers in engagement letters clarifying that AI-generated outputs are reviewed by qualified professionals before delivery. Consider hybrid policies that blend Technology E&O with Cyber coverage, and maintain written records showing that human oversight remains the final step. The OECD’s AI Principles offer a useful framework for building audit trails that demonstrate responsible AI deployment.

Carefully review vendor contracts for AI tools. Examine terms of service from providers like OpenAI, Microsoft, Google, and Anthropic to understand indemnification limits and liability caps. Most AI vendors disclaim liability for outputs and limit damages to fees paid, meaning your firm bears the risk if AI generates harmful content. Consider requiring vendors to maintain adequate liability insurance and to name your firm as an additional insured where possible. Document vendor due diligence, including security audits, data-handling practices, and incident response capabilities.

Understand your notice obligations. Many E&O policies require insureds to report any “circumstance” that might give rise to a claim. If your AI tool produces a questionable output, even if no claim has been filed, failing to notify your insurer could jeopardize coverage later. Develop clear internal protocols for escalating AI incidents to risk management and insurance teams.

For firms with significant AI exposure, consider separate D&O coverage with explicit AI governance language, and evaluate whether aggregate policy limits are sufficient if a single AI error affects multiple clients simultaneously. Consult with coverage counsel before implementing new AI systems in client-facing work. Prevention is significantly cheaper than coverage litigation.

Insurance Catches Up with the Machines

Underwriters are now treating AI the way they once treated cloud computing: first with caution, then with custom endorsements, and eventually as a standard risk class. The next generation of E&O forms will likely reference “AI-assisted professional services” explicitly, with defined terms for algorithmic outputs, training data quality, and human supervision requirements.

Until then, professionals who rely on AI must assume that their coverage depends on a human signature, and that algorithms still live outside the legal definition of judgment. As one insurance expert observed, if you can’t demonstrate where the human exercised discretion, it becomes difficult to show where the policy responds.

My Take

The best protection against AI-related mistakes starts long before the insurer ever gets involved. Firms need systems and workflows that catch AI errors before they reach a client or court. Human verification must be built into every step. The more human oversight you can show, the stronger your argument that any mistake was professional judgment, not a machine malfunction.

This issue goes far beyond fake case citations. Imagine a lawyer relying on an AI-generated argument without checking its legal footing, or trusting an AI summary of medical records that overlooks a crucial nurse’s note. Those are failures of supervision, not technology. Verification is what turns AI from a liability into an asset.

The deeper problem is that insurers still don’t have enough data to model AI risk. They don’t yet know how to price it or what coverage should look like. That will change as more claims emerge. For now, every firm should talk directly with its insurer to confirm how their current use of AI fits, or doesn’t, within existing coverage.

This uncertainty makes documentation critical. Use AI platforms that log every prompt and output, and establish firm-wide safety protocols that show how AI work is reviewed. Those records may one day prove just as valuable as any insurance endorsement.
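For firms whose platforms do not log interactions natively, a minimal audit-logging wrapper is straightforward to build. The sketch below is illustrative only; the log path, record fields, and `log_ai_interaction` helper are assumptions for this example, not any vendor’s API:

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_audit_log.jsonl"  # hypothetical append-only audit file

def log_ai_interaction(user, tool, prompt, output, reviewed_by=None):
    """Append one prompt/output pair to the audit log, with a content
    hash so later edits to the record are detectable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "reviewed_by": reviewed_by,  # filled in once a human signs off
        "content_hash": hashlib.sha256(
            (prompt + output).encode("utf-8")
        ).hexdigest(),
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The `reviewed_by` field matters most: it is the recorded human sign-off that a coverage dispute would likely turn on.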

Sources

ABA Journal | Aon | AP News | Barron’s | EU AI Act | Financial Times | Harvard Law School Forum on Corporate Governance | Insurance Journal | Justia (SDNY sanctions order in Mata v. Avianca) | Marsh | Munich Re aiSure | NIST AI Risk Management Framework | OECD AI Principles | The Wall Street Journal

Disclosure: This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
