Should AI Receive Attorney-Client Privilege or Is Every Prompt a Potential Waiver?

Lawyers are racing to define what confidentiality means when legal strategy flows through a machine that never forgets.

Once, attorney-client privilege was a simple equation: two humans, one secret, and an expectation of trust. Today, lawyers are typing into ChatGPT, Harvey.ai, or Lexis+ AI. When those systems process a client’s trade secrets or litigation strategies, where do those words go, and who, or what, owns their confidentiality?

Recent disciplinary rulings, bar advisories, and academic papers suggest that the doctrine built to protect human counsel may not extend neatly to machine intermediaries. The legal world is confronting a paradox: to use AI responsibly, attorneys must decide whether to treat it as a silent assistant or an untrustworthy eavesdropper.

Privilege Under Pressure: What Happens When the Third Party Is a Machine

Attorney-client privilege hinges on confidentiality. Sharing privileged information with a third party typically destroys protection. When that third party is a cloud platform, the line blurs. According to Reuters and ABA Journal reports, lawyers have already faced warnings for uploading confidential material to public models, prompting bar associations to issue cloud-focused guidance.

In July 2024, the American Bar Association released Formal Opinion 512, clarifying that AI tools cannot receive or maintain privilege on behalf of lawyers and that the duty of confidentiality remains with the human operator. Meanwhile, researchers caution that models may retain fragments of user data, heightening the risk that privileged material could resurface in later outputs (SSRN).

Ethics Boards and Bar Associations Sound the Alarm

By mid-2025, many jurisdictions had weighed in. The D.C. Bar’s Opinion 388 warns that uploading privileged communications into generative models can amount to waiver. The Florida Bar’s Opinion 24-1 goes further, requiring client consent for certain AI uses. In the United Kingdom, the Solicitors Regulation Authority emphasizes that once data leaves local control, privilege may evaporate.

Legal academia echoes these cautions. A Columbia Law Review note argues that privilege cannot extend to nonhuman entities incapable of fiduciary duty. The NYU Law Review explores whether AI could someday function as a “privileged instrumentality” akin to a translator or paralegal, but only with strict audit and retention controls.

Real-World Fallout: When Confidential Data Leaks

Confidentiality breaches tied to AI are no longer theoretical. In March 2023, Samsung engineers accidentally uploaded proprietary code to ChatGPT, prompting the company to restrict employee use of generative AI. The incident quickly became a global headline, with reports noting similar corporate bans across industries. Later that year, a midsize U.S. firm reported to Law360 that a staff query exposed fragments of a client’s sealed deposition.

Security leaders note that privilege cannot survive if client material is logged by external providers. Some firms have turned to private “AI sandboxes” that never transmit data to public APIs, arguing isolation best preserves confidentiality (Financial Times).
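
For illustration, a private sandbox can be as simple as a client that refuses to talk to anything outside the firm’s network. The sketch below assumes a locally hosted, OpenAI-compatible endpoint (for example, one served by vLLM or llama.cpp); the URL, model name, and error handling are illustrative placeholders, not a reference implementation.

```python
# A minimal sketch of a sandboxed query path, assuming an OpenAI-compatible
# model served inside the firm's own network (e.g., via vLLM or llama.cpp).
# The endpoint URL and model name are illustrative placeholders.
import requests

LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # never a public API

def sandboxed_query(prompt: str) -> str:
    """Send a prompt to the in-house model only; fail closed rather than
    falling back to any external service if the endpoint is unreachable."""
    resp = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": "in-house-llm",  # illustrative model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()  # no silent retry against a public endpoint
    return resp.json()["choices"][0]["message"]["content"]
```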

Work Product and Prompt Protection: The First Line of Privilege Defense

No court has squarely held that an AI can receive or hold attorney-client privilege, but some are drawing boundaries around work product when AI assists legal work. In one early decision summarized by practitioners, carefully constructed AI prompts reflecting attorney strategy were treated as opinion work product under Rule 26(b)(3) (Fazmic Law).

In Concord Music Group v. Anthropic PBC, the court denied attempts to compel all prompts and outputs, emphasizing that discovery must be narrowly tailored to avoid unnecessary exposure of attorney reasoning (summary: Fazmic Law). Practitioner commentary notes that courts are beginning to address whether prompts and AI interactions are discoverable at all (McGuireWoods).

Sanctions and Disclosure: The Cost of Getting It Wrong

Courts have not been forgiving when AI compromises accuracy or confidentiality. In Utah, a state appellate court sanctioned a lawyer for filing a brief with nonexistent AI-generated authorities (The Guardian). Earlier in 2025, a federal judge fined three lawyers after fake AI-generated cases appeared in a personal injury filing, remarking that the tool does not excuse the user (Reuters). An appellate sanction discussed by LawNext introduced a new wrinkle: whether attorneys have an affirmative duty to detect opponents’ AI-generated errors.

AI Vendors as Functional Third Parties

Cloud and AI vendors can be treated like translators, e-discovery providers, or other functional third parties when bound by confidentiality and supervised appropriately. But if vendors store prompts, train on them, or reuse data, privilege risks rise. The ABA’s guidance on secure communications underscores duties to use reasonable safeguards, sometimes including encryption or heightened measures (ABA Formal Opinion 477R; summary YourABA). Virtual-practice guidance likewise stresses supervision of vendors and confidentiality controls (ABA Formal Opinion 498; overview Holland & Knight). For AI-specific workflows, the ABA’s Jurimetrics analysis advises that vendor access, retention, and training practices can trigger waiver if not contractually constrained (ABA Jurimetrics 2024).

Cross-Border Privilege and Data Jurisdiction

When a model is hosted abroad and processes U.S. client material, which country’s privilege rules apply? If a vendor is subpoenaed in another jurisdiction, could privileged data be disclosed? Privacy and security practitioners suggest structuring AI “red teaming” — controlled adversarial testing of systems for vulnerabilities — and related assessments under counsel direction to preserve privilege, including maintaining confidentiality for nonlawyer experts (IAPP). Cross-border programs should integrate privilege strategy alongside data-transfer compliance, recognizing varied expectations by region (Norton Rose Fulbright).

AI-Generated Internal Memoranda

Firms increasingly use models to summarize client communications or draft internal issue memos. If an AI-generated summary introduces errors or combines sources, is the result still privileged work product? Scholars advise that privilege remains tied to the involvement of counsel and the purpose of obtaining legal advice; machine-authored text without legal direction may not qualify (SSRN: AI-ready Attorneys; SSRN: Legal Ethics of Generative AI).

Practical takeaway: document attorney intent and review, and avoid using public models to synthesize privileged facts.

Privilege Logs and AI Discovery

AI can embed legal reasoning in drafts, chat transcripts, and model metadata, complicating privilege logs. E-discovery teams are experimenting with AI-assisted privilege filters and validation workflows to prevent inadvertent production. Recent reporting also highlights courts testing how prompts and outputs fit into discovery, with judges cabining what must be produced to prevent unnecessary exposure of protected reasoning (Reuters).
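
To make the idea concrete, here is a minimal sketch of a keyword-based privilege screen; the indicator patterns and function names are hypothetical, and real e-discovery workflows layer ML classifiers and attorney validation on top of anything this simple.

```python
# A minimal sketch of an automated privilege screen: documents matching any
# indicator are held for attorney review instead of entering the production
# set. The indicator patterns are illustrative, not exhaustive.
import re

PRIVILEGE_INDICATORS = [
    r"attorney[- ]client\s+privilege",
    r"privileged\s+(?:and|&)\s+confidential",
    r"work\s+product",
    r"legal\s+advice",
]
PATTERN = re.compile("|".join(PRIVILEGE_INDICATORS), re.IGNORECASE)

def screen_for_production(documents: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split document IDs into (produce, hold_for_review) buckets."""
    produce, hold = [], []
    for doc_id, text in documents.items():
        (hold if PATTERN.search(text) else produce).append(doc_id)
    return produce, hold
```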

The Problem of Machine Memory

Even when firms delete prompts, large language models and their surrounding systems can retain traces of client data that reappear in later outputs. This residual “machine memory” threatens privilege through unintended disclosure. Privacy researchers warn that neural networks can memorize training data and reproduce it verbatim under certain prompts, creating risks similar to data breaches (IAPP).

Academic work reinforces the concern. A recent study titled “What Can We Learn from Data Leakage and Unlearning for Large Language Models” found that AI systems may continue to recall fragments of sensitive information even after deletion efforts (arXiv). Legal scholars at the NYU Journal of Intellectual Property & Entertainment Law further argue that because such retention is inherent to model design, lawyers must treat it as a form of ongoing custody risk, subject to Rule 1.6 obligations.
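
One practical way to probe this risk is a “canary” test, loosely modeled on the leakage and unlearning research cited above: plant a unique marker string in a test session, then later check whether the system reproduces it. In the sketch below, query_model is a placeholder for whatever vendor interface is under evaluation; a positive result is a red flag, not a legal conclusion, and a negative one is not proof of compliance.

```python
# A minimal sketch of a "canary" retention probe: embed a unique marker in
# a test session, then check in a later session whether any probe elicits
# the marker verbatim, which would suggest the system retained session data.
# query_model stands in for the vendor interface under evaluation.
import secrets

def make_canary() -> str:
    """A unique marker string that will not occur naturally in outputs."""
    return f"CANARY-{secrets.token_hex(8)}"

def seed_canary(query_model, canary: str) -> None:
    """Send the canary inside an ordinary-looking prompt during a test session."""
    query_model(f"Please summarize this client note: internal ref {canary}.")

def retention_detected(query_model, canary: str, probes: list[str]) -> bool:
    """Later, in a fresh session, see if any probe reproduces the canary."""
    return any(canary in query_model(p) for p in probes)
```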

The takeaway is clear: without strict contractual guarantees, private hosting, and technical “non-training” commitments, model memory can transform a routine query into an inadvertent waiver. Privilege in the AI era depends not only on who sees the data, but also on what the machine remembers.

Ethics Opinions: Confidentiality and Client Consent

Beyond the courtroom, ethics boards are setting front-line rules. Oregon’s 2025 advisory opinion cautions that lawyers must understand how AI vendors store, transmit, and reuse data or risk violating Rule 1.6 (OSB 2025-205: Oregon State Bar). The ABA’s Jurimetrics analysis similarly warns that inputting client data into AI tools can risk privilege waiver if the vendor can retrieve it or uses it for training (ABA Jurimetrics 2024).

Privileging AI for Non-Lawyers: Access to Justice and the Next Frontier

If the legal profession is still debating whether its own AI use deserves privilege, the question grows even more complex for everyone else: self-represented litigants drafting pleadings, couples using AI to begin divorce proceedings, small-business owners asking ChatGPT for basic corporate forms. The list could go on. Should any of those communications be protected?

A. The Case for Extending Privilege or Confidentiality

1. Access to Justice

Millions of people can’t afford lawyers. For them, AI tools function as the only form of “legal help” within reach. Denying any confidentiality to those interactions effectively penalizes the poor for using the technology that levels the playing field. The better policy argument, some say, is that limited privilege encourages responsible use: it’s better for citizens to consult AI than to file alone and uninformed.

The analogy to AI-powered therapy strengthens this point; digital companions are already being used for mental health support (NPR 2025). If machine empathy can promote social well-being, machine-assisted counsel might do the same for justice.

2. Functional Equivalence

AI systems occupy a gray zone somewhere between software and service provider. Translators, paralegals, and e-discovery vendors are not lawyers, yet their participation doesn’t destroy privilege because they operate under confidentiality obligations. With similar technical and contractual controls, AI could be treated as a “privileged instrumentality.” A recent note from Ifrah Law asks whether “AI conversations could ever be privileged,” observing that users increasingly treat AI tools as confidants and that the law may need to adapt accordingly (Ifrah Law 2025).

3. The Competence Argument

GPT-4 and similar models have passed the Uniform Bar Exam—a symbolic but important benchmark. Researchers Daniel Martin Katz and Michael Bommarito documented that GPT-4 achieved scores above human passing thresholds in “GPT Takes the Bar Exam” (SSRN). If a system can meet the minimum competence standard required of licensed attorneys, it strengthens the argument that communications with it deserve at least partial protection—though critics note that subsequent replication studies have questioned the original scoring margins (SpringerLink 2024).

4. Reasonable Expectation of Privacy

Users often believe their AI chats are private. One of the traditional Wigmore factors for privilege is that communications must originate in confidence. When someone uploads sensitive facts to a model they perceive as a secure assistant, there’s an intuitive expectation of secrecy, even if the law hasn’t caught up. Some scholars propose that vendors offering “law-safe modes” or private sandboxes could create a contractual expectation of confidentiality strong enough to approximate privilege.

B. The Obstacles to Extending Privilege to the Unrepresented

Privilege remains the exception, not the rule. Courts are reluctant to create new protected categories, and AI vendors lack fiduciary duties comparable to lawyers. Without ethical accountability or licensing, a chatbot cannot promise loyalty to a client. Moreover, data-retention and training practices make disclosure risks inherent. If user inputs are logged or used for model improvement, any privilege is instantly compromised.

There are also fairness concerns: extending privilege to AI users might let individuals cloak documents or communications from discovery without the corresponding ethical guardrails imposed on lawyers. The challenge, then, is balancing accessibility with accountability.

C. The Middle Ground

A pragmatic solution may lie in a “privilege-lite” regime. Legislatures or regulators could establish statutory confidentiality for AI-assisted legal self-help: narrowly tailored, time-limited, and contingent on vendors adopting strict data-isolation protocols. Alternatively, courts could recognize qualified protection, similar to the work-product doctrine, shielding AI prompts prepared in contemplation of litigation. Providers could also be contractually bound by non-training, encrypted retention, and audit rights that mirror attorney supervision obligations.

In short, the question is not whether machines deserve privilege but whether society can afford a two-tier system: one where the privileged get lawyers and the unrepresented get exposure.

The Future of Privilege: From Human Duty to Machine Design

The center of gravity is shifting. Privilege will continue to rest on human duties of confidentiality and supervision, but it may soon depend just as much on choices in system architecture: where data lives, whether a model trains on it, how long logs persist, and who can compel a vendor to disclose them. In an AI-augmented legal world, trust is no longer just an ethical posture; it is an engineering problem.

For lawyers, that means aligning professional rules with technical controls: private deployments, non-training commitments, auditable retention limits, counsel-directed workflows, and disciplined privilege logging. For self-represented individuals and small businesses, it means choosing platforms designed to emulate those same safeguards, such as encrypted local sessions, deletion guarantees, and transparent data handling.
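
As a toy illustration of aligning rules with controls, a firm might encode its baseline as an explicit policy check before any deployment is approved. The fields and threshold below are assumptions for the sketch; no bar authority prescribes such a schema.

```python
# A hypothetical deployment-policy check reflecting the controls listed
# above; the field names and 30-day window are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class DeploymentPolicy:
    private_hosting: bool          # model runs inside the firm's perimeter
    vendor_trains_on_data: bool    # contractual non-training commitment
    log_retention_days: int        # auditable retention limit

def meets_baseline(p: DeploymentPolicy, max_retention_days: int = 30) -> bool:
    """True only if hosting is private, the vendor does not train on client
    data, and logs expire within the audit window."""
    return (p.private_hosting
            and not p.vendor_trains_on_data
            and p.log_retention_days <= max_retention_days)
```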

The next decade may bring a bifurcated privilege regime: one rooted in human ethics, another in machine design. Courts and policymakers will have to decide whether confidentiality follows the actor or the architecture. If only lawyers’ machines qualify as trusted, privilege will remain a professional monopoly. But if society extends limited protection to secure, well-regulated AI systems accessible to the public, privilege could evolve from an elite doctrine into a broader digital right of privacy in legal reasoning itself.

Until then, prudence remains the rule: treat AI as outside the circle of trust unless and until your contracts, or your code, bring it within.

Emerging Areas to Watch

The next phase of the privilege debate will likely unfold in courtrooms and rulemaking bodies rather than law reviews. Several unresolved questions are beginning to surface in litigation and policy circles:

1. Test cases on prompt discovery. No appellate court has yet decided whether AI prompts or chat transcripts reflecting attorney reasoning qualify as privileged or discoverable work product. The first rulings could define how far confidentiality extends into machine-assisted drafting.

2. Vendor subpoenas and data demands. Regulators and opposing parties may soon test whether AI providers can be compelled to produce logs or model data containing client information — an issue that could reshape cloud-contract drafting for law firms.

3. Insurance and malpractice coverage. Professional-liability carriers are beginning to ask how AI use affects risk exposure and privilege breaches. Expect updated policy exclusions and reporting requirements in 2026.

4. Global regulatory overlap. The European Union’s Artificial Intelligence Act and the U.S. AI Safety Institute’s guidelines are both poised to influence how cross-border privilege and data retention are interpreted. These frameworks could effectively codify what is now only professional guidance.

5. Model audits and privilege walls. Firms are exploring “privilege walls” for AI — internal auditing layers ensuring that privileged material never reaches public models. These technical and procedural controls may soon become best practice; a minimal sketch of the idea follows this list.
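
A privilege wall can be sketched as a gating layer that refuses to forward matter-tagged content to any public backend. Everything in the example below (the matter tags, the exception, the query_fn interface) is hypothetical; a production wall would consult matter-management metadata and classifier checks rather than a hard-coded set.

```python
# A minimal sketch of a "privilege wall": a gating layer that refuses to
# forward content tied to privileged matters to any public model backend.
# Matter tags, exception, and query_fn interface are all illustrative.

PRIVILEGED_MATTERS = {"matter-2024-0042", "matter-2025-0107"}  # illustrative tags

class PrivilegeWallError(Exception):
    """Raised when privileged matter content targets a public backend."""

def gated_query(prompt: str, matter_id: str, backend: str, query_fn):
    """Allow the call only if the matter is unprivileged or the backend is private."""
    if matter_id in PRIVILEGED_MATTERS and backend == "public":
        raise PrivilegeWallError(
            f"{matter_id}: privileged material may not leave the private deployment"
        )
    return query_fn(prompt)
```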

Together, these frontiers suggest that the meaning of privilege in an AI-augmented practice is still being written — sometimes by coders as much as by courts.

My Take

This is a topic on which I have a strong view. While AI should not yet be treated as a licensed lawyer, some level of privilege should attach to the inputs and outputs that people create with AI when seeking legal guidance. I hold the same view in the context of therapy and medicine.

For many, AI is and will continue to be the only affordable access to professional advice. It is reasonable for people to believe that their conversations with AI about legal or personal issues will remain confidential.

Yes, most users rely on non-confidential consumer versions of AI, and their data is stored on cloud servers and sometimes used for training. But that should not mean their words are open to search, seizure, or disclosure. The law should recognize a boundary between data collection and professional privacy, even when the professional is a machine.

What do you think? Leave a comment below.

Sources

ABA Formal Opinions 477R, 498, 512 | ABA Jurimetrics 2024 | arXiv | Bloomberg | CNBC | Columbia Law Review | D.C. Bar | EU Artificial Intelligence Act | Fazmic Law | Financial Times | Florida Bar | The Guardian | Holland & Knight | IAPP | Ifrah Law | Justia Law | Law360 | LawNext | McGuireWoods | NPR | Norton Rose Fulbright | NYU Journal of Intellectual Property & Entertainment Law | NYU Law Review | Oregon State Bar | Reuters | Solicitors Regulation Authority (UK) | SpringerLink | SSRN | U.S. AI Safety Institute

Disclosure
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
