License to Prompt: Should Passing the Bar Include Proving AI Literacy?

Artificial intelligence is changing what it means to be competent in law. As large language models learn to analyze precedent and draft filings, the profession’s next threshold may not be passing the bar but proving that lawyers can supervise the machines doing the work.

The New Definition of Competence

For more than a century, the bar exam has defined who may practice law. Now, as generative AI reshapes research, drafting, and analysis, regulators are re-examining what competence actually means. The American Bar Association’s 2024 ethics guidance requires lawyers to understand the “capabilities and limitations” of AI tools they use—a line that effectively folds technological literacy into the duty of professional care. The reasoning is straightforward: lawyers may delegate labor, but never accountability.

The National Conference of Bar Examiners will introduce its NextGen exam in 2026, shifting away from rote memorization toward applied skills. The exam will test seven foundational lawyering skills: legal research, legal writing, issue spotting and analysis, investigation and evaluation, client counseling and advising, negotiation and dispute resolution, and client relationship management. It will also cut testing time from 12 hours to nine, spread over a day and a half. What it will not test, at least yet, is whether candidates understand or can oversee artificial intelligence. That omission has sparked debate about whether a lawyer who can prompt an algorithm without understanding it meets the standard of minimum competence.

What Lawyers Need to Know

AI literacy in legal practice is less about coding than about discernment. It includes understanding how generative models create language, when they hallucinate, and how to verify what they produce. ABA Formal Opinion 512 makes clear that lawyers must review, verify, and remain responsible for all AI-assisted work. A 2024 survey of law schools found that 55 percent already teach AI concepts through dedicated courses, while 62 percent have integrated AI learning opportunities into the first-year curriculum. The aim is to train lawyers to audit, not automate.
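To make "audit, not automate" concrete, here is a toy sketch of the verification step in code: a script that flags any citation in an AI-generated draft that cannot be matched against a verified source. The KNOWN_CASES set and the citation pattern are hypothetical stand-ins for a real citator check such as KeyCite or Shepard's, and the citations are abridged for readability; no vendor API is being modeled.

```python
# Toy illustration of "audit, not automate": flag any citation in an
# AI-generated draft that cannot be matched to a verified source.
# KNOWN_CASES stands in for a real citator lookup (KeyCite, Shepard's);
# citations are abridged and the regex is deliberately naive.
import re

KNOWN_CASES = {
    "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
}

# Rough shape of a case citation: "Name v. Name, reporter cite (court year)"
CITATION = re.compile(r"[A-Z][\w'.]+ v\. [^,]+, [^()]+\([^)]+\)")

def flag_unverified(draft: str) -> list[str]:
    """Return every citation in the draft that fails the lookup."""
    return [c for c in CITATION.findall(draft) if c not in KNOWN_CASES]

draft = (
    "As held in Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), "
    "and reaffirmed in Varghese v. China Southern Airlines, "
    "925 F.3d 1339 (11th Cir. 2019), sanctions are warranted."
)
for citation in flag_unverified(draft):
    print(f"VERIFY BEFORE FILING: {citation}")
# Prints the Varghese cite, one of the fabricated authorities from the
# actual Mata filing, because no verified source confirms it.
```

The point is the workflow, not the regex: nothing AI-drafted goes out the door until every authority it cites has survived a lookup the lawyer controls.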

Practitioners often describe AI literacy as “supervisory competence.” It combines technical skepticism with ethical vigilance—the ability to explain why a model’s answer looks persuasive, yet might still be wrong. In a field built on precedent, that ability to interrogate output is fast becoming a core professional skill. Since the ABA amended Comment 8 to Model Rule 1.1 in 2012 to include understanding “the benefits and risks associated with relevant technology,” 40 states have adopted similar language requiring technological competence as part of the duty to provide competent representation.

The Case for Pre-Admission Testing

Supporters of adding AI literacy to bar testing argue that the license should reflect how law is practiced, not how it was memorized. Public protection demands that new lawyers know the risks of the tools they use. Mata v. Avianca, the 2023 New York sanctions case over hallucinated citations, showed how a single unverified prompt can become an ethical breach. One of the sanctioned attorneys admitted he had been “operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own.” Proponents say a measured assessment of AI oversight would prevent avoidable malpractice before it begins.

Internationally, regulators are moving ahead. The Solicitors Regulation Authority in the U.K. has published a digital-competence framework, and the Singapore Academy of Law released an AI governance playbook stressing auditability and transparency. The EU AI Act codifies accountability standards that already influence professional ethics abroad. Against that backdrop, U.S. licensing bodies risk appearing analog in a digital profession.

Barriers to Entry

Opponents warn that expanding the bar exam could worsen inequity. Bar preparation courses already represent a significant financial burden, with major providers charging between roughly $1,700 and more than $4,000 for comprehensive programs. As of 2025, BARBRI’s courses range from $1,699 for its basic Essentials package to over $4,000 for Elite options, with its Premium package at $2,399, and Themis charges $2,695 for its standard course. Adding AI-specific testing would likely require supplemental study materials and preparation time, further increasing costs that weigh most heavily on students from under-resourced schools.

Critics also note that technology evolves faster than testing committees. By the time an AI literacy module is finalized, the underlying tools may have changed twice. The bar exam, designed to assess timeless skills, struggles with subjects that expire before the ink dries. Practical design poses another problem: should the test measure theoretical knowledge, ethical reasoning, or performance with real tools? Standardizing that across 50 jurisdictions could prove unworkable.

Many educators favor embedding AI scenarios into existing ethics or professionalism sections instead of creating a new exam silo. That approach preserves awareness without building a new barrier to entry.

Signals from Academia

Law schools are already serving as laboratories for AI-competence training. Washington University School of Law now embeds AI instruction in its first-year Legal Research curriculum, ensuring every student gains fluency in both traditional legal research methods and AI tools. The revamped curriculum emphasizes detecting hallucinated or inaccurate content and comparing AI-generated results with traditional research methods.

Berkeley Law introduced an AI-focused LL.M. program in August 2024, covering AI governance, intellectual property in AI-generated content, and the changing regulatory environment. Case Western Reserve University School of Law now requires all first-year students to complete a certification program called “Introduction to AI and the Law,” which provides hands-on experience with AI-powered legal tools alongside the evolving landscape of AI regulation and ethics.

These experiments suggest a bottom-up solution: normalize AI competence through education, not examination. The approach acknowledges that AI literacy is better suited to continuous learning than one-time testing.

Global Context

Beyond the U.S., the movement is accelerating. The U.K.’s Legal Services Board is consulting on digital competence as a licensing standard. Singapore’s framework treats AI literacy as a duty of transparency, requiring lawyers to demonstrate understanding of the systems they deploy. Across Europe, the EU AI Act ties compliance obligations to risk categories, requiring lawyers who work with or advise on high-risk AI systems to prove oversight capability. These models treat understanding AI not as an elective skill but as part of professional ethics itself, creating pressure for U.S. jurisdictions to align with emerging global norms.

The Continuing Education Bridge

In the meantime, continuing legal education has become the primary vehicle for ensuring AI competence among practicing attorneys. Unlike pre-admission testing, CLE requirements apply to all 1.3 million licensed attorneys in the United States, not just new graduates. North Carolina now requires one hour of technology training annually as part of its 12-hour yearly CLE requirement. Florida mandates three hours of technology credits every three years within its 33-hour cycle. In April 2025, New Jersey adopted a requirement for at least one CLE credit in technology-related subjects—including artificial intelligence, cybersecurity, and emerging technologies—within each two-year reporting cycle.

Beyond mandatory hours, state bars are issuing comprehensive guidance on AI use. California, New York, Florida, Pennsylvania, Michigan, and New Jersey have all published formal ethics opinions addressing lawyers’ obligations when using generative AI. These opinions consistently emphasize that existing duties of confidentiality, competence, supervision, and candor to tribunals all apply to AI-assisted work. The guidance creates de facto standards for practicing lawyers without requiring legislative changes to bar admission requirements.

These CLE initiatives serve dual purposes. They address the immediate need to educate currently practicing lawyers who passed the bar before AI became ubiquitous in legal practice. They also function as testing grounds for determining what AI competence looks like in practical terms, informing future decisions about whether and how to incorporate such requirements into licensing exams. It is easier to update CLE syllabi than to rewrite a licensing exam, making continuing education a pragmatic bridge to broader competence standards.

The Practical Reality

The question of AI literacy on the bar exam intersects with broader questions about legal education and professional standards. Law firms increasingly expect new associates to work with AI tools from day one. A 2024 American Bar Association survey found that approximately 30 percent of law offices now use AI-based technology tools, with legal research-specific platforms like Thomson Reuters’ CoCounsel, Lexis+ AI, and Westlaw AI gaining adoption alongside general-purpose tools like ChatGPT. This creates a disconnect: the profession expects AI fluency while licensing bodies do not yet test for it.

For law students, the calculus is shifting. Those attending schools with robust AI training programs gain competitive advantages in the job market. Those at schools without such programs may find themselves at a disadvantage despite passing the same licensing exam. This emerging gap suggests that market forces, rather than regulatory mandates, may drive standardization of AI competence expectations, though such market-driven solutions tend to exacerbate rather than resolve inequities in access.

Where the Line Is Moving

Whether or not AI literacy appears on the bar exam, its shadow already falls across the profession. Courts expect human verification of all AI-generated content. Clients expect technological fluency from their counsel. Firms expect associates who can question machine output with confidence. State bars expect members to understand the tools they deploy. As the NextGen exam debuts in 2026 with its emphasis on practical skills over memorization, competence is being redefined not as knowing law but as knowing how law is being mechanized.

The path forward likely involves multiple approaches working in concert. Law schools will continue expanding AI literacy training in their curricula. State bars will maintain and expand CLE requirements focused on technology competence. International frameworks will create pressure for standardization across jurisdictions. And eventually, licensing bodies may determine that the weight of evidence demands formal testing of AI oversight capabilities before admission to practice. The question is not whether AI competence becomes a condition of licensure but when, and whether the transition happens proactively through thoughtful reform or reactively through crisis.

For now, the profession is navigating an uncomfortable middle ground. New lawyers enter practice with varying levels of AI literacy depending largely on where they attended school and what resources they could afford. Experienced lawyers scramble to meet new CLE requirements in areas where they have little background. And clients, courts, and opposing counsel expect consistent competence regardless of when someone passed the bar or what training they received. Passing the bar may soon mean passing the test of supervision itself, whether that test is formal or functional, administered at admission or demanded throughout a career.

My Take

In theory, adding AI competency to the bar exam makes perfect sense. In practice, it is a logistical nightmare. You cannot exactly hand out ChatGPT accounts during the test. Still, including a few questions on basic AI concepts, ethics, and situational awareness would make sense. Lawyers do not need to know how to code, but they should understand that machines can make things up and that “looks convincing” is not the same as “is true.”

The larger problem is that the bar exam has always rewarded memorization instead of reasoning. It comes from an era when knowledge was stored in your head rather than accessed through a model. The skill that now matters most is prompting, which has quietly become the new IRAC. Law students once drilled “Issue, Rule, Application, Conclusion.” Now they are learning “Context, Instruction, Constraint, Verification.” It is the same reasoning process, only directed toward a machine instead of a grader.
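For illustration, here is what that Context, Instruction, Constraint, Verification structure might look like as a reusable template. The labels, wording, and field names are my own sketch, not any vendor's format or an official framework:

```python
# Illustration only: the CICV structure described above rendered as a
# reusable prompt template. Wording and labels are a hypothetical sketch.
CICV_TEMPLATE = """\
Context: {context}

Instruction: {instruction}

Constraint: Cite only authority you can quote verbatim from the materials
provided. If a point lacks support, say so rather than inventing a case.

Verification: After drafting, list every citation on its own line so a
human reviewer can confirm each one in a citator before anything is filed.
"""

prompt = CICV_TEMPLATE.format(
    context="Motion to dismiss; our client is the defendant; negligence claim.",
    instruction="Summarize the strongest statute-of-limitations arguments.",
)
print(prompt)
```

The Constraint and Verification sections do the supervisory work: they tell the model not to invent authority and force its citations into a form a human can actually check.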

The real shift is from competence to compliance. Bar regulators will eventually realize that proving you can use AI is less important than proving you can supervise it. That is where this is heading—toward required AI-use policies, audit trails, and oversight logs. Passing the bar will soon mean more than knowing torts; it will also mean being able to defend your prompts and outputs if anyone asks how they were created.
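If oversight logs do become the norm, the mechanics are not exotic. Here is a minimal sketch, assuming hypothetical field names, of an append-only audit record that could answer "how was this created?" after the fact: one line per AI interaction, with hashes standing in for the full prompt and output and an explicit marker for human review.

```python
# Minimal sketch of an oversight log, with hypothetical field names: one
# append-only record per AI interaction, enough to reconstruct who prompted
# what, when, and whether a human signed off before the output was used.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_use(path: str, matter: str, prompt: str, output: str,
               reviewed_by: str | None) -> None:
    """Append one audit record; hashes keep it compact and tamper-evident."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter": matter,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewed_by": reviewed_by,  # None means not yet defensible
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("ai_audit.jsonl", "Smith v. Jones",
           "Draft a first set of discovery requests...",
           "DRAFT: Plaintiff requests that Defendant produce...",
           reviewed_by="j.associate")
```

A real firm would store full text, access controls, and matter metadata on top of this; the point is that a defensible trail is a few dozen lines of engineering, not a moonshot.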

The equity problem is impossible to ignore. Bar preparation already costs as much as a used car. Adding AI training modules would only widen the gap between well-funded law schools and everyone else. The first generation of AI-trained lawyers will have a permanent productivity advantage over those who were not. We are already drifting toward a two-tier profession: one that supervises algorithms and one that gets supervised by them.

In the end, whether or not AI literacy appears on the bar exam may not matter much. Every lawyer is already being tested on it, just not on paper. Clients, judges, and firms are quietly grading everyone on how responsibly they use these tools. The results are not scored yet, but make no mistake, the exam has already begun.

A Note on Sources

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: The Two-Tier AI Justice System: Premium Tools for Lawyers, Free Chatbots for Everyone Else
