The Two-Tier AI Justice System: Premium Tools for Lawyers, Free Chatbots for Everyone Else

Artificial intelligence promises to make legal help affordable and accessible for millions who can’t afford a lawyer. But as courts, legal aid organizations, and tech companies race to deploy AI tools, a troubling divide is emerging: sophisticated systems for those who can pay, error-prone chatbots for everyone else.

When the Justice Gap Meets the AI Moment

The “justice gap” is no abstraction: the Legal Services Corporation’s 2022 report found that low-income Americans received no or insufficient legal help for 92 percent of their substantial civil legal problems. In the U.S. and Canada, self-represented litigants (or “self-reps”) dominate family courts, small claims dockets, and many tribunal processes. Yet even as demand has grown, the cost and complexity of law remain steep barriers.

Into this vacuum, AI is being pitched as a bridge. From chatbots that help people identify their legal issues to automated drafting tools for court forms, new systems promise to collapse hours of research and drafting into minutes of guided interaction. But bridging that gap demands more than clever code. It demands reliability, transparency, and crucially, access to the technology itself.

The AI Tools Already in Use

Legal aid organizations, courts, and start-ups are already deploying AI tools to assist self-reps. Issue triage systems prompt users with questions to narrow down their legal issue and procedural path. Tools like JusticeBot, which covers landlord-tenant disputes, combine case databases with rule logic to guide users through complex processes. Hybrid models pair constrained templates with generative components to draft court documents, with user input and human review. Research on these systems shows promise in structured contexts.
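
To make that architecture concrete, here is a minimal sketch of such a hybrid pipeline in Python. The triage rules, the form template, and the generate_narrative() stub are all invented for illustration; they do not reflect how JusticeBot or any particular tool is actually built.

```python
# Illustrative sketch of a hybrid "rules + templates + generative" drafting flow.
# Triage rules, the form template, and generate_narrative() are all invented;
# real tools such as JusticeBot rely on richer case databases and jurisdiction logic.

from dataclasses import dataclass

TRIAGE_RULES = {
    "eviction": ["notice to vacate", "eviction", "landlord"],
    "security_deposit": ["deposit", "damage claim"],
}

FORM_TEMPLATE = (
    "TENANCY FORM (illustrative only)\n"
    "Tenant: {tenant}\n"
    "Issue: {issue}\n"
    "Statement of facts (drafted for human review before filing):\n"
    "{narrative}\n"
)

def triage(description: str) -> str:
    """Rule-based step: map the user's description to a known issue category."""
    text = description.lower()
    for issue, keywords in TRIAGE_RULES.items():
        if any(keyword in text for keyword in keywords):
            return issue
    return "unknown"  # no rule matched; route the user to a human

def generate_narrative(description: str) -> str:
    """Stand-in for the generative component (e.g., an LLM call), constrained
    to a single bounded field so it cannot invent other parts of the form."""
    return f"[DRAFT - NEEDS HUMAN REVIEW] {description.strip()}"

@dataclass
class DraftResult:
    issue: str
    document: str
    needs_review: bool = True  # every draft is gated behind human review

def draft_form(tenant: str, description: str) -> DraftResult:
    issue = triage(description)
    document = FORM_TEMPLATE.format(
        tenant=tenant, issue=issue, narrative=generate_narrative(description)
    )
    return DraftResult(issue=issue, document=document)

result = draft_form("A. Tenant", "My landlord gave me a notice to vacate on three days' notice.")
print(result.issue)      # -> "eviction"
print(result.document)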

Some court systems and legal-aid agencies now offer AI chatbots for procedural questions, eligibility screening, and explanatory help. AI is also helping legal services organizations internally, automating document review, summarizing case files, and triaging requests so human counsel can focus on higher-value work.
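
Eligibility screening is the most mechanical of these tasks, which is why chatbots handle it well. The sketch below shows the basic rule-based logic, assuming an illustrative 125-percent-of-guideline cutoff; the dollar figures are placeholders, not current poverty guidelines.

```python
# Illustrative eligibility screen of the kind a legal-aid intake chatbot runs.
# The dollar figures and the 125% multiplier are placeholders; a real screener
# would load the current year's poverty guidelines and the program's own rules.

BASE_GUIDELINE = 15_000      # placeholder annual guideline for a one-person household
PER_EXTRA_PERSON = 5_400     # placeholder increment per additional household member
INCOME_MULTIPLIER = 1.25     # many programs screen at 125% of the guideline

def income_ceiling(household_size: int) -> float:
    guideline = BASE_GUIDELINE + PER_EXTRA_PERSON * (household_size - 1)
    return guideline * INCOME_MULTIPLIER

def screen(annual_income: float, household_size: int) -> str:
    if annual_income <= income_ceiling(household_size):
        return "Likely eligible: refer to intake staff for a full review."
    return "Likely over income: suggest court self-help and other resources."

print(screen(22_000, 2))  # compares $22,000 against the two-person ceiling
```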

Stanford’s AI & Access to Justice Initiative is collaborating with practitioners to test prototypes and evaluate performance in real contexts. At the ICAIL workshop in June 2025, researchers presented projects focused on AI copilots for self-help, eviction assistance, and document drafting.

Where AI Fails

AI tools often shine when legal problems are relatively standardized and procedural. But more complex disputes raise challenges the technology hasn’t solved. Generative models sometimes produce incorrect or misleading legal citations or reasoning, the phenomenon known as “hallucination.”

A recent incident in British Columbia saw a self-represented party use Microsoft Copilot to generate ten case citations, nine of which were entirely fabricated. The Civil Resolution Tribunal caught the error.
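
Part of the answer may be automation on the court side as well. The sketch below shows how a pre-filing check could flag citations that match no known case, assuming access to an authoritative citation database; the VERIFIED_CITATIONS set and the lookup itself are hypothetical.

```python
# Sketch of a pre-filing citation check of the kind a vetting system might run.
# VERIFIED_CITATIONS stands in for a lookup against an authoritative case-law
# database (e.g., CanLII or a court's own records); everything here is hypothetical.

import re

VERIFIED_CITATIONS = {      # invented entries representing "known real" cases
    "2019 SCC 65",
    "2023 BCCRT 1234",
}

CITATION_PATTERN = re.compile(r"\b\d{4}\s+[A-Z]{2,6}\s+\d+\b")

def flag_unverified_citations(filing_text: str) -> list[str]:
    """Return neutral-style citations in the filing that match no known case."""
    found = CITATION_PATTERN.findall(filing_text)
    return [citation for citation in found if citation not in VERIFIED_CITATIONS]

filing = "The applicant relies on 2019 SCC 65 and 2021 BCSC 9999 in support of this claim."
print(flag_unverified_citations(filing))  # -> ['2021 BCSC 9999'], flagged for human review
```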

Legal systems vary dramatically by province, state, or court. A tool built for one jurisdiction may misapply or omit relevant local deadlines, forms, or procedural paths.

Perhaps most troubling is the emerging two-tier system. Sophisticated, accurate closed-source AI applications are available to law firms and large organizations. Harvey AI, essentially a more dependable ChatGPT for lawyers, is designed for elite firms. These tools are not publicly available to individuals or access-to-justice organizations. While wealthy clients get AI-powered research that rarely hallucinates, self-reps get free ChatGPT, which sometimes does.

The Trust and Literacy Challenge

Early data from the National Self-Represented Litigants Project suggests that self-reps are generally cautious about AI applications like ChatGPT, given their reputation for inaccuracy. Yet the speed and accessibility of these programs appear to be overcoming that hesitation. Programs that process large amounts of legal information quickly, or generate court materials in the proper “voice” and “style,” seem to offer a chance to level the playing field, particularly when the opposing party is represented by counsel.

Many self-reps lack the legal literacy to evaluate AI outputs critically. They can’t spot when a citation seems off, when reasoning contradicts established precedent, or when procedural advice applies to the wrong jurisdiction. This creates a system where those least equipped to verify AI outputs are most dependent on them.

Legal Questions and Professional Resistance

The unauthorized practice of law remains a concern in the many jurisdictions that prohibit nonlawyers from giving legal advice. The line between “information plus forms” and “advice” is blurry, especially when AI systems make suggestions or interpret rights. DoNotPay, which evolved from helping consumers challenge parking tickets to assisting self-represented litigants in small claims court, has won awards for expanding access to justice while facing multiple legal challenges.

In 2023, the company faced class action lawsuits alleging unauthorized practice of law and misrepresentation of its services. In September 2024, DoNotPay settled with the FTC, agreeing to pay $193,000 and to stop claiming its service can substitute for professional services unless it has evidence to back those claims.

Accountability questions remain unresolved. If an AI tool leads a user astray, it’s unclear whether the developer, intermediary, or court bears responsibility. Self-reps entering sensitive facts (family issues, income, immigration status) into AI systems currently have varying levels of data protection depending on which tool they use.

Bar associations have threatened some developers with unauthorized practice complaints. Some lawyers worry that clients will rely on AI advice for high-stakes decisions without understanding its limitations, or that vulnerable populations will be steered toward inadequate self-help when they need human representation. Others see the resistance as protecting a professional monopoly on legal services.

Some U.S. jurisdictions are exploring regulatory sandboxes, temporary safe harbors for testing innovative services while collecting data on consumer protection and access outcomes.

How Courts Are Responding

Some courts are already experimenting with AI within their infrastructure. Canada’s Federal Court is exploring AI-assisted translation and procedural guidance while preserving judicial independence. The court has established principles requiring accountability, respect for fundamental rights, non-discrimination, accuracy, transparency, cybersecurity, and “human in the loop” oversight.

Court administration reforms emphasize low-risk use: assisting users in navigation and document preparation, not in adjudicative decision-making. Essays in the Yale Law Journal call for standardized protocols to allow AI tools, court data, and legal software to interoperate so that innovations can scale responsibly.
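
In practice, “interoperate” would likely start with a shared, machine-readable format for filings and case events. The record below is a hypothetical illustration of that idea, not a schema proposed in the Yale essays or adopted by any court.

```python
# Hypothetical shared record for a court filing, illustrating the kind of
# machine-readable format that would let AI tools, court systems, and legal
# software exchange data. Every field name here is invented for this sketch.

import json
from dataclasses import dataclass, field, asdict

@dataclass
class FilingRecord:
    jurisdiction: str               # e.g., a court identifier such as "US-CA-SUPERIOR"
    case_type: str                  # e.g., "eviction" or "small_claims"
    filing_party: str
    self_represented: bool
    ai_assistance_disclosed: bool   # mirrors the disclosure rules some courts now require
    documents: list[str] = field(default_factory=list)

record = FilingRecord(
    jurisdiction="US-CA-SUPERIOR",
    case_type="eviction",
    filing_party="Jane Doe",
    self_represented=True,
    ai_assistance_disclosed=True,
    documents=["answer.pdf", "fee_waiver_request.pdf"],
)

# Serialized to JSON so any compliant system could consume the same payload.
print(json.dumps(asdict(record), indent=2))
```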

Brazil offers an example of coordinated implementation. Facing a backlog of seventy-eight million lawsuits, the Brazilian National Council of Justice developed a national interoperability framework for AI in courts. By 2023, nearly all judicial cases in Brazil were managed digitally through the Electronic Judicial Process platform. The United States, with fifteen thousand to seventeen thousand different state and municipal courts, has no comparable coordination.

Some judges have declined to read self-reps’ submissions solely because they were AI-generated. The Federal Court of Canada and Nova Scotia’s provincial court now require parties to expressly identify AI use in their submissions. Other jurisdictions have no policy at all. The current patchwork creates confusion about when AI must be disclosed, how courts should respond to disclosure, and what oversight mechanisms exist.

The Funding Question

No sustainable funding model has emerged for public-interest AI legal tools. Legal aid budgets are already stretched thin. Courts face chronic underfunding. Some envision law societies or bar associations funding vetted, accurate AI tools as part of their public-interest mandate. Others propose partnerships between nonprofit AI developers, access-to-justice organizations, and justice system users. The Stanford Legal Design Lab is working with legal aid groups nationwide, building tools through iterative design with the people who will actually use them. But these efforts remain small-scale pilots without guaranteed long-term funding.

Measuring Impact

Field studies are beginning to examine whether AI tools reduce failure rates, errors, or cost among self-represented litigants. Stanford’s Legal Design Lab is developing quality rubrics for evaluating AI’s answers to legal questions, involving both expert legal practitioners and community members in the assessment. Research teams are tracking which legal aid systems and courts are incorporating AI tools and what oversight mechanisms they’re implementing.
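
As an illustration of how such a rubric might translate into a score, here is a small, invented example; the criteria, weights, and ratings are placeholders, not the Legal Design Lab’s actual instrument.

```python
# Invented example of a rubric for scoring an AI answer to a legal question.
# Criteria, weights, and ratings are illustrative, not the Legal Design Lab's
# actual evaluation instrument.

RUBRIC_WEIGHTS = {
    "accuracy": 0.4,          # does the answer state the law correctly?
    "jurisdiction_fit": 0.3,  # does it apply in the user's court or province/state?
    "actionability": 0.2,     # does it tell the user what to do next?
    "plain_language": 0.1,    # can a non-lawyer follow it?
}

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine 0-5 ratings from expert and community reviewers into one score."""
    return sum(weight * ratings[criterion] for criterion, weight in RUBRIC_WEIGHTS.items())

reviewer_ratings = {"accuracy": 4, "jurisdiction_fit": 3, "actionability": 5, "plain_language": 4}
print(round(weighted_score(reviewer_ratings), 2))  # -> 3.9 out of 5
```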

The European Union, United Kingdom, and Australia are developing coordinated approaches to AI in justice systems, including unified standards for AI disclosure and use. U.S. jurisdictions are proceeding independently, with national organizations like the Conference of State Court Administrators, the National Center for State Courts, and the American Bar Association discussing but not yet coordinating voluntary standards.

What Comes Next

Adoption trends suggest that AI will continue shifting from “adjunct tool” to infrastructure in legal practice. The American Bar Association’s Legal Technology Survey shows law firms leaning ever more heavily into AI-driven research, cloud systems, and digital tools.

Firms are already integrating generative AI into daily workflows, especially for document review, drafting, and legal analysis. As law firms morph and competition intensifies, AI’s role in shaping business models will grow, creating pressure for more efficient, technology-enabled service delivery.

At the same time, technical and institutional challenges remain. Empirical work assessing AI legal research systems finds that even closed-source tools hallucinate legal citations at nontrivial rates, raising concerns about overreliance and error propagation. Without robust oversight, quality control, and sustainable funding models, the promise of AI for access to justice may not reach those who need it most. Whether these technologies bridge the justice gap depends on whether they can be scaled transparently, equitably, and in ways that preserve procedural integrity.

My Take

People will use AI for their legal problems no matter how often it hallucinates or misleads. That reliance is already happening and will only grow. The legal system now faces a choice: either provide the public with access to reliable, well-designed AI tools, or brace for the fallout of free ones flooding the courts.

One way or another, the system will bear the cost. AI will likely unleash a wave of self-represented litigants who finally have the means to prepare and file their own materials. Courts will then face an avalanche of error-filled pleadings and procedurally flawed filings produced without lawyer oversight. How to handle that remains unclear, but the solution will have to be twofold: ensure public access to higher-quality AI and create vetting systems that can detect and manage flawed submissions before they reach the docket.

A Note on Sources

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All sources cited are publicly available through court websites, academic publications, and reputable media outlets. Readers should consult qualified counsel for specific legal or compliance questions related to AI use in access-to-justice contexts.

See also: 33 Mistakes Lawyers Make with AI Today
