Should Governments Use AI to Write Laws? Some Already Are
Experiments from the UAE to Estonia are reshaping how laws are drafted, but oversight remains the critical question.
Artificial intelligence is entering one of government’s most sensitive domains: the drafting of laws. Around the globe, governments are testing systems that can suggest legal language, catch errors, harmonize confusing terms, and even propose new amendments. The promise is compelling: fewer mistakes, faster translations, and more consistent legal codes. The challenge is equally significant: ensuring transparency, fairness, democratic control, and adequate oversight of machine-generated recommendations.
Real experiments are already underway. The UAE has launched a “Regulatory Intelligence Office” designed to bring AI into the lawmaking process, helping draft, review, and update legislation. Countries from Singapore to Estonia are running their own trials, and some U.S. state legislatures are testing tools that track amendments in real time. The central question facing policymakers: which aspects of lawmaking should remain exclusively human, and which can machines safely assist with? (CIO)
What AI-Assisted Legislative Drafting Entails
These systems function as sophisticated writing assistants for lawmakers. AI can summarize existing laws, verify that definitions match across documents, suggest grammar corrections, spot inconsistencies, and sometimes even draft complete clauses. But a critical distinction remains: no government currently allows AI to create legally binding laws autonomously. Human review, editing, and formal approval remain mandatory. The UAE’s experiments have drawn attention because they attempt to integrate AI into legislative workflows while preserving human authority over final decisions. (IE University)
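One of the assistive tasks named above, checking that definitions match across documents, is simple to illustrate. The sketch below is a deliberately minimal toy: the `"term" means ...` pattern and the sample texts are illustrative assumptions, not any jurisdiction's actual drafting convention.

```python
import re

# Toy pattern for definitions of the form: "term" means <definition>.
DEF_PATTERN = re.compile(r'"([^"]+)" means ([^.]+)\.')

def extract_definitions(text):
    """Map each quoted defined term to its definition text."""
    return {term.lower(): definition.strip()
            for term, definition in DEF_PATTERN.findall(text)}

def find_conflicts(doc_a, doc_b):
    """Return terms defined in both documents with differing wording."""
    defs_a, defs_b = extract_definitions(doc_a), extract_definitions(doc_b)
    return {term: (defs_a[term], defs_b[term])
            for term in defs_a.keys() & defs_b.keys()
            if defs_a[term] != defs_b[term]}

statute = '"vehicle" means any motorized conveyance.'
amendment = '"vehicle" means any motorized or electric conveyance.'
print(find_conflicts(statute, amendment))  # flags the conflicting "vehicle" definitions
```

Real drafting tools use far richer linguistic models than a regex, but the shape of the task is the same: extract, align, and compare defined terms, then surface conflicts for a human drafter to resolve.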
The Case for Automation
Legislative drafting is notoriously slow and error-prone. Legal drafters spend considerable time checking cross-references, ensuring terminology remains consistent, and coordinating between departments. AI offers the potential to accelerate these processes dramatically. The technology can draft faster, apply terms more consistently, catch reference errors that humans overlook, and provide enhanced support in jurisdictions requiring laws in multiple languages. The UAE government projects it could reduce legislative drafting time by up to 70 percent. (Al Suwaidi Law Firm)
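Cross-reference errors are the most mechanical of these problems, which is why they are a natural first target for automation. The sketch below assumes a simplified heading format ("Section N.") and reference wording ("under Section N"), purely for illustration:

```python
import re

def dangling_references(bill_text):
    """Return section numbers that are cited but never defined in the bill."""
    defined = set(re.findall(r'(?m)^Section (\d+)\.', bill_text))
    cited = set(re.findall(r'(?:see|under|in) Section (\d+)', bill_text))
    return sorted(cited - defined, key=int)

bill = """Section 1. Definitions apply as stated in Section 3.
Section 2. Penalties under Section 1 are doubled under Section 5."""
print(dangling_references(bill))  # ['3', '5'] — neither section exists
```

A production system would parse the bill's real structure rather than match text patterns, but the check itself, citations minus definitions, is exactly the kind of exhaustive verification humans tend to miss.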
Significant Risks Emerge
Training AI on historical laws and court decisions risks perpetuating existing biases and inequalities. Systems may favor legal language from well-documented jurisdictions while marginalizing others. AI can also “hallucinate,” producing clauses or logic that collapse under scrutiny if not carefully verified.
Guidance from standards bodies warns against automation bias, meaning the overreliance on AI outputs by human reviewers, which can erode meaningful oversight if not actively mitigated. The U.S. National Institute of Standards and Technology’s AI Risk Management Framework and its SP 1270 report on bias both emphasize the need for measurable safeguards, bias testing, and continuous evaluation of data sources throughout the AI lifecycle.
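Automation bias is measurable in principle. One crude but concrete signal is the share of AI suggestions a reviewer accepts verbatim; a queue where humans almost never change the machine's text deserves scrutiny. The sketch below is an assumption-laden illustration, and the 0.9 alert threshold is invented for the example, not a figure from the NIST AI RMF or SP 1270:

```python
def verbatim_acceptance_rate(reviews):
    """reviews: list of (ai_output, final_text) pairs after human review."""
    if not reviews:
        return 0.0
    unchanged = sum(1 for ai_output, final in reviews if ai_output == final)
    return unchanged / len(reviews)

def automation_bias_alert(reviews, threshold=0.9):
    """Flag review queues where humans almost never alter the AI text."""
    return verbatim_acceptance_rate(reviews) >= threshold
```

The point is less the arithmetic than the practice: oversight claims become auditable only when the review pipeline records what the model proposed and what the human actually approved.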
Democratic Legitimacy and Constitutional Concerns
Incorporating AI into legislative drafting raises fundamental questions about democratic governance. Who bears ultimate responsibility for decisions? Can citizens trace how legislation was created? How does algorithmic involvement affect the separation of powers?
Transparency mechanisms become essential. Freedom-of-information rules and access-to-documents regimes take on heightened importance. Some scholars advocate for “co-governance,” embedding democratic oversight directly into AI regulatory frameworks. (Harvard Law Review: co-governance)
In Europe, the AI Act sets requirements for high-risk AI systems, including human oversight, documentation, logging, and transparency. It is now on a staged application timeline through 2026–2027, with obligations for general-purpose AI taking effect in 2025. (EUR-Lex: AI Act | European Commission: AI Act)
Human Oversight: From Principle to Proof
Mandating oversight is only the start; making it measurable and effective is harder. Practical guidance stresses “meaningful human review” by staff with authority and expertise to challenge AI, plus audit trails that demonstrate when and how oversight occurred. (ICO Guidance: human oversight | NIST AI RMF: GAI Profile)
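What might such an audit trail look like in practice? The record shape below is a hedged sketch, not a schema mandated by the ICO or NIST: each entry ties a named human decision to the exact texts involved, with a hash that makes after-the-fact tampering detectable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class OversightRecord:
    prompt: str        # instruction given to the model
    ai_output: str     # text the model produced
    reviewer: str      # named human with authority to reject
    decision: str      # "accepted", "edited", or "rejected"
    final_text: str    # what actually entered the draft
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self):
        """Tamper-evident hash binding the decision to the exact texts."""
        payload = "|".join(
            [self.prompt, self.ai_output, self.decision, self.final_text])
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Storing the fingerprint alongside each entry means an auditor can later verify that the logged prompt, output, and approved text are the ones the reviewer actually attested to.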
Beyond Drafting: AI in Regulatory Design and Delivery
Governments are also applying AI beyond statute drafting, shaping how rules are designed, targeted, and enforced. International guidance highlights both the upside (tailored regulation, better evaluations) and the need to avoid a false sense of security from superficial “human-in-the-loop” controls. (OECD: Regulatory Design & Delivery)
International Norms and Summits
Global frameworks are emerging. The Council of Europe adopted a technology-neutral, binding convention to align AI with human rights, democracy, and the rule of law, with scope and security carve-outs. (Council of Europe: Framework Convention)
Summit diplomacy is also shaping expectations. The 2024 AI Seoul Summit produced a leaders’ declaration and a statement of intent on safety-science cooperation, while France’s 2025 AI Action Summit convened heads of state and industry to advance implementation and international alignment. In the United States, the 2023 Executive Order on Safe, Secure, and Trustworthy AI, though rescinded in January 2025, influenced legislative frameworks worldwide. (Seoul Declaration, MOFA Korea | Seoul Statement of Intent | Élysée: AI Action Summit)
Capacity and Readiness Gaps
Institutional readiness varies widely. Many legislatures and agencies face skills shortages, legacy IT, and data quality issues. Comparative indices and governance reviews point to uneven maturity across countries, and caution that benefits will lag without investment in people, data, and infrastructure. (World Bank: GovTech Maturity Index | OECD: Governing with AI)
Frontier Models and Threshold Legislation
Some jurisdictions have explored threshold-based legislation for highly capable “frontier” models. In 2024, California advanced SB 1047 before a gubernatorial veto, spotlighting the challenge of balancing innovation with enforceable safety obligations and institutional capacity. (Office of the Governor: SB 1047 Veto Message)
Public Attitudes and Political Legitimacy
Public acceptance is an essential constraint. Recent surveys show increased concern about AI’s societal impacts and strong preferences for transparency and accountability in public-sector deployments, factors that can influence legislative appetite and the durability of reforms. (Pew Research Center: Public & Experts on AI | Pew Research Center: Americans & AI)
Recommended Safeguards for Legislatures
Governments adopting AI in lawmaking require robust protective measures. Essential safeguards include:
- Maintaining detailed logs of every prompt and instruction.
- Requiring formal human attestation of review.
- Making processes publicly accessible.
- Employing adversarial testing teams to identify vulnerabilities.
- Conducting side-by-side comparisons of AI and human drafts.
- Ensuring AI training relies exclusively on properly licensed or public legal materials rather than biased or proprietary datasets.
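The side-by-side comparison safeguard lends itself to a simple quantitative check. The sketch below, an illustration rather than any office's actual methodology, scores how far a human-approved final text diverges from the AI proposal; near-zero scores across a whole queue suggest rubber-stamping rather than review.

```python
import difflib

def divergence(ai_draft, final_draft):
    """Return 0.0 for identical texts, up to 1.0 for nothing in common."""
    return 1.0 - difflib.SequenceMatcher(None, ai_draft, final_draft).ratio()
```

Used on its own the score proves nothing, since a good AI draft may genuinely need few changes, but tracked over time it gives oversight bodies a concrete number to interrogate.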
Implementation Framework for Legislative Offices
- Select AI models tested specifically for legal accuracy, bias mitigation, and explainability.
- Establish clear data-governance protocols defining input sources, update frequencies, and accountability standards.
- Develop libraries of validated prompts, filtering mechanisms, and output-rejection criteria.
- Maintain comprehensive records including prompts, alternative suggestions, edits, and reviewer annotations.
- Train staff in adversarial-testing methodologies to identify system limitations and reduce automation bias.
- Create public disclosure templates clearly marking AI-assisted sections.
- Develop incident-response and rollback procedures for addressing flawed outputs.
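The output-rejection criteria in the framework above can be made concrete as an automated gate that runs before any human review. The rules below (a banned-phrase list and a citation check against known sections) are invented for illustration; a real office would maintain its own validated criteria.

```python
import re

# Illustrative rejection rules — assumptions, not a standard rule set.
BANNED_PHRASES = ["as an ai", "i cannot", "[citation needed]"]

def reject_reasons(draft_clause, known_sections):
    """Return a list of reasons to reject an AI-drafted clause outright."""
    reasons = []
    lowered = draft_clause.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            reasons.append(f"model artifact: {phrase!r}")
    for ref in re.findall(r'Section (\d+)', draft_clause):
        if ref not in known_sections:
            reasons.append(f"unverifiable reference: Section {ref}")
    return reasons  # empty list means the clause may pass to human review
```

A gate like this does not replace the human reviewer; it ensures the reviewer's time is spent on substantive legal judgment rather than catching obvious model artifacts.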
Implications for Future Governance
The next generation of legislative drafters, clerks, and policy staff will require new competencies. Prompt engineering, model auditing, and oversight of explainable systems may become as fundamental as traditional statutory drafting skills. Public administration programs are beginning to incorporate these subjects into their curricula.
The emerging model of lawmaking involves collaboration: machines propose possibilities, humans render final decisions, and the public maintains scrutiny over both. The objective is not replacing human judgment but augmenting it. Success requires lawmakers fluent in both legal principles and technology.
My Take
Legislators should use AI to assist with drafting laws at every level of government and in regulatory bodies. Perhaps not in every case yet, but in due course it would be reckless not to, given how capable these systems are becoming.
I understand AI can be biased and alignment can be faulty, but assuming these issues are addressed, there is no doubt in my mind that AI can both accelerate and improve the legislative drafting process. It can analyze precedent, harmonize statutes, and spot inconsistencies faster than any human team.
That said, every final draft must still pass through human review. AI can enhance the lawmaking process, but it should never replace human judgment in deciding what becomes law. After all, laws are enacted for the benefit of humans, not machines.
What do you think? Leave a comment below.
Sources
Al Suwaidi Law Firm | California Office of the Governor | CIO | Council of Europe | Élysée (AI Action Summit) | European Commission (AI Act) | EUR-Lex (AI Act) | Harvard Law Review | IE University | ICO (Guidance on AI & Data Protection) | NIST AI RMF (GAI Profile) | NIST SP 1270 (Bias) | Nortal | OECD (Governing with AI) | Pew Research Center | Republic of Korea MOFA (Seoul Declaration) | White & Case | World Bank
Disclosure
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All sources cited are publicly available through official publications and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.