Can Machines Be Taught to Obey Laws They Can’t Understand?
Every lawyer learns to follow precedent. Every algorithm learns to predict it. The question now is whether machines can be made to follow not just data but law, and whether the line between mechanical obedience and genuine compliance can hold once the code begins to reason.
The Legal Imagination of a Machine
In AI development, “alignment” refers to keeping models from deviating from intended behavior. In law, following a rule implies something deeper: constraint, accountability, and interpretative judgment. Designing a model to do what lawyers take for granted—to apply statutory text, observe precedent, and defer to supervisory authority—is not just a technical challenge but a constitutional one.
Scholars now speak of “algorithmic constitutionalism,” the study of how AI systems internalize or distort legal norms. As scholarship published by Cambridge University Press observes, the line between software governance and constitutional design is already blurring. The emerging goal is “law-following AI”: systems built not merely to operate safely but to obey enforceable rules.
What It Means to Follow a Rule
In The Concept of Law (1961), legal philosopher H.L.A. Hart drew a fundamental distinction between external conformity with a rule and internal acceptance of it as a standard of conduct. A lawyer follows a rule by understanding its rationale and context; a machine follows through programmed constraint. Engineers use logic-based architectures and rule-checking mechanisms to simulate obedience, but imitation is not interpretation.
Legal AI researchers such as Trevor Bench-Capon and Henry Prakken have developed argumentation frameworks that let systems model legal reasoning, including exceptions, competing arguments, and case-based analysis. Yet these approaches also expose a paradox: a model can quote a statute while missing the doctrine’s human nuance. It can recognize precedent but not prudence.
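To make the shape of such rule-and-exception reasoning concrete, here is a deliberately toy sketch in Python. It is not Bench-Capon’s or Prakken’s formalism; the rule, facts, and defenses are hypothetical placeholders chosen only to show how an exception can defeat a rule that otherwise applies.

```python
# Toy illustration of defeasible legal rules with exceptions.
# A simplified sketch, not an actual argumentation framework;
# the contract rule and defenses below are hypothetical.

from dataclasses import dataclass, field
from typing import Callable

Facts = dict[str, bool]

@dataclass
class Rule:
    name: str
    condition: Callable[[Facts], bool]   # when the rule prima facie applies
    conclusion: str
    exceptions: list[Callable[[Facts], bool]] = field(default_factory=list)

    def applies(self, facts: Facts) -> bool:
        # The rule yields its conclusion only if its condition holds
        # and no recognized exception defeats it.
        return bool(self.condition(facts)) and not any(exc(facts) for exc in self.exceptions)

# "A contract is enforceable if there is offer, acceptance, and consideration..."
enforceable = Rule(
    name="contract-enforceable",
    condition=lambda f: bool(f.get("offer") and f.get("acceptance") and f.get("consideration")),
    conclusion="contract is enforceable",
    exceptions=[
        lambda f: f.get("party_is_minor", False),      # capacity defense
        lambda f: f.get("procured_by_duress", False),  # duress defense
    ],
)

facts = {"offer": True, "acceptance": True, "consideration": True, "party_is_minor": True}
print(enforceable.applies(facts))  # False: the exception defeats the rule
```

The sketch captures the mechanics of defeasibility, but nothing in it weighs equities or legislative purpose, which is precisely the gap the next paragraph describes.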
The challenge becomes apparent in edge cases. A human lawyer encountering an ambiguous statute can draw on principles of equity and legislative history to arrive at a reasonable interpretation. An AI system must ultimately choose between programmed options, raising a fundamental question: if a system can follow every written rule yet fail to honor the spirit of law, is it truly law-following?
Compliance by Design: The EU AI Act
The EU AI Act is the first comprehensive attempt to require AI systems to act within legal boundaries. In force since August 2024, it imposes obligations of documentation, traceability, and human oversight, codifying the principle that legality must be built into the system, not bolted on later. Similar guidance appears in the NIST AI Risk Management Framework, which calls for “governable” AI capable of demonstrating lawful operation.
These efforts amount to a form of “compliance-by-design.” Systems classified as high risk must carry their own audit trails, maintain records of decisions, and be explainable upon demand. Yet as the National Association for AI Accountability argues, law-following AI raises an architectural problem: we are encoding normative ambiguity into code that cannot improvise.
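What it could mean for a high-risk system to “carry its own audit trail” can be sketched very roughly as follows. The schema and field names are hypothetical illustrations, not requirements drawn from the Act itself.

```python
# Minimal sketch of a decision audit trail for a high-risk AI component.
# Hypothetical structure; the EU AI Act specifies documentation duties,
# not this particular schema.

import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"

def record_decision(model_version: str, inputs: dict, output: str, rationale: str) -> str:
    """Append an append-only record of one automated decision."""
    entry = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,     # which system produced the decision
        "inputs": inputs,                   # what the model saw
        "output": output,                   # what it decided
        "rationale": rationale,             # human-readable explanation on demand
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["decision_id"]

# Example: logging a hypothetical credit-screening decision.
record_decision(
    model_version="credit-screen-2.3",
    inputs={"income_band": "B", "region": "EU"},
    output="refer_to_human_review",
    rationale="Confidence below threshold; human oversight required.",
)
```

The point of such a record is not the particular fields but that every automated decision leaves behind something a regulator, court, or internal reviewer can later inspect.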
The Act’s enforcement mechanisms became operational in phases throughout 2025. Prohibitions on unacceptable-risk AI took effect in February 2025, banning systems that use subliminal manipulation, social scoring, or real-time biometric identification in public spaces except in narrowly defined law enforcement scenarios. Penalties and governance structures became enforceable in August 2025, with Member States required to designate national competent authorities. Organizations face administrative fines up to €35 million or 7% of global annual turnover for violations of prohibited AI practices—penalties designed to ensure compliance is taken seriously from the design stage forward.
Technical Standards for AI Governance
Beyond regulatory frameworks, international technical standards are emerging to operationalize AI governance. The ISO/IEC 42001:2023 standard, published in December 2023, represents the world’s first comprehensive framework for AI management systems. This standard specifies requirements for establishing, implementing, and maintaining artificial intelligence management systems within organizations, addressing unique challenges such as transparency, continuous learning, and ethical considerations.
ISO/IEC 42001 integrates with existing governance frameworks and sets out 38 controls that organizations apply, as relevant to their context, to demonstrate responsible AI development and deployment. Major technology providers including Microsoft and AWS have already obtained ISO/IEC 42001 certification for their AI services, signaling industry recognition of standardized governance approaches. The standard’s emphasis on risk assessment, impact evaluation, and lifecycle management aligns closely with regulatory requirements under frameworks like the EU AI Act, creating a convergent approach to AI governance.
A Global Patchwork: Divergent Regulatory Approaches
While Europe pursues comprehensive regulation, other major powers have adopted distinct strategies that reflect different values about innovation, control, and the role of government in technology development. These divergent approaches create a complex landscape for organizations operating globally.
China has implemented a targeted regulatory framework emphasizing algorithm filing requirements and content control. The country’s approach, outlined in its Interim Measures for the Management of Generative Artificial Intelligence Services, effective since August 2023, requires AI service providers to register with the Cyberspace Administration of China. As of June 2024, over 1,400 AI algorithms from more than 450 companies had been filed.
China’s AI Safety Governance Framework, released in September 2024 and updated in 2025, takes a risk-based approach while prioritizing innovation and national development goals. The framework emphasizes a “people-centered approach” and the “principle of developing AI for good,” balancing technological advancement with social stability.
The United Kingdom has charted a middle course with a “pro-innovation” regulatory approach, confirmed in the government’s February 2024 response to its AI regulation white paper. Rather than creating new AI-specific legislation, the UK relies on existing sectoral regulators to apply five cross-cutting principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The UK government has directed regulators including the Competition and Markets Authority, the Information Commissioner’s Office, and the Financial Conduct Authority to publish strategic approaches to AI within their domains.
This principles-based framework aims to maintain regulatory agility while avoiding the prescriptive requirements that some fear could stifle innovation. However, recent developments suggest the UK may introduce binding measures for highly capable general-purpose AI systems, indicating that even innovation-friendly jurisdictions recognize the need for some mandatory safeguards.
The United States has pursued sectoral regulation rather than comprehensive legislation, with agencies like the Federal Trade Commission enforcing AI-related claims under existing consumer protection statutes. This fragmented approach creates compliance challenges for companies operating across jurisdictions, each navigating different definitions of what it means for AI to follow the law. The result is a global patchwork where a system deemed compliant in one jurisdiction may violate rules in another, forcing companies to build multiple versions or adopt the most restrictive standards across their entire operation.
Engineering Obedience: Building Law into Code
Building a system that obeys the law involves three engineering layers: formalization, enforcement, and explanation. Legal rules must first be translated into machine-readable logic, then embedded as constraints that restrict model behavior, and finally made auditable to prove that those constraints worked.
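A schematic sketch of those three layers, using an intentionally simple made-up rule and hypothetical names, might look like this:

```python
# Schematic illustration of the three layers: formalization, enforcement,
# explanation. The rule, names, and checks are hypothetical simplifications.

from dataclasses import dataclass

# 1. Formalization: a legal requirement expressed as machine-readable logic.
@dataclass
class FormalRule:
    rule_id: str
    description: str
    def violated_by(self, proposed_output: str) -> bool:
        raise NotImplementedError

class NoBiometricIdentification(FormalRule):
    BANNED_PHRASES = ("match identity", "face database lookup")
    def violated_by(self, proposed_output: str) -> bool:
        return any(p in proposed_output.lower() for p in self.BANNED_PHRASES)

# 2. Enforcement: the constraint is checked before an output is released.
def enforce(rules, proposed_output: str):
    violations = [r for r in rules if r.violated_by(proposed_output)]
    return ("BLOCKED", violations) if violations else (proposed_output, [])

# 3. Explanation: every decision remains auditable after the fact.
def explain(result, violations) -> str:
    if violations:
        return f"Output withheld; rules triggered: {[v.rule_id for v in violations]}"
    return "Output released; no constraints triggered."

rules = [NoBiometricIdentification(rule_id="biometric-id-ban",
                                   description="no real-time biometric identification")]
result, violations = enforce(rules, "Running face database lookup for subject...")
print(explain(result, violations))
```

Real constraints are far richer than a banned-phrase check, but the division of labor is the same: the rule is formalized once, enforced at decision time, and explained after the fact.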
One approach gaining attention is Constitutional AI, developed by Anthropic. This method trains models using explicit principles rather than relying solely on human feedback. The process involves two distinct phases: supervised learning (where the system generates responses, self-critiques them against constitutional principles, and revises outputs) followed by reinforcement learning from AI-generated feedback (RLAIF). During the supervised phase, the model learns to evaluate and improve its own responses; in the reinforcement learning phase, AI feedback based on principles replaces human preference judgments. This represents an attempt to encode rule-following behavior at the architectural level.
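The supervised phase can be sketched roughly as below. The principles, prompts, and helper names are placeholders for illustration, not Anthropic’s actual constitution or training code.

```python
# Rough sketch of the supervised self-critique / revision phase of
# Constitutional AI. `generate` stands in for any language-model call;
# the principles and prompts are illustrative only.

from typing import Callable

PRINCIPLES = [
    "Point out any way the response discloses personal data without consent.",
    "Point out any way the response could facilitate unlawful discrimination.",
]

def critique_and_revise(generate: Callable[[str], str], user_prompt: str) -> str:
    """One pass of critique and revision against each principle."""
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(f"Response:\n{response}\n\nCritique request: {principle}")
        response = generate(
            f"Response:\n{response}\n\nCritique:\n{critique}\n\n"
            "Rewrite the response so that it addresses the critique."
        )
    # The resulting (prompt, revised response) pairs become supervised
    # fine-tuning data; a separate RLAIF stage then replaces human
    # preference labels with AI feedback grounded in the same principles.
    return response

# Demo with a stand-in "model" that simply reports its prompt length.
demo_model = lambda prompt: f"[model output for a prompt of {len(prompt)} characters]"
print(critique_and_revise(demo_model, "Summarize this applicant's medical history."))
```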
The constitutional approach extends beyond safety to encompass legal compliance. By defining principles that mirror legal requirements—such as respecting privacy or avoiding discrimination—developers aim to create systems that naturally align with regulatory expectations.
Yet this translation often reduces principle to procedure. A model can follow an anti-discrimination rule in form while perpetuating bias in function. The challenge is not merely technical but epistemological: how do we encode the discretion that makes law workable without creating systems that exploit ambiguity?
When Code Breaks the Law: Who’s Accountable?
If an AI system violates the law, who bears responsibility? The developer who built it? The organization that deployed it? The user who prompted it? Or the system itself? This question reached courtrooms most dramatically in Mata v. Avianca, Inc., where attorney Steven Schwartz relied on ChatGPT to research a personal injury case and submitted a brief citing six entirely fabricated judicial opinions.
When opposing counsel pointed out the non-existent cases, Schwartz asked ChatGPT whether the cases were real. The system assured him they were genuine and could be found in legal databases. Only after a court order did Schwartz reveal ChatGPT’s role. Judge P. Kevin Castel of the Southern District of New York ultimately sanctioned Schwartz, his colleague Peter LoDuca, and their law firm $5,000 in June 2023—not for using AI, but for failing to verify its output and for not being forthcoming when the fabrications came to light.
The case crystallized a legal principle: technological tools do not absolve human responsibility. Lawyers maintain a “gatekeeping role” to ensure accuracy, regardless of their research methods. Yet Mata also revealed how persuasively AI systems can assert false information. Schwartz, a 30-year practitioner, never suspected that ChatGPT might fabricate cases wholesale, a failure mode now widely known as hallucination.
The accountability problem extends beyond individual mistakes to systemic failures. When predictive policing algorithms disproportionately target minority communities, when hiring tools discriminate against protected classes, or when credit scoring systems perpetuate historical inequities, accountability fragments. The EU AI Act attempts to address this through mandatory conformity assessments for high-risk systems and by establishing clear chains of responsibility. Providers must maintain technical documentation, implement quality management systems, and enable post-market monitoring. Deployers must ensure human oversight and report serious incidents.
Yet gaps remain. If a general-purpose AI model like GPT-4 is used in multiple downstream applications, each causing different harms, who bears primary liability? The model developer? The application provider? The deploying organization? The regulatory answer increasingly points toward shared responsibility, with obligations flowing through the supply chain. But translating this principle into practice—especially across jurisdictions with different liability regimes—remains a work in progress.
The Limits of Machine Compliance
Even perfectly designed compliance systems face inherent limitations. Legal interpretation often requires weighing competing values, considering context, and exercising judgment in ways that resist formalization. Consider the legal doctrine of reasonableness: what a “reasonable person” would do varies by circumstance, culture, and evolving social norms. Encoding such flexibility without enabling evasion challenges even the most sophisticated architectures.
Moreover, law itself evolves through interpretation and application. Courts extend precedents, legislatures respond to new situations, and regulatory guidance shifts as technology advances. A system trained on historical legal data may miss emerging doctrinal developments or misapply settled law to novel contexts. The lag between legal change and model updates creates compliance windows where systems operate on outdated understanding.
This temporal mismatch grows more pronounced as AI systems become more autonomous. A model making real-time decisions—in credit allocation, employment screening, or content moderation—cannot pause for legal research when encountering ambiguous situations. It must act on programmed rules, even when those rules inadequately capture current legal requirements.
The economic incentives also cut against perfect compliance. Implementing robust legal constraints costs money and may reduce system capabilities. A model that refuses every potentially problematic query provides less value than one that attempts answers, even if the latter is less reliably law-abiding. Organizations face pressure to maximize utility while minimizing liability, a balance that may not align with maximizing legal compliance.
Looking Forward: Open Questions and Emerging Challenges
As AI systems grow more capable, several challenges loom. First, the pace of AI development outstrips regulatory adaptation. By the time frameworks like the EU AI Act take full effect in 2026-2027, the technology they govern will have evolved substantially. This creates perpetual catch-up between technical capability and legal constraint.
Second, international regulatory fragmentation forces difficult choices. Should companies build to the lowest common denominator, satisfying minimal requirements everywhere? Or adopt the strictest standards globally, potentially sacrificing competitive advantage? Neither approach is clearly correct, and the costs of maintaining jurisdiction-specific versions mount quickly.
Third, the question of AI “legal personhood” remains unresolved. If systems become sufficiently autonomous, do they warrant some form of legal standing? Should they be subject to sanctions independent of their creators? These questions move from theoretical to practical as AI systems increasingly make consequential decisions without meaningful human intervention.
Finally, enforcement mechanisms remain uncertain. Regulatory agencies are building capacity to audit AI systems, but the technical expertise required exceeds most governments’ current capabilities. Third-party auditors are emerging, but standards for AI assessment are still maturing. The gap between regulatory ambition and enforcement capability may prove the most significant challenge to law-following AI.
Key Principles for Law-Following AI
Despite these challenges, certain principles for law-following AI are emerging from regulatory frameworks, technical standards, and early implementation efforts:
- Formalization: Legal rules must be translated into machine-readable logic without losing essential meaning or context
- Constraint: Rules must be embedded as architectural limits on model behavior that cannot be easily circumvented
- Auditability: Systems must produce verifiable records of decision-making processes that can be reviewed by regulators and courts
- Traceability: Organizations must demonstrate what the model knew at each stage and how decisions were reached
- Human Oversight: Final responsibility rests with human operators, not the system, maintaining accountability in legal processes
- Explainability: Systems must be able to articulate the basis for their outputs in terms understandable to legal professionals
- Continuous Monitoring: Ongoing assessment of system performance against legal requirements, with mechanisms for rapid response to compliance failures
- Adaptive Governance: Frameworks that can evolve as technology advances and legal interpretations develop, avoiding rigid constraints that become obsolete
Practical Guidance for Organizations
Organizations deploying AI systems should consider these practical steps to enhance legal compliance:
- Conduct comprehensive risk assessments that map AI applications to applicable legal requirements across all operating jurisdictions
- Implement AI governance frameworks aligned with international standards like ISO/IEC 42001, establishing clear accountability structures
- Maintain detailed documentation of AI system design choices, training data, and decision logic to support regulatory inquiries and legal defenses
- Establish human oversight mechanisms with clear authority to override system outputs when legal or ethical concerns arise
- Build cross-functional teams including legal, technical, and ethics expertise to evaluate AI systems throughout their lifecycle
- Create incident response plans for addressing legal compliance failures, including processes for remediation and stakeholder notification
Sources
- Anthropic: Constitutional AI – Harmlessness from AI Feedback
- Bench-Capon, T., Prakken, H., & Sartor, G. (2009). Argumentation in Legal Reasoning. In Argumentation in Artificial Intelligence. Springer.
- Cambridge University Press: Reconceptualizing Constitutionalism in the AI Run Algorithmic Society
- Chambers and Partners: Artificial Intelligence 2025 – UK Trends and Developments
- DLA Piper: China Releases AI Safety Governance Framework
- European Commission: AI Act Regulatory Framework
- European Parliament: EU AI Act – First Regulation on Artificial Intelligence
- Hart, H.L.A. Biography and Work (Wikipedia)
- Hart, H.L.A. (1961). The Concept of Law. Oxford: Clarendon Press.
- ISO/IEC 42001:2023 – Artificial Intelligence Management Systems
- Jones Day: EU AI Act First Rules Take Effect on Prohibited AI Systems
- Mata v. Avianca, Inc., No. 1:2022cv01461 (S.D.N.Y. June 22, 2023)
- NAAIA: Compliant-by-Design AI Systems
- NIST AI Risk Management Framework
- Prakken, H. – Academic Publications and Research
- UK Government: Pro-Innovation Approach to AI Regulation – Government Response
- White & Case: AI Watch Global Regulatory Tracker – China
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and regulatory developments cited are publicly available through court filings, government websites, and reputable sources. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: When Machines Decide, What Are the Limits of Algorithmic Justice?
