Can Prosecutors Constitutionally Outsource Charging Decisions to Algorithms?
Artificial intelligence is steadily advancing into one of the justice system’s most sacrosanct functions: the charging decision. As AI adoption in criminal justice accelerates nationwide, prosecutors are beginning to use algorithmic tools to prioritize cases, assess evidence, and recommend charges. But what happens when those tools shift from offering advice to making decisions? The constitutional framework, built on human judgment, accountability, and executive discretion, may not be equipped for a process where a machine determines who is prosecuted and for what crime.
The Constitutional Stakes
Prosecutorial discretion has long been understood as an exercise of human judgment within the executive branch. In Wayte v. United States, the Supreme Court emphasized that charging decisions are “particularly ill-suited to judicial review” because they turn on factors, such as the strength of the case, enforcement priorities, and deterrence value, that courts are poorly positioned to evaluate. At its core, such discretion presupposes a human prosecutor weighing evidence, motive, fairness, and public interest.
But what if an algorithm generates the charging recommendation, or the decision itself? Is the prosecutor still exercising independent discretion, or has the office outsourced that policymaking function to a machine? If human discretion meaningfully disappears, several constitutional concerns arise: due process, equal protection, and separation of powers all may be implicated.
The Due Process Problem: Can Machines Exercise Judgment?
Due-process protections require that individuals not be subject to arbitrary government decisions, and that decisions bear a rational relation to the facts and law. When an algorithm is involved in charging, two key questions emerge: (1) whether the algorithm is transparent and explainable, and (2) whether the human decision-maker retains meaningful control. When the charging decision is mediated by a “black-box” tool, defendants may be deprived of the ability to challenge how and why a charge was selected, raising the specter of automated, unreviewable government action.
In the criminal-justice context, human prosecutors cannot delegate their duty entirely to a machine without risking a due-process violation. The machine’s decision may lack the necessary moral judgment, individualized evaluation, or robust reasoning that human discretion demands. Front-loading of cases through algorithmic gatekeeping (where an algorithm initially determines which cases proceed to human review) creates particular risks when human oversight becomes merely perfunctory rather than genuinely independent.
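To make that structural risk concrete, consider a minimal, purely hypothetical sketch of algorithmic gatekeeping. Every field, score, and threshold below is invented; no actual prosecutorial tool is being described. The point is structural: cases falling below the cutoff never reach a prosecutor at all, and the cases that do arrive pre-sorted with a score that can anchor the reviewer's judgment.

```python
# Hypothetical illustration of algorithmic gatekeeping in a charging workflow.
# All fields, scores, and thresholds are invented for demonstration only.

REVIEW_THRESHOLD = 0.6  # cases scoring below this never reach a prosecutor

cases = [
    {"id": "A-101", "algorithm_score": 0.82},
    {"id": "A-102", "algorithm_score": 0.41},
    {"id": "A-103", "algorithm_score": 0.67},
]

def gatekeep(cases, threshold=REVIEW_THRESHOLD):
    """Split cases into those forwarded for human review and those screened out.

    The constitutional concern: the screened-out pile is decided entirely by
    the model, and the forwarded pile arrives with a score attached that may
    anchor the reviewing prosecutor's judgment.
    """
    forwarded = [c for c in cases if c["algorithm_score"] >= threshold]
    screened_out = [c for c in cases if c["algorithm_score"] < threshold]
    return forwarded, screened_out

forwarded, screened_out = gatekeep(cases)
print("Forwarded to human review:", [c["id"] for c in forwarded])
print("Never seen by a prosecutor:", [c["id"] for c in screened_out])
```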
Equal Protection and Algorithmic Bias
Charging decisions mediated by AI also raise equal-protection concerns. Algorithms trained on historical data reflect the biases embedded in past enforcement and charging practices. For example, a pilot algorithmic tool designed to mask race in charging decisions—the “blind-charging” project—demonstrates both the promise and the risk of algorithmic intervention. (Chohlas-Wood et al., 2021) If an algorithm systematically disadvantages defendants of a protected class (even without intent), can the prosecutor’s office avoid constitutional liability simply by pointing to a “tool”?
Traditional discrimination doctrine, such as the McCleskey v. Kemp framework, is ill-equipped to handle statistical bias introduced by AI. The concern here is not selective prosecution in the classic sense but algorithm-enhanced decision-making that covertly perpetuates disparities. Prosecutors may need to demonstrate meaningful oversight, audit trails, and corrective mechanisms to address such risks.
Who’s in Charge? Accountability When Algorithms Decide
The prosecutor’s charging function is constitutionally executive in character. Delegating it to an algorithm, particularly one developed by a private vendor and not transparent to the public, challenges the accountability regime. If the algorithm decides charges based on internal scoring or proprietary logic, the public cannot hold anyone directly to account. The human prosecutor may become a nominal signatory rather than the decision-maker.
This raises a separation-of-powers concern: if the executive’s prosecutorial power is effectively exercised by opaque systems built outside public view and answerable to no electorate, oversight collapses and the democratic accountability of the process is weakened. When proprietary algorithms involve trade secrets, they create additional discovery challenges for defense counsel seeking to understand and challenge charging decisions, implicating both Brady obligations and due process rights.
When AI Becomes the Decision-Maker
Currently, many AI tools in prosecutorial settings serve as decision-support, helping to prioritize cases, identify evidence, or flag high-risk defendants. But the line is blurring. In California, a statewide mandate implemented in January 2025 requires all prosecutors’ offices to adopt “race-blind” charging practices using algorithmic redaction of police reports. (California Assembly Bill 2778) The California Attorney General’s guidelines outline a two-step process where algorithms automatically redact race-related information before prosecutors make initial charging decisions.
How does this work technically? The Stanford Computational Policy Lab’s software uses natural language processing to identify and redact not just explicit racial identifiers—like names and physical descriptions—but also contextual clues such as neighborhood references, school names, or cultural markers that might reveal race. The algorithm scans police reports, arrest records, and witness statements, replacing identifying information with neutral placeholders before a prosecutor conducts the initial charging evaluation. Only after this “blinded” review does the case proceed to a second evaluation with complete, unredacted information.
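As a rough illustration of what the redaction step involves, the toy sketch below swaps identifying terms for neutral placeholders using simple pattern matching. The term list, placeholders, and report text are all invented; the Stanford software described above relies on far more sophisticated natural language processing, and nothing here is drawn from that system.

```python
import re

# Toy redaction pass: replace identifying terms with neutral placeholders.
# The term list is invented; a production system would use trained NLP models
# rather than a static dictionary.

REDACTION_MAP = {
    r"\bMarcus Johnson\b": "[SUBJECT 1]",
    r"\bOfficer Reyes\b": "[OFFICER 1]",
    r"\bBayview\b": "[NEIGHBORHOOD]",           # neighborhood as a contextual cue
    r"\bWashington High School\b": "[SCHOOL]",  # school name as a contextual cue
}

def redact(report_text: str) -> str:
    """Return the report with identifying terms replaced by placeholders."""
    for pattern, placeholder in REDACTION_MAP.items():
        report_text = re.sub(pattern, placeholder, report_text)
    return report_text

police_report = (
    "Officer Reyes detained Marcus Johnson near Washington High School "
    "in Bayview following a reported theft."
)

blinded = redact(police_report)
print(blinded)
# A prosecutor would make the initial charging call on the blinded text, then
# repeat the evaluation on the full, unredacted report in the second step.
```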
Yolo County, California, pioneered this approach in 2021, partnering with Stanford University’s Computational Policy Lab to develop the redaction software. District Attorney Jeff Reisig, who sponsored the statewide legislation, described the system as bringing Lady Justice’s blindfold “to life.” (Law360, 2025) However, implementation has proven challenging: San Francisco District Attorney Brooke Jenkins reported in early 2025 that her office requires approximately $1.4 million annually to comply with the mandate, including $200,000 for software and funding for six additional staff positions. (Governing, February 2025)
At what point does this become non-human decision-making? A prosecutor who retains only a cursory review may, as a matter of law, have ceded the decision. That shift suggests a delegation to algorithmic authority, triggering constitutional scrutiny.
Real-World Risks: Four Critical Concerns
- Lack of explainability: Algorithms may output scores or recommendations without providing the reasoning behind them, undermining the defendant’s ability to challenge fairness (see the sketch following this list).
- Reinforcing bias: Algorithms trained on historical prosecutorial decisions may replicate past disparities. The “blind-charging” project showed that race could still be inferred from redacted reports. (Chohlas-Wood et al., 2021)
- Front-loading without oversight: If an algorithm becomes a “gatekeeper” determining which cases proceed, human review risks becoming a rubber stamp.
- Vendor secrecy: Many prosecutorial AI tools are built by private vendors under proprietary logic, raising transparency problems in criminal contexts where liberty is at stake. Proprietary algorithms create discovery challenges when defense counsel seek to understand charging decisions, potentially implicating Brady v. Maryland obligations to disclose material information favorable to the accused.
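On the explainability point flagged in the first item above, the gap between a bare score and a reviewable one can be shown with a deliberately simple linear model. The features, weights, and case values below are invented for illustration; real prosecutorial tools are typically far more complex, which is exactly why their outputs are harder to unpack.

```python
# Hypothetical contrast between an opaque score and an explainable one.
# Features, weights, and case values are invented for illustration.

WEIGHTS = {
    "prior_felony_convictions": 0.9,
    "evidence_strength": 1.4,
    "victim_injury": 1.1,
}

case = {
    "prior_felony_convictions": 2,
    "evidence_strength": 0.3,
    "victim_injury": 0,
}

# Opaque output: a single number, with no account of how it was reached.
score = sum(WEIGHTS[f] * case[f] for f in WEIGHTS)
print(f"Charging priority score: {score:.2f}")

# Explainable output: the same number, decomposed feature by feature,
# giving the defense something concrete to contest.
for feature, weight in WEIGHTS.items():
    contribution = weight * case[feature]
    print(f"  {feature}: value={case[feature]}, weight={weight}, "
          f"contribution={contribution:.2f}")
```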
How to Challenge Algorithmic Charging in Court
Defendants facing charges influenced by algorithmic systems may challenge the process on several constitutional grounds. A due-process challenge could argue that the use of an unexplained, black-box algorithm violated the defendant’s right to a fundamentally fair process, particularly if the algorithm’s recommendations were adopted without meaningful human review. Courts might scrutinize whether the prosecutor exercised genuine independent judgment or simply rubber-stamped algorithmic output.
Equal-protection challenges could argue that algorithmic bias resulted in discriminatory charging patterns, even if no discriminatory intent existed. Under McCleskey v. Kemp, statistical evidence of disparate impact alone was held insufficient to establish the discriminatory purpose that equal protection requires, but AI systems that demonstrably produce disparate outcomes across protected classes may lower that evidentiary bar. Defense counsel could use statistical analysis of the algorithm’s training data and outputs to show systematic disparities.
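Such an analysis could start as simply as comparing the tool’s historical charging recommendations across groups. The sketch below uses invented counts to compute per-group recommendation rates and a rate ratio; an actual evidentiary showing would add significance testing and controls for legally relevant case factors.

```python
# Hypothetical disparity audit over an algorithm's historical charging
# recommendations. Counts are invented for illustration only.

recommendations = {
    # group: (cases where the tool recommended charging, total cases reviewed)
    "group_a": (180, 400),
    "group_b": (120, 400),
}

rates = {g: charged / total for g, (charged, total) in recommendations.items()}
for group, rate in rates.items():
    print(f"{group}: charging recommendation rate = {rate:.1%}")

rate_ratio = rates["group_a"] / rates["group_b"]
print(f"Rate ratio (group_a / group_b): {rate_ratio:.2f}")
# A ratio well above 1.0 is a starting point, not proof: counsel would still
# need significance tests and controls for case-level legal factors.
```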
Discovery motions will become central battlegrounds. Defendants may seek access to the algorithm’s source code, training data, validation studies, and performance metrics, including false-positive and false-negative rates across demographic groups. Prosecutors may resist on grounds of trade secrecy or work-product privilege, creating a tension between constitutional rights and proprietary interests.
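If discovery yields validation data of that kind, the error rates can be broken out by group. The sketch below runs on invented records; in practice, counsel would work from the vendor’s own validation studies rather than hand-labeled examples like these.

```python
# Hypothetical per-group error-rate audit. Records are invented; real inputs
# would come from the vendor's validation data obtained in discovery.

records = [
    # (group, model_flagged_for_charging, charge_ultimately_supported)
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", True, True), ("group_b", False, True),
    ("group_b", False, False), ("group_b", False, True),
]

def error_rates(records, group):
    """Return (false-positive rate, false-negative rate) for one group."""
    rows = [(flag, actual) for g, flag, actual in records if g == group]
    fp = sum(1 for flag, actual in rows if flag and not actual)
    fn = sum(1 for flag, actual in rows if not flag and actual)
    negatives = sum(1 for _, actual in rows if not actual)
    positives = sum(1 for _, actual in rows if actual)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

for group in ("group_a", "group_b"):
    fpr, fnr = error_rates(records, group)
    print(f"{group}: false-positive rate = {fpr:.0%}, false-negative rate = {fnr:.0%}")
```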
Required Safeguards: What Oversight Looks Like
Before deployment, prosecutors’ offices should conduct bias impact assessments and ensure explainability. However, resource constraints pose significant challenges: many prosecutors’ offices, particularly those serving smaller jurisdictions, lack the funding necessary to implement proper oversight mechanisms, acquire appropriate software, or hire staff with technical expertise to audit algorithmic systems. The California experience illustrates this challenge, with multiple districts struggling to meet the statewide mandate due to budgetary limitations.
Transparency and accountability mechanisms—mirroring those under the European Union’s AI Act—could serve as models. Similar frameworks are emerging in other common-law jurisdictions: the United Kingdom has introduced guidelines requiring explainability for AI systems that influence criminal justice decisions, while Canadian authorities are exploring transparency requirements for algorithmic tools used in prosecution. The constitutional question, ultimately, is whether discretion remains meaningful if the decision-maker cannot explain the reasoning in human terms.
The Federal Picture: DOJ’s AI Framework and Discovery Rights
The state-level innovations in California occur against a broader federal backdrop of AI governance in criminal justice. In December 2024, the Department of Justice released its 200-page report “Artificial Intelligence and Criminal Justice,” mandated by President Biden’s Executive Order 14110. The report addresses AI use across identification, forensic analysis, predictive policing, and risk assessment, providing the first comprehensive federal framework for algorithmic tools in prosecution and law enforcement.
The DOJ report emphasizes that AI in prosecutorial settings must maintain “robust governance frameworks” and include “pre-deployment measures” such as bias impact assessments and explainability requirements. Critically, it acknowledges that “algorithms trained on historical data reflect the biases embedded in past enforcement and charging practices”—validating concerns that algorithmic tools may systematize rather than eliminate discrimination. While the report does not mandate race-blind charging at the federal level, it establishes principles that influence state and federal practice alike.
Discovery and transparency pose particular challenges at both state and federal levels. The DOJ’s October 2024 Compliance Plan for OMB Memorandum M-24-10 establishes “AI Impact Assessment” processes for rights-impacting uses of AI, including prosecutorial applications. These assessments must include quantitative evaluation and continuous monitoring—but the extent to which defense counsel can access such assessments in discovery remains an open constitutional question.
Brady Implications and the Trade Secrecy Problem: When proprietary AI algorithms influence charging decisions, Brady v. Maryland obligations may require prosecutors to disclose material information about the system’s logic, training data, and performance metrics. If an algorithm demonstrates higher false-positive rates for defendants of certain races, or if its training data contains documented biases, that information could be “favorable to the accused” and thus subject to disclosure. Yet trade secrecy claims by private vendors create tension between due process rights and commercial interests—a conflict the courts have yet to resolve comprehensively.
Some federal courts have begun addressing this tension. Defense motions increasingly demand access to algorithmic risk-assessment tools used in sentencing and bail decisions, arguing that due process requires understanding how the machine reached its conclusion. While these cases primarily involve post-charging applications, the same logic extends to charging decisions: if a prosecutor cannot explain why an algorithm recommended specific charges, can the decision satisfy constitutional scrutiny?
Federal Compliance and Corporate Standards: The DOJ’s updated September 2024 Evaluation of Corporate Compliance Programs now requires that companies using AI conduct “risk assessments” and maintain “controls to monitor and ensure [AI’s] trustworthiness, reliability, and use in compliance with applicable law.” While focused on corporate defendants, this signals heightened DOJ scrutiny of algorithmic decision-making across all criminal justice contexts. Prosecutors using AI tools may face the same accountability standards they impose on corporate targets.
Pilot Projects and Future Development: Several federal agencies are exploring predictive charging models, though these remain largely in research phases. The National Institute of Justice has funded studies on algorithmic bias detection in charging decisions, examining whether machine learning can identify prosecutorial disparities more effectively than traditional auditing methods. However, no federal mandate comparable to California’s AB 2778 currently exists. Federal prosecutors operate under broader AI governance principles established by OMB guidance, but without specific requirements for algorithmic redaction in charging documents.
The intersection of state innovation and federal standards creates a patchwork landscape: California prosecutors must implement race-blind charging using algorithmic tools, while federal prosecutors in the same geographic region operate under different governance frameworks. This divergence raises federalism questions about whether uniform national standards for algorithmic charging decisions may eventually become necessary—particularly if circuit splits emerge over the constitutional adequacy of various approaches.
The Accountability Question
The adoption of AI tools in prosecutorial charging decisions raises fundamental questions about the nature of discretion, accountability, and fairness in the criminal-justice system. While algorithms hold promise for efficiency and consistency, they risk upsetting the balance of human judgment, democratic oversight, and constitutional protection. If prosecutors delegate decision-making to machines without meaningful human control and transparency, the core principles of due process, equal protection, and the separation of powers may be eroded.
As algorithmic tools move from experimental pilots to statewide mandates, and as federal agencies develop comprehensive governance frameworks, the legal system must grapple with unprecedented questions. Can prosecutors constitutionally outsource charging authority to algorithms? What discovery rights attach to algorithmic decisions? How do courts review decisions when the reasoning exists in proprietary code rather than prosecutorial judgment? And when systemic bias emerges from historical data baked into training sets, who bears responsibility—the algorithm’s designer, the prosecutor who deployed it, or the system that generated the biased data in the first place?
The key question remains simple but profound: when an algorithm plays the starring role, who is accountable—the machine or the human who pressed “go”?
Sources
- American Bar Association GPSolo, “AI’s Complex Role in Criminal Law: Data, Discretion, and Due Process” (March-April 2025)
- Arnold Ventures, “Could AI Help Make Prosecutors’ Decisions Race-Blind? A Q&A with Alex Chohlas-Wood” (October 2024)
- California Attorney General’s Office, “Attorney General Bonta Issues Race-Blind Charging Guidelines for Prosecutors” (January 5, 2024)
- California Attorney General’s Office, “Race-Blind Charging Guidelines: Penal Code Section 741” (January 1, 2024)
- California State Legislature, “Assembly Bill 2778: Crimes: Race-Blind Charging” (2022)
- European Union, “Artificial Intelligence Act” (2024)
- Governing, “Some California Prosecutors Struggle to Comply with New ‘Race Blind’ Charging Rule,” by Megan Cassidy and David Hernandez (February 10, 2025)
- Law360, “Seven Months In, Race-Blind Charging Faces Test In California” (2025)
- Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, “Blind Justice: Algorithmically Masking Race in Charging Decisions,” by Alex Chohlas-Wood et al. (May 2021)
- Stanford Impact Labs, “Evaluating and Scaling Race-blind Charging” (accessed 2025)
- US Supreme Court, McCleskey v. Kemp, 481 U.S. 279 (1987)
- US Supreme Court, Wayte v. United States, 470 U.S. 598 (1985)
- US Department of Justice, “Evaluation of Corporate Compliance Programs” (September 2024)
- US Department of Justice, “Compliance Plan for OMB Memorandum M-24-10” (October 2024)
- US Department of Justice, “Artificial Intelligence and Criminal Justice: Final Report” (December 2024)
- USC Gould School of Law, “Bringing AI to the District Attorney’s Office: A Policy Framework for Innovation in Criminal Justice,” by Meekness Ikeh (March 2025)
- Villanova Law Digital Commons, “Progressive Algorithms,” by Itay Ravid (2022)
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, statutes, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Truth on Trial: Courts Scramble to Authenticate Evidence in the Age of Deepfakes
