AI and Big Data Reshape Criminal Justice—But at What Cost?

In recent years, the global criminal justice system has undergone a quiet yet profound transformation. Driven by advances in artificial intelligence (AI), big data analytics, and cloud computing, courts, police departments, and prosecutors’ offices around the world are increasingly adopting digital tools to streamline investigations, accelerate trials, and enhance decision-making. Dubbed “smart justice” or “judicial informatization,” this movement promises greater efficiency, consistency, and transparency. Yet, as these technologies become embedded in the very fabric of legal procedure, a growing chorus of legal scholars warns that without robust safeguards, this technological leap could erode foundational principles of fairness, due process, and individual rights.

Nowhere is this tension more evident than in China, where the integration of AI into criminal proceedings has advanced with remarkable speed and scale. From Shanghai’s “206 System”—an AI-powered platform that guides investigators through evidence collection protocols—to nationwide “smart court” initiatives that reduce trial durations by over 50%, the state has embraced digital innovation as a pillar of judicial modernization. Yet this progress comes with complex trade-offs, particularly concerning procedural justice, judicial discretion, and the balance of power between prosecution and defense.

A new analysis by Jianlin Bian and Can Cao of the Institute of Procedural Law at China University of Political Science and Law offers a rigorous appraisal of these challenges. Published in the Journal of Jishou University (Social Sciences Edition), their paper, Challenges and Countermeasures of Criminal Procedure in the Information Age, examines how the infusion of information technology into criminal justice is simultaneously enhancing operational efficiency and destabilizing long-standing legal norms.


The Allure of Algorithmic Efficiency

The appeal of AI and big data in criminal justice is undeniable. In a system strained by high caseloads and limited personnel, automation promises relief. According to China’s Supreme People’s Court, judges handled an average of 228 cases each in 2019—an increase of 13.4% from the previous year. Meanwhile, internet courts, which operate entirely online, resolved cases in just 42 days on average, slashing processing time by 57.1% compared to traditional courts.

These gains are powered by sophisticated digital infrastructures. AI systems now assist in evidence validation, legal research, and even sentencing recommendations. Big data platforms enable predictive policing—identifying crime hotspots, flagging potential offenders, and enabling preemptive interventions. In Guangdong and Jiangsu provinces, “smart policing” frameworks integrate facial recognition, real-time surveillance, and behavioral analytics to monitor public spaces and anticipate criminal activity.

On the surface, such tools appear to strengthen public safety and judicial reliability. But as Bian and Cao emphasize, efficiency alone is not justice. When algorithms replace human judgment in sensitive legal domains, they risk introducing new forms of bias, opacity, and overreach—often without adequate oversight or recourse.


Clash with Core Legal Principles

At the heart of the authors’ concern lies a fundamental contradiction: the logic of algorithmic prediction often runs counter to the bedrock principle of presumption of innocence. In traditional criminal law, no individual may be treated as guilty until proven so in a court of law. Yet predictive policing models, by design, assign risk scores to individuals based on historical data, behavioral patterns, and social connections. Those flagged as “high risk” may face heightened surveillance, travel restrictions, or even preemptive detention—despite having committed no crime.

This preemptive labeling, Bian and Cao argue, effectively reverses the burden of proof. Rather than reacting to offenses, the system acts on suspicion generated by opaque algorithms. The result is a subtle but significant shift from reactive justice to anticipatory control—one that treats citizens as data points rather than rights-bearing individuals.
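The mechanism the authors critique can be illustrated with a deliberately simplified sketch. Everything here is hypothetical — the feature names, weights, and threshold are invented for this illustration and are not drawn from any deployed system — but it shows the structural problem: a score computed from historical data and social context can flag a person who has committed no offense.

```python
# Hypothetical toy risk scorer, illustrating the pattern the authors
# critique. All features, weights, and thresholds are invented.
def risk_score(person):
    weights = {"prior_arrests": 0.5, "flagged_contacts": 0.3, "area_crime_rate": 0.2}
    return sum(w * person.get(feature, 0) for feature, w in weights.items())

def flag_high_risk(people, threshold=2.0):
    # Anyone above the threshold is labeled "high risk" before any
    # offense occurs -- suspicion is generated, not proven.
    return [p["id"] for p in people if risk_score(p) > threshold]

people = [
    # Person A has no prior arrests: flagged purely by neighborhood
    # crime rate and social connections.
    {"id": "A", "prior_arrests": 0, "flagged_contacts": 1, "area_crime_rate": 9},
    {"id": "B", "prior_arrests": 5, "flagged_contacts": 0, "area_crime_rate": 1},
]
print(flag_high_risk(people))  # ['A', 'B']
```

Note that person A is flagged without a single prior arrest: the score is driven entirely by where they live and whom they know, which is precisely the inversion of the burden of proof the authors describe.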

Equally troubling is the impact on judicial discretion. In criminal trials, judges rely not only on facts but on context, nuance, and human experience to interpret evidence, assess credibility, and determine appropriate sentences. AI systems, however advanced, lack this capacity for empathetic, situational reasoning. Current tools like the “206 System” standardize evidence evaluation against predefined templates, reducing complex factual disputes to binary compliance checks. While this may minimize errors in routine cases, it also flattens the rich, case-specific deliberation that defines fair adjudication.
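The "binary compliance check" pattern can be sketched as follows. The checklist items below are hypothetical, not the 206 System's actual rules; the point is structural: evidence either matches a predefined template or it does not, and the contextual judgment a human adjudicator would apply has no representation in the logic.

```python
# Hypothetical sketch of template-based evidence validation: each
# record either satisfies a fixed checklist or fails it. The field
# names are invented for this illustration.
REQUIRED_FIELDS = {"collector", "timestamp", "chain_of_custody", "signature"}

def check_evidence(record):
    missing = REQUIRED_FIELDS - record.keys()
    # The outcome is binary: compliant or not. Credibility, context,
    # and circumstance -- the substance of judicial evaluation -- are
    # not expressible in this check.
    return {"compliant": not missing, "missing": sorted(missing)}

result = check_evidence({"collector": "Officer X", "timestamp": "2020-01-01"})
print(result)  # {'compliant': False, 'missing': ['chain_of_custody', 'signature']}
```

Such checks are genuinely useful for catching clerical omissions in routine cases; the authors' concern is what happens when this pass/fail logic becomes the de facto standard for evidentiary sufficiency.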

As Bian and Cao note, “The life of the law lies in experience, not computation.” When machines dictate evidentiary thresholds or sentencing ranges, they risk converting justice into a mechanical output, devoid of moral reasoning or social understanding.


The Asymmetry of Digital Power

Perhaps the most pernicious effect of judicial informatization is its reinforcement of power imbalances between prosecution and defense. In China’s criminal justice system—like many others—the state holds overwhelming informational and technological advantages. Police and prosecutors operate within closed, interconnected digital ecosystems: evidence is digitized, analyzed, and shared across agencies using proprietary platforms. Defense attorneys, by contrast, often lack access to these systems.

The “206 System,” for instance, provides real-time alerts to prosecutors when evidence appears inconsistent or incomplete, enabling rapid corrections before cases proceed to trial. Defense lawyers, meanwhile, must rely on paper copies or delayed disclosures, placing them at a permanent informational disadvantage. This disparity undermines the principle of equality of arms—a cornerstone of fair trial rights under international standards.

Moreover, the opacity of algorithmic decision-making compounds this imbalance. When an AI system flags a suspect or recommends a sentence, the underlying logic is rarely disclosed. Defense counsel cannot cross-examine an algorithm, nor can they effectively challenge a risk score derived from thousands of unverifiable data points. The result is a “black box” justice system where outcomes are determined by processes that are neither transparent nor contestable.


Legislative Gaps and Institutional Fragmentation

Compounding these conceptual challenges is a stark institutional reality: China’s legal framework has not kept pace with technological change. The Criminal Procedure Law, last substantially revised in 2018, contains no provisions governing the use of AI or big data in investigations or trials. While electronic data is recognized as a form of evidence, “big data evidence” exists in a regulatory gray zone. In practice, such evidence is often shoehorned into existing categories like electronic records or documentary proof—despite its fundamentally different nature.

This legislative lag leaves courts without clear standards for evaluating the reliability, relevance, or admissibility of algorithmically generated evidence. It also creates accountability vacuums. When errors occur—whether due to faulty data input, biased training sets, or flawed programming—there is no established mechanism for redress. Who is liable when an AI misidentifies a suspect? How should courts handle algorithmic errors that lead to wrongful detention? Current law offers few answers.

Adding to the complexity is the fragmented nature of China’s judicial informatization efforts. Provincial and municipal agencies have developed their own digital platforms with little coordination. Police in Nanjing use different systems than those in Guangzhou; prosecutors in Hangzhou rely on tools incompatible with those in Quanzhou. The result is a patchwork of siloed databases and non-interoperable software—a phenomenon known as “information islands.”

This fragmentation incurs real costs. In Beijing’s Haidian District, prosecutors and judges separately scanned the same case files thousands of times in 2017 simply because their systems could not share digital records. Such redundancy wastes resources, delays proceedings, and contradicts the very efficiency goals that justify informatization in the first place.


Toward a Rights-Aware Digital Justice

Bian and Cao do not reject technological innovation outright. On the contrary, they acknowledge its potential to enhance accuracy, reduce bias in manual processes, and improve access to justice. Their critique is not of technology itself, but of its unregulated, unreflective deployment.

To reconcile innovation with justice, they propose a multi-pronged reform agenda grounded in legal principle and institutional coherence. First, the use of AI and big data in criminal proceedings must be explicitly authorized and bounded by law. Legislation should define permissible applications, set evidentiary standards for algorithmic outputs, and mandate transparency requirements for high-stakes decisions.

Second, the principle of proportionality must guide technological interventions. Surveillance, data collection, and predictive tools should be deployed only when strictly necessary—and only after less intrusive alternatives have been exhausted. Data retention and usage policies must be designed to minimize privacy intrusions, with clear protocols for deletion and access control.

Third, robust safeguards for defense rights are essential. This includes granting defense attorneys timely, meaningful access to digital evidence and algorithmic reports. Where AI influences charging or sentencing decisions, defendants should have the right to request explanations and challenge the underlying methodology.

Fourth, China must prioritize national-level coordination in judicial informatization. A unified data-sharing architecture—compatible across police, prosecution, and court systems—would eliminate redundant work, reduce errors, and enhance oversight. Such a framework should be developed through inter-agency collaboration, with input from legal scholars, technologists, and civil society.

Finally, human capital must keep pace with hardware. Most judicial personnel lack training in data literacy or algorithmic reasoning. Continuous professional development programs are needed to equip judges, prosecutors, and investigators with the skills to critically evaluate—and, when necessary, override—machine-generated recommendations.


The Road Ahead

The integration of AI into criminal justice is not a matter of if, but how. As Bian and Cao’s analysis makes clear, the path forward cannot be driven solely by technological capability or administrative convenience. It must be anchored in the enduring values of fairness, accountability, and human dignity.

Around the world, similar debates are unfolding. In the United States, courts are grappling with the use of risk assessment algorithms in bail decisions. In Europe, the General Data Protection Regulation imposes strict limits on automated decision-making in legal contexts. China’s experience offers both cautionary tales and potential models—especially in its ambition to build a comprehensive digital justice infrastructure.

Yet without deliberate legal guardrails and a commitment to procedural integrity, even the most advanced systems risk becoming engines of efficiency without equity. As Jianlin Bian and Can Cao compellingly argue, the goal should not be to replace judges with algorithms, but to ensure that technology serves justice—not the other way around.


Jianlin Bian, Can Cao
Institute of Procedural Law, China University of Political Science and Law, Beijing 100088, China
Journal of Jishou University (Social Sciences Edition)
DOI: 10.13438/j.cnki.jdxb.2021.05.003