AI in Criminal Courts: From Tool to Power?

In a quiet corner of China’s judicial modernization drive, a technological transformation has been unfolding, one that is reshaping not just how cases are processed but how justice itself is perceived. At the heart of this shift lies artificial intelligence (AI), embedded in courtrooms from Shanghai to Beijing, where it assists judges with evidence analysis, sentencing recommendations, and case comparisons. On the surface, these systems are marketed as tools, meant to support rather than supplant human judgment. But a closer look reveals a more complex and unsettling narrative: AI in criminal justice may no longer be just a tool. It may be evolving into a form of power.

This is the central thesis of a groundbreaking paper by Wei Chenshu, a doctoral researcher at the Institute of Evidence Law and Forensic Science at China University of Political Science and Law. Published in the Journal of Xi’an Jiaotong University (Social Sciences Edition), the study titled “The Power Logic of Artificial Intelligence in Criminal Trials” presents a compelling argument that AI systems in criminal courts are undergoing a subtle but profound transformation—from neutral assistants to influential actors capable of shaping judicial outcomes.

Wei’s analysis is not rooted in dystopian speculation, but in a rigorous examination of how AI integrates into the daily workings of the judiciary. Drawing on philosophy, cognitive science, and institutional dynamics, the paper maps out the mechanisms through which AI gains authority over human decision-makers, often without overt resistance or even awareness.

The journey begins with the practical applications of AI in Chinese criminal courts. Systems like Shanghai’s “206 System,” Beijing’s “Rui Judge,” and Guizhou’s “Zhengfa Big Data” platform are already operational, handling tasks once reserved for seasoned legal professionals. These include automated evidence verification, where AI checks whether collected evidence meets procedural standards; sentencing assistance, where algorithms analyze past rulings to suggest appropriate penalties; and case deviation alerts, where the system flags judgments that significantly differ from historical patterns.
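To make the deviation-alert idea concrete, here is a minimal sketch of how such a flag might be computed, assuming the system compares a proposed sentence against penalties in past cases it deems similar. The z-score test, threshold, and data below are illustrative assumptions, not details of any of the named platforms:

```python
import statistics

def deviation_alert(proposed_months, similar_case_months, z_threshold=2.0):
    """Flag a proposed sentence that departs sharply from historical rulings.

    proposed_months: the sentence a judge intends to impose, in months.
    similar_case_months: penalties from past cases the system deems comparable.
    Returns a warning string if the proposal deviates, else None.
    """
    mean = statistics.mean(similar_case_months)
    stdev = statistics.stdev(similar_case_months)
    if stdev == 0:
        return None  # no variation in the history; nothing to compare against
    z = (proposed_months - mean) / stdev
    if abs(z) > z_threshold:
        direction = "above" if z > 0 else "below"
        return (f"ALERT: proposed sentence is {abs(z):.1f} standard deviations "
                f"{direction} the historical mean of {mean:.1f} months")
    return None

# Example: 60 months proposed where similar cases averaged about 36 months.
history = [30, 36, 38, 34, 40, 33, 37]
print(deviation_alert(60, history))
```

Even in this toy form, the design choice is visible: what counts as “similar,” and how far is “too far,” are parameters someone other than the judge has already decided.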

At first glance, these functions appear benign—efficiency-enhancing tools designed to reduce human error and promote consistency. But Wei argues that their very integration into judicial workflows creates conditions for a deeper, more insidious influence. The first clue lies in what he calls the “prosthetic nature” of human-technology interaction.

Inspired by the French philosopher Bernard Stiegler’s concept of the “prothèse,” a technical supplement that compensates for human biological inadequacies, Wei suggests that judges are not merely using AI but becoming dependent on it. Just as a person with a prosthetic limb eventually perceives it as part of their body, judges begin to internalize AI functionalities as extensions of their own cognitive abilities. The system’s ability to process vast volumes of case data, extract patterns, and generate structured outputs fills a gap that no individual judge could realistically close alone. Over time, this reliance blurs the boundary between tool and agent, turning AI from an external aid into an embedded cognitive partner.

This fusion is reinforced by a well-documented psychological phenomenon: automation bias. Human operators, across various domains, tend to trust machine-generated outputs more than their own judgment, especially when under pressure or cognitive load. In the context of criminal trials, where judges face immense workloads and high stakes, this bias becomes particularly dangerous. When an AI system flags a piece of evidence as “incomplete” or suggests a sentencing range, judges are more likely to accept it without critical scrutiny—even when contradictory information exists.

Wei cites experimental research from the United States involving a recidivism prediction tool known as RRPI. In a simulated sentencing exercise, law students who had access to the AI’s risk assessment were significantly more likely to impose harsher sentences on defendants labeled as high-risk, even when the underlying case facts were nearly identical to those of lower-risk defendants. The mere presence of the algorithmic output shifted human judgment, demonstrating how automation can subtly override individual reasoning.

But psychological tendencies alone do not explain the full scope of AI’s influence. Wei identifies institutional and structural forces that elevate AI from a mere instrument to a de facto authority. One of the most powerful of these is state endorsement. From the very top, China’s leadership has championed the concept of “smart courts” as a cornerstone of judicial reform. The 2016 National Informatization Development Strategy Outline, jointly issued by the Central Committee of the Communist Party and the State Council, explicitly calls for the integration of AI into judicial processes to enhance fairness, transparency, and efficiency.

This top-down support does more than provide funding and policy direction—it confers legitimacy. When a system is backed by the highest levels of government, it acquires a symbolic authority that makes it difficult to challenge. Judges, prosecutors, and court administrators are less likely to question the outputs of a system that has been officially sanctioned as a tool for justice.

Equally important is the role of data. AI systems are only as good as the data they are trained on, and in China’s judicial context, this data is tightly controlled by state institutions. Unlike open-source datasets or commercial platforms, judicial data—especially case files, internal memos, and procedural records—is largely inaccessible to the public, defense attorneys, and even academic researchers. This creates a closed loop: AI systems are trained on proprietary judicial data, deployed in courtrooms, and then used to justify decisions—all without external oversight or independent validation.

The data asymmetry is further compounded by the prevailing model of system development: technology outsourcing. Rather than building AI in-house, most courts partner with private tech firms. Shanghai’s system was developed with iFlytek, Beijing’s with Huayu Yuandian, and Zhejiang’s with Alibaba. These collaborations bring technical expertise but also introduce new power dynamics. The engineers and data scientists who design these systems often lack legal training, while the judges who use them rarely understand the underlying algorithms. This knowledge gap creates a dependency where legal professionals must trust the system’s outputs, not because they understand them, but because they cannot verify them.

Wei draws on Michel Foucault’s theory of power-knowledge to illustrate this point: power is not just exercised through coercion, but through the control of information and expertise. In this case, the fusion of legal authority and technological capability creates a hybrid form of power—one that operates not through commands, but through subtle guidance, nudges, and defaults.

This brings us to the architecture of the systems themselves. Wei emphasizes that AI in criminal justice is not just a collection of algorithms, but a structured environment—a “system architecture” that shapes user behavior. Take the Shanghai 206 System: it doesn’t merely suggest evidence standards; it enforces them. If a piece of evidence fails to meet the system’s criteria, it can be blocked from moving to the next stage of prosecution. This architectural rigidity means that legal professionals must adapt their practices to fit the system, rather than the other way around. The system, in effect, becomes a silent regulator of judicial conduct.
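A minimal sketch conveys the architectural point, assuming a checklist-style gate in which any single failed rule blocks the evidence from advancing. The rule names here are invented for illustration; the 206 System’s actual criteria are not public:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    kind: str                       # e.g. "witness_statement", "physical"
    collected_by_two_officers: bool
    has_chain_of_custody: bool
    signed_by_witness: bool = True

# Illustrative procedural rules; a real system would encode many more.
RULES = {
    "two_officers": lambda e: e.collected_by_two_officers,
    "chain_of_custody": lambda e: e.has_chain_of_custody,
    "witness_signature": lambda e: e.kind != "witness_statement" or e.signed_by_witness,
}

def gate(evidence: Evidence) -> tuple[bool, list[str]]:
    """Return (admitted, failed_rules). One failed rule blocks the item
    from advancing to the next procedural stage: the user must conform
    to the checklist, not the other way around."""
    failed = [name for name, check in RULES.items() if not check(evidence)]
    return (not failed, failed)

item = Evidence("witness_statement", collected_by_two_officers=True,
                has_chain_of_custody=False)
print(gate(item))   # (False, ['chain_of_custody'])
```

The regulation happens in the return value: there is no field for a prosecutor’s or judge’s dissent, only a list of rules to satisfy.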

The consequences of this shift are far-reaching. Wei identifies three major risks: disciplinary, exclusionary, and misjudgment.

The first, disciplinary risk, refers to the way AI systems subtly shape judicial behavior. By standardizing evidence collection, promoting uniform sentencing, and issuing deviation alerts, these systems encourage conformity. Judges who consistently issue rulings that differ from the system’s recommendations may face internal scrutiny or be required to justify their decisions to superiors. Over time, this creates a culture of compliance, where judicial independence is eroded not by overt pressure, but by the quiet pull of algorithmic normality.

The second, exclusionary risk, highlights the asymmetry between prosecution and defense. While prosecutors and judges have access to sophisticated AI tools, defendants and their lawyers are often left in the dark. The algorithms that generate evidence, assess risk, or recommend sentences are typically protected as trade secrets. When defense attorneys request access to the source code or data models, they are often denied on grounds of commercial confidentiality. This creates a two-tiered system: one side operates with advanced analytical capabilities, while the other struggles with limited information and outdated methods.

A striking example comes from tax fraud cases, where prosecutors use data mining to trace complex financial flows across millions of transactions. The resulting visualizations and network analyses are presented as objective evidence. But without access to the underlying algorithms or raw data, defendants cannot effectively challenge these findings. As Wei notes, this undermines a fundamental principle of due process: the right to confront and cross-examine the evidence against you.
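Here is a minimal sketch of the kind of flow analysis involved, assuming transactions are aggregated into a directed graph and scanned for circular flows. The pattern, threshold, and data are invented; the point is precisely that such analyst-chosen parameters are invisible to the defence:

```python
from collections import defaultdict

# Each record: (payer, payee, amount). Real platforms ingest millions of these.
transactions = [
    ("ShellCo A", "ShellCo B", 120_000),
    ("ShellCo B", "ShellCo C", 118_000),
    ("ShellCo C", "ShellCo A", 115_000),   # the money returns to its origin
    ("ShellCo A", "Supplier X", 9_000),
]

# Aggregate pairwise flows into a directed graph (an adjacency map).
flows = defaultdict(float)
for payer, payee, amount in transactions:
    flows[(payer, payee)] += amount

def circular_flows(flows, min_amount=50_000):
    """Flag A -> B -> C -> A cycles above a threshold, a classic carousel
    pattern. Threshold and cycle length are analyst-chosen parameters that
    a defendant cannot contest without seeing them."""
    edges = {pair for pair, amt in flows.items() if amt >= min_amount}
    hits = []
    for a, b in edges:
        for b2, c in edges:
            if b2 == b and (c, a) in edges and len({a, b, c}) == 3:
                hits.append((a, b, c))
    return hits

# Prints the same three-company loop once per starting point (three rotations).
print(circular_flows(flows))
```

Change `min_amount` and the “objective” picture of the fraud changes with it, yet that number never appears in the courtroom exhibit.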

The third, misjudgment risk, stems from the inherent limitations of translating legal reasoning into code. Law is not a set of fixed rules, but a dynamic, interpretive practice shaped by context, precedent, and moral judgment. When programmers attempt to encode legal norms into algorithms, they inevitably simplify, omit, or misrepresent them. A well-known case in Colorado saw a welfare benefits system deny thousands of claims due to a coding error that misinterpreted eligibility rules. In criminal justice, similar errors could lead to wrongful convictions or unjust sentences.
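A minimal sketch shows how easily such a misencoding arises. The eligibility rule below is invented, not the Colorado system’s actual logic, but the one-character slip is representative:

```python
INCOME_CEILING = 2_000  # statute: eligible if monthly income is
                        # "no more than" 2,000 -- i.e. <= 2,000

def eligible_as_written(income: float) -> bool:
    """Correct translation of the statute: the ceiling is inclusive."""
    return income <= INCOME_CEILING

def eligible_as_coded(income: float) -> bool:
    """A one-character slip: '<' instead of '<='. Every applicant sitting
    exactly at the ceiling is wrongly denied."""
    return income < INCOME_CEILING

print(eligible_as_written(2_000))  # True  -- the law says eligible
print(eligible_as_coded(2_000))    # False -- the system says denied
```

Nothing in the code crashes or warns; the error is visible only to someone who can read both the statute and the source, which is exactly the audit that trade-secret protection forecloses.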

Moreover, algorithms are not neutral. They reflect the biases of their designers and the data they are trained on. If historical sentencing data reflects racial or socioeconomic disparities, the AI will learn and replicate those patterns. This is not a hypothetical concern: studies in the U.S. have shown that risk assessment tools are more likely to label Black defendants as high-risk, even when controlling for criminal history.
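A minimal sketch on synthetic data illustrates the mechanism: even when the protected attribute is never an input, biased historical labels leak through a correlated proxy feature. All numbers here are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute (never a feature)
# Proxy feature correlated with group, e.g. neighborhood arrest density.
proxy = rng.normal(loc=group * 1.5, scale=1.0)
# Historical labels: identical true behavior in both groups, but group 1
# was policed more heavily, so its members were labeled "high risk" more often.
true_risk = rng.random(n) < 0.2
label = true_risk | ((group == 1) & (rng.random(n) < 0.15))

# Train only on the proxy; the protected attribute is never an input.
model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
pred = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {pred[group == g].mean():.2f}")
# The model scores group 1 as higher risk despite identical true behavior,
# because the biased labels leak through the correlated proxy.
```

Dropping the sensitive attribute from the feature set, in other words, does not launder the history out of the data.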

Wei does not advocate for the abolition of AI in criminal justice. Instead, he calls for a framework of power regulation—one that acknowledges AI’s growing influence and seeks to contain its risks. His recommendations are both practical and principled.

First, he proposes a differentiated application mechanism. Not all cases should be treated the same. For complex or high-stakes trials, especially those involving contested facts or constitutional issues, human judgment should take precedence. AI should be used more cautiously in cases where defendants plead not guilty, to avoid the risk of “pre-judgment” based on algorithmic suggestions.

Second, he emphasizes the need for equal participation. Defense attorneys should have the right to access and challenge AI-generated evidence. This could include the publication of system manuals, public algorithmic audits, and the creation of independent review boards. When trade secrets are invoked, courts could allow in-camera review—where sensitive code is examined privately by a neutral expert—so that defendants’ rights are protected without compromising proprietary information.

Third, he calls for improved development practices. Legal professionals should be involved in the design and testing of AI systems to ensure that legal principles are accurately encoded. Regular third-party evaluations should be conducted to assess system accuracy, fairness, and reliability. And judges should receive training in data literacy, so they can critically engage with algorithmic outputs rather than blindly accept them.

Wei’s paper is a timely and necessary intervention in a field that is moving faster than public understanding. As AI becomes more embedded in criminal justice, the line between assistance and authority will continue to blur. Without careful regulation, the promise of efficiency and consistency could come at the cost of fairness, transparency, and human dignity.

The question is no longer whether AI should be used in courts, but how to ensure that it serves justice rather than distorts it. As Wei Chenshu’s research makes clear, the answer lies not in rejecting technology, but in recognizing its power—and learning how to govern it.

Wei Chenshu, “The Power Logic of Artificial Intelligence in Criminal Trials,” Journal of Xi’an Jiaotong University (Social Sciences Edition). Institute of Evidence Law and Forensic Science, China University of Political Science and Law. DOI: 10.15896/j.xjtuskxb.202103015