Relationship analysis between AI and rule of law

AI Cannot Replace the Soul of Law, Says Jiangsu University Scholar

In an era where artificial intelligence (AI) is increasingly embedded in public services, a growing debate has emerged over whether machines can truly uphold the principles of justice. As governments and judiciaries around the world invest heavily in “smart courts” and AI-assisted legal systems, one legal scholar is urging caution. Liu Qiang, a lecturer at Jiangsu University’s School of Law, argues that while AI may enhance efficiency in judicial administration, it fundamentally lacks the human qualities necessary to realize the rule of law.

Published in the Journal of Chongqing University (Social Science Edition), Liu’s comprehensive analysis challenges the prevailing optimism surrounding AI’s role in legal reform. Titled “Relationship analysis between AI and rule of law,” the paper presents a philosophical and institutional critique of the assumption that technology can substitute for human judgment in matters of justice.

The article arrives at a time when digital transformation in the judiciary is accelerating. China, for instance, has launched ambitious initiatives such as the “Smart Court” project, aiming to integrate AI into case management, document generation, and even sentencing recommendations. Similar developments are underway in the United States, the European Union, and India, where governments are experimenting with predictive analytics, automated legal drafting, and virtual hearings.

Yet, Liu warns that such technological enthusiasm risks mistaking efficiency for justice. “AI is a tool, not a judge,” he asserts. “It operates on algorithms and data structures—mathematical constructs that can process information at unprecedented speed. But justice is not a computational problem. It is a moral and social practice that requires empathy, interpretation, and faith in the law.”

At the heart of Liu’s argument is a distinction between what he calls the “algorithm” and “good law.” Drawing from classical political philosophy, particularly Aristotle’s definition of the rule of law—where laws must not only be obeyed but also be just—Liu contends that AI cannot contribute to the creation of good laws. “Good law is not a technical output,” he explains. “It emerges from social practice, historical experience, and collective value judgments. Algorithms, no matter how sophisticated, are incapable of engaging in this kind of deliberative process.”

He elaborates on this point by examining the nature of algorithms. According to computer science pioneer Niklaus Wirth, a program is simply “algorithm plus data structure.” Algorithms, in turn, are finite sets of rules designed to solve specific problems. While effective in domains like logistics or finance, where outcomes can be quantified, they fall short in legal contexts where moral reasoning and contextual understanding are paramount.
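Wirth’s formula can be made concrete with a minimal sketch (my illustration, not an example from Liu’s paper): a toy logistics routine in which the data structure is a list of coordinates and the algorithm is a finite, deterministic selection rule. The point Liu makes is visible in the code itself: the program optimizes a single quantifiable metric and encodes no value judgment of any kind.

```python
# A minimal illustration of Wirth's "program = algorithm + data structure":
# a finite rule set applied to a quantifiable logistics problem. The metric
# (distance) is well-defined; nothing here weighs fairness or dignity.

def greedy_route(depot, stops):
    """Visit each stop by repeatedly choosing the nearest remaining one.

    Data structure: a list of (x, y) coordinates.
    Algorithm: a finite loop with a deterministic selection rule.
    """
    remaining = list(stops)
    route = [depot]
    while remaining:
        last = route[-1]
        # Selection rule: minimize squared Euclidean distance to the last stop.
        nearest = min(remaining,
                      key=lambda p: (p[0] - last[0])**2 + (p[1] - last[1])**2)
        route.append(nearest)
        remaining.remove(nearest)
    return route

print(greedy_route((0, 0), [(5, 5), (1, 0), (2, 2)]))
# → [(0, 0), (1, 0), (2, 2), (5, 5)]
```

Every step here is fully specified in advance, which is precisely why, on Liu’s account, such a procedure can excel at logistics yet has nothing to say about what a just rule would be.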

Liu emphasizes that the formulation of just laws is not a mechanical process but a deeply human one. It requires grappling with questions of fairness, dignity, and social equity—issues that cannot be reduced to binary code. “Even if an AI system were fed every legal statute, historical precedent, and sociological dataset, it would still lack the capacity to weigh competing values or anticipate the long-term consequences of a legal rule,” he writes. “Justice is not about optimizing outcomes; it is about affirming principles.”

This critique extends to the courtroom itself. Liu challenges the idea that AI can assist or even replace judges in decision-making. He draws a sharp contrast between judicial reasoning and algorithmic reasoning. Judges, he argues, operate under a “normative mindset”—they apply laws impartially, guided by legal principles rather than strategic calculations. In contrast, AI functions through a “strategic mindset,” rooted in game theory and optimization logic.

“An AI system evaluates options based on predicted outcomes and efficiency metrics,” Liu explains. “It seeks the most favorable result given certain constraints. But a judge does not operate like a strategist. A judge’s duty is not to win a case but to uphold the law, even when doing so leads to personally or politically inconvenient outcomes.”

This distinction becomes especially critical in adversarial legal systems, where neutrality is a cornerstone of legitimacy. Liu warns that AI, by design, tends to favor one side over another—either the plaintiff or the defendant—based on historical patterns and statistical probabilities. “Such a system may be useful for predicting litigation outcomes,” he concedes, “but it is fundamentally incompatible with the role of a neutral arbiter.”
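Liu’s point about pattern-based prediction can be sketched in a few lines (a hypothetical example of mine, not code from the paper or any real system): a predictor trained on past rulings simply forecasts the historically likelier winner, so it inherits whatever tilt the record contains rather than weighing each case on its merits.

```python
# A hedged sketch of why a pattern-based outcome predictor is not a
# neutral arbiter: it returns the historically most frequent winner
# for a case type, reproducing any imbalance in the training record.

from collections import Counter

def train(history):
    """history: list of (case_type, winner) pairs from past rulings."""
    counts = {}
    for case_type, winner in history:
        counts.setdefault(case_type, Counter())[winner] += 1
    return counts

def predict(model, case_type):
    """Return the most frequent past winner for this case type."""
    return model[case_type].most_common(1)[0][0]

history = [("contract", "plaintiff"), ("contract", "plaintiff"),
           ("contract", "defendant"), ("tort", "defendant")]
model = train(history)
print(predict(model, "contract"))  # prints "plaintiff" (the majority side)
```

However crude, the structure is the same as in more sophisticated statistical models: the output is a bet on one party, which is useful for forecasting litigation but, as Liu argues, incompatible with the posture of a neutral arbiter.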

Moreover, Liu highlights the expressive function of legal procedures—something that AI cannot replicate. Legal proceedings are not merely about reaching correct verdicts; they are also about ensuring that parties feel heard and respected. Social science research shows that people are more likely to accept unfavorable rulings if they believe the process was fair and their voices were acknowledged.

“This is where human interaction becomes irreplaceable,” Liu argues. “A judge’s tone, body language, and willingness to listen convey respect and legitimacy. These are not incidental elements of justice—they are constitutive of it. An AI system, no matter how advanced, cannot offer empathy or moral recognition.”

He cites the example of the “final question” practice adopted by China’s Second Circuit of the Supreme People’s Court, where judges ask litigants at every stage of the trial whether they have anything further to add. While seemingly procedural, this practice fosters a sense of inclusion and dignity. “It is not about gathering more data,” Liu notes. “It is about affirming the humanity of the participants. No algorithm can be programmed to understand this.”

Another key argument in Liu’s paper concerns the role of legal faith. He draws on the work of American legal scholar Harold Berman, who famously stated, “Law must be believed in, or it will not work.” For Liu, this belief is not mere compliance but a deep-seated commitment to justice as a moral ideal. Judges, he argues, are not just legal technicians; they are guardians of a legal tradition that transcends written rules.

“AI can memorize every statute and precedent,” Liu observes. “But it cannot feel the weight of injustice or the moral imperative to correct it. Faith in the law is not a cognitive function—it is an emotional and spiritual one. It is what drives judges to go beyond the letter of the law and seek equitable solutions.”

This leads to a broader philosophical point: the nature of human experience. Citing Oliver Wendell Holmes Jr.’s famous dictum that “the life of the law has not been logic; it has been experience,” Liu argues that judicial wisdom arises from lived reality, not data accumulation. While AI excels at processing vast datasets, it cannot internalize the nuances of human suffering, social change, or cultural memory.

“Experience is not data,” Liu insists. “Data is static, codified, and decontextualized. Experience is dynamic, interpretive, and embodied. A judge’s understanding of a case is shaped by years of professional practice, personal reflection, and engagement with society. You cannot digitize that.”

He further distinguishes between two types of legal experience: individual judicial experience and institutional legal tradition. The first refers to the accumulated wisdom of judges who have presided over thousands of cases, developed intuition, and refined their judgment. The second refers to the evolutionary nature of common law, where precedents are not rigid rules but principles continually reshaped by societal needs.

“Even in common law systems, past decisions are not treated as absolute truths,” Liu explains. “They are hypotheses subject to revision. Legal progress often comes from dissenting opinions that challenge established norms. But AI systems, by their very design, favor consensus and predictability. They are ill-suited to accommodate legal creativity or dissent.”

Liu also questions the practical effectiveness of AI in addressing core judicial challenges. One of the main promises of AI in law is the reduction of “same case, different judgment” disparities—where similar cases receive different rulings due to human inconsistency. Proponents argue that AI can standardize decisions by identifying patterns in past rulings.

But Liu counters that this approach oversimplifies the problem. In China, for example, only the Supreme People’s Court has the authority to issue binding guiding cases. Lower courts may use AI to search for precedents, but these are not legally binding. Moreover, judicial consistency is often achieved through internal court mechanisms—such as meeting minutes and internal guidelines—rather than external database queries.

“AI-based case retrieval may provide references,” Liu acknowledges, “but it does not create legal authority. And without institutional support, its impact on judicial uniformity is limited.”

He also critiques the use of AI in automating judicial documents. While AI can help correct grammatical errors and generate templates, Liu argues that the real burden on judges lies not in writing but in reasoning. “The time-consuming part of judgment writing is not typing—it is thinking,” he says. “A judge must weigh evidence, interpret law, and justify conclusions. No AI can do that.”

Instead of relying on technology to solve systemic inefficiencies, Liu calls for deeper institutional reforms. He points to issues such as the case backlog, the pressure of performance metrics, and the opacity of internal court records—problems that cannot be resolved through software alone.

“For example, judges often rush to meet case closure quotas,” he notes. “This leads to last-minute filings and procedural shortcuts. No AI system can fix that unless the underlying incentive structure changes.”

Liu also raises concerns about transparency. While public access to trial procedures has improved through digital platforms, he argues that true judicial transparency requires more than online tracking. The existence of internal “supplementary files” containing unofficial instructions or political interventions remains a major obstacle to accountability.

“Publishing trial timelines online does not eliminate behind-the-scenes influence,” he warns. “Unless we reform the institutional culture of secrecy, technological transparency will remain superficial.”

Despite his skepticism, Liu does not dismiss AI’s potential altogether. He acknowledges that AI has played a positive role in improving judicial efficiency—particularly in case management, document automation, and public access to information. “AI can handle routine tasks,” he concedes. “It can remind judges of deadlines, flag procedural errors, and assist in legal research. These are valuable contributions.”

But he insists that such applications should be seen as supportive tools, not transformative solutions. “The soul of the legal system is not in its databases or algorithms,” he writes. “It is in its institutions, its values, and its people. Technology must serve this soul, not replace it.”

Liu’s paper concludes with a call for a balanced approach to legal modernization. Rather than chasing technological novelty, policymakers should focus on strengthening the foundations of the rule of law—judicial independence, professional ethics, and public trust.

“AI can be a useful instrument,” he says, “but only if it is embedded within a robust legal framework. Without institutional reform, even the most advanced AI will fail to deliver justice.”

His perspective resonates with a growing body of critical scholarship on AI and law. Scholars in Europe and North America have similarly warned against the “automation bias”—the tendency to trust algorithmic outputs simply because they appear objective. They emphasize that algorithms are not neutral; they reflect the assumptions, biases, and limitations of their creators.

Liu’s contribution lies in grounding this critique within a distinctly philosophical and institutional framework. By drawing on Marxist practice theory, legal realism, and hermeneutic philosophy, he offers a multidimensional analysis that transcends technical debates.

His work also reflects a broader global conversation about the limits of technology in governance. From predictive policing to algorithmic welfare allocation, governments are discovering that efficiency gains often come at the cost of fairness, accountability, and human dignity.

In this context, Liu’s message is both timely and urgent. As nations race to build “digital governments,” they must remember that the rule of law is not a software update. It is a living tradition—one that depends on human judgment, moral courage, and institutional integrity.

“The future of justice,” Liu concludes, “does not lie in smarter machines, but in wiser institutions.”

Liu Qiang (School of Law, Jiangsu University), “Relationship analysis between AI and rule of law,” Journal of Chongqing University (Social Science Edition), DOI: 10.11835/j.issn.1008-5831.fx.2020.05.003