AI and Criminal Law: A Call for Theoretical Evolution in the Face of Technological Change
The rapid advancement of artificial intelligence (AI) is no longer a distant prospect; it is a present reality reshaping every facet of human society, from how we work and communicate to how we govern and maintain security. As AI systems grow increasingly sophisticated, capable of autonomous decision-making and complex learning, they are not only transforming industries but also posing profound and unprecedented challenges to the very foundations of our legal systems. Among the affected fields, criminal law stands at a critical juncture, facing a wave of new questions that demand more than technical fixes. A recent, comprehensive analysis by a leading Chinese legal scholar argues that the current state of AI criminal law research is mired in confusion and fragmentation, and calls for a fundamental theoretical overhaul to ensure the law remains relevant and effective in the intelligent age.
The central dilemma, as articulated in the paper, is one of identity and responsibility. For centuries, criminal law has been built upon a simple, unshakeable premise: only a human being, possessing free will, moral agency, and the capacity for guilt, can be a criminal subject. The law punishes the individual who commits a crime, with the goals of retribution, deterrence, and rehabilitation. However, the advent of highly autonomous AI systems—machines that can learn from their environment, make independent choices, and cause significant harm without direct human intervention—threatens to shatter this foundational principle. Can a machine, devoid of consciousness and emotion, truly be “guilty”? Can it be “punished” in any meaningful way? These are not abstract philosophical musings but urgent practical questions arising from real-world technologies like self-driving cars, autonomous weapons, and algorithmic trading platforms.
The scholarly debate on this issue, as the paper meticulously documents, is deeply polarized. On one side are the proponents of a “positive theory,” who argue that sufficiently advanced AI, particularly so-called “strong AI,” should be granted a form of legal personhood and thus be held criminally liable for its actions. Their arguments often draw parallels to the legal fiction of corporate personhood, where a non-human entity (a company) is treated as a “person” under the law for the purposes of rights and responsibilities. Scholars like Xianquan Liu have championed this view, suggesting that an AI’s ability to learn, adapt, and exhibit a form of “will” through its algorithms could form the basis for attributing criminal responsibility. This perspective is driven by a pragmatic, utilitarian concern: if an AI causes harm, someone must be held accountable, and holding the distant designer or manufacturer liable for every unforeseen action of a highly autonomous system can seem unjust and impractical.
On the opposing side, the “negative theory” holds firm to the “anthropocentric” view. This camp, represented by scholars such as Fang Shi and Liangfang Ye, contends that AI is, at its core, nothing more than a tool, an extension of human will and design. It lacks the essential qualities of a legal subject—consciousness, emotions, moral understanding, and true free will. An AI cannot feel remorse, understand the concept of justice, or be rehabilitated. Therefore, it cannot be criminally liable. Any criminal act involving AI, they argue, must ultimately be traced back to a human actor: the programmer who wrote flawed code, the operator who misused the system, or the company that failed in its duty of care. This view seeks to preserve the integrity of the existing legal framework, avoiding what it sees as the dangerous and premature anthropomorphization of machines.
Caught in the middle is a “compromise theory,” which acknowledges the potential for AI to gain legal status in the future but deems such recognition infeasible given current technology. This view, held by scholars like Gaochen Wu, suggests a more cautious, step-by-step approach, focusing on regulating the human actors in the AI lifecycle—developers, deployers, and users—while leaving the door open for future theoretical developments.
This three-way split, the paper argues, is not merely an academic disagreement; it is symptomatic of a deeper crisis in the field. The debate is often characterized by what the author describes as “over-fantasy” and “nihilism.” The former refers to the tendency to project far-future, science-fiction scenarios onto present-day technology, creating problems that do not yet exist and distracting from more immediate, tangible issues. The latter describes a sense of futility, a belief that the existing legal system is so ill-equipped to handle AI that any attempt at reform is doomed, leading to a paralysis of thought. This intellectual turmoil, the paper contends, has led to a “fragmented” body of research, where scholars focus on isolated, specific problems—like the liability for a self-driving car accident—without establishing a coherent, overarching theoretical framework.
The author, Daocui Sun, an associate professor at the Institute of National Legal Aid at China University of Political Science and Law, identifies this lack of a foundational theory as the field’s greatest shortcoming. He argues that before we can effectively discuss the criminal liability of an AI, we must first reach a clear, consensus-driven definition of what constitutes an “AI crime.” What is the essence of a crime in an age when the perpetrator may not be human? Is the core harm still “social danger,” as defined in traditional criminal law, or is it something new, such as a violation of “algorithmic safety” or a breach of digital trust? Without a clear answer to this fundamental question, any discussion of punishment or responsibility is “a house built on sand.”
Sun further critiques the current research for its lack of “normative” depth. Much of the discourse remains at a descriptive or predictive level, outlining potential risks, but fails to engage deeply with the doctrinal tools of criminal law—concepts like actus reus (the guilty act), mens rea (the guilty mind), and the structure of the crime’s constituent elements. For instance, how can we attribute “intent” or “negligence” to a machine? If an AI makes a decision based on a deep learning algorithm that even its creators cannot fully understand—a “black box”—how can we say it acted “intentionally” or “recklessly”? The paper suggests that the very concept of “subjective fault” may need to be radically reimagined or even abandoned in the context of autonomous systems.
This leads to the third major area of concern: the future of punishment. Traditional criminal sanctions—imprisonment, fines, community service—are designed for human beings. They rely on the concepts of suffering, deterrence, and rehabilitation. What, then, is the purpose of “punishing” a machine? Destroying it? “Reprogramming” it? These are not punishments in the traditional sense but rather technical corrections. The paper posits that the entire purpose of criminal law may need to shift in the AI era. Instead of focusing on punishing a guilty subject, the emphasis might need to move toward risk management, system correction, and the restoration of public trust in technology. The goal would be less about retribution and more about ensuring the safe and reliable operation of critical AI systems that society depends on.
Sun’s analysis is not merely a critique; it is a call to action for a new, more robust, and forward-looking field of study. He advocates for the creation of a new “Artificial Intelligence Criminal Law,” a distinct theoretical discipline that can systematically address the unique challenges posed by AI. This new field, he argues, must move beyond the current “imagination” that borders on science fiction and instead build a “rational and feasible” knowledge system grounded in the actual trajectory of technological development.
To achieve this, Sun proposes a multi-faceted approach. First, the function and foundational basis of criminal law itself must be re-evaluated. In a world where humans may no longer be the sole “creators” of social order, the very nature of the law may change. The “anthropocentric logic” that has underpinned legal systems for centuries is being “diluted” by the rise of intelligent agents. Criminal law may need to evolve from a system designed solely to regulate human behavior to one that also governs the interaction between humans and intelligent machines, ensuring a harmonious “man-machine” relationship.
Second, the core categories of criminal law—the concept of crime, the nature of criminal responsibility, and the purpose of punishment—must undergo a “qualitative reconstruction.” This is not about discarding the old but about adapting it. Sun suggests that the primary legal interest protected by criminal law in the AI era may shift from traditional human-centric values to “AI safety.” The protection of algorithmic integrity, data security, and the stability of intelligent systems could become paramount. The “algorithm,” as the “brain” of the AI, would thus take on immense legal significance, becoming a central object of regulation and a key factor in determining liability.
Third, the paper emphasizes the need for a more “normative” and systematic approach. Rather than jumping straight to the most futuristic and controversial questions, researchers should prioritize building a solid theoretical foundation. This means starting with a clear, legally sound definition of AI crime, then methodically working through the implications for the elements of a crime, the principles of liability, and finally, the structure of sanctions. This step-by-step, doctrinal approach is essential for developing a coherent legal framework that can be translated into actual legislation and judicial practice.
Sun also highlights the critical role of “techno-ethics” in this new legal landscape. The development and deployment of AI are not neutral technical processes; they are imbued with ethical choices. The biases in training data, the goals programmed into an algorithm, and the level of autonomy granted to a system are all ethical decisions with profound legal consequences. Therefore, a robust AI criminal law must be intertwined with a strong framework of technological ethics, ensuring that AI systems are designed and used in a “trustworthy” manner. This requires not just legal rules but also institutional mechanisms for oversight, accountability, and public participation.
The path forward, Sun acknowledges, will be gradual and evolutionary, not revolutionary. He envisions a phased development, mirroring the progression of AI technology itself. In the initial phase, where AI acts as a sophisticated “tool,” the existing criminal law, with some legislative updates, can largely suffice. The focus will be on the human actors—designers, manufacturers, and users—holding them accountable for the misuse or malfunction of their AI tools. In a second, transitional phase, as AI systems achieve greater autonomy and “co-exist” with humans, the law will need to develop new doctrines to handle the grey areas where human control is partial. This is where the theoretical work on attributing fault and establishing new forms of liability will be crucial. Finally, in a hypothetical third phase, if truly autonomous “strong AI” emerges, the legal system may face its most profound transformation, potentially requiring the recognition of AI as a legal subject in its own right.
In conclusion, Daocui Sun’s paper is a powerful and timely intervention in the field of AI and law. It moves beyond the hype and the fear, offering a clear-eyed assessment of the intellectual challenges at hand. He diagnoses the current state of research as suffering from a lack of foundational clarity and a tendency toward unproductive extremes. His prescription is for a new, more rigorous, and forward-looking discipline—a true “Artificial Intelligence Criminal Law”—that can provide the theoretical underpinnings for a legal system capable of navigating the complex and uncertain future shaped by intelligent machines. It is a call for scholars, lawmakers, and practitioners to engage in a serious, sustained, and systematic effort to ensure that the rule of law can keep pace with the relentless march of technology.
Daocui Sun, Institute of National Legal Aid, China University of Political Science and Law, ACADEMICS, DOI: 10.3969/j.issn.1002-1698.2021.12.007