Artificial Intelligence Lacks Legal Personhood, Study Says

A recent academic study published in the Journal of China University of Petroleum (Edition of Social Sciences) argues that artificial intelligence (AI) should not be granted legal personhood. Instead, AI should be classified as a special category of property under civil law, according to the paper's author, Xu Yuewei, an associate researcher at the Legal Affairs Office of China University of Petroleum (East China).

The article, titled “The Negation of the Qualification of Legal Personality of Artificial Intelligence and Its Legal Regulation Design,” was published in August 2021. It presents a comprehensive critique of the growing trend among legal scholars and policymakers to treat advanced AI systems as legal entities—akin to persons or corporations—with rights and responsibilities.

Xu’s central argument is grounded in rationalism and legal realism. He contends that despite the rapid development of AI technologies, such systems fundamentally remain tools created and controlled by humans. They lack the essential attributes required for legal personhood: rational consciousness, moral agency, and the ability to interpret and abide by legal norms.

The debate over AI's legal status has intensified in recent years, driven by high-profile technological milestones. In 2017, AlphaGo defeated Ke Jie, then the world's top-ranked Go player, signaling AI's growing cognitive capabilities. Around the same time, Saudi Arabia granted citizenship to a humanoid robot named Sophia, sparking global discussion on whether machines could possess rights.

Meanwhile, regulatory bodies in various countries have explored frameworks for governing AI. In 2016, the European Parliament's Legal Affairs Committee proposed classifying advanced robots as "electronic persons" that could hold rights and obligations similar to those of legal entities. In the United States, the National Highway Traffic Safety Administration determined in 2016 that the self-driving system in Google's autonomous vehicle could be regarded as the "driver" for the purposes of federal motor vehicle safety regulations.

These developments have fueled a wave of scholarly support for granting AI some form of legal recognition. Some researchers advocate for “limited legal personality,” arguing that highly autonomous systems should bear partial responsibility for their actions. Others propose “electronic personhood” or “artificial legal personality,” drawing analogies to corporate personhood.

Xu, however, challenges this line of thinking. He asserts that such proposals are based more on technological enthusiasm than on sound legal reasoning. “The idea of granting legal personality to AI,” he writes, “is not a necessary evolution of law, but a speculative leap that risks undermining the coherence of existing legal systems.”

One of the key flaws in the pro-personhood argument, Xu explains, is its misunderstanding of the distinction between human autonomy and machine automation. While AI can simulate decision-making, it does so through pre-programmed algorithms and statistical models. Its actions are not driven by intention, desire, or ethical reasoning, but by data processing and optimization routines designed by human engineers.

“AI lacks the ‘soul of a person’—rational consciousness,” Xu emphasizes. He draws on classical legal theory, referencing the concept of the “rational actor” as a foundational assumption in civil law. In both Western and Chinese legal traditions, legal subjects are presumed to possess cognitive and behavioral rationality—the ability to understand rules, foresee consequences, and make value-based choices.

AI, in contrast, operates within a framework of “digital rationality.” Its decisions are the result of pattern recognition and computational efficiency, not moral deliberation. Even when AI systems appear to make independent choices—such as selecting a route for a self-driving car or generating a news article—they are executing instructions embedded in their code by human designers.

Moreover, Xu points out, AI cannot comprehend the social meaning of laws. Legal norms are not merely sets of instructions; they embody values, ethics, and societal expectations. Humans interpret laws through context, precedent, and moral reasoning. AI, however, lacks the capacity for such interpretation. It cannot grasp the implications of breaking a law or understand the harm caused by its actions.

Even if future AI systems surpass human intelligence—a scenario often referred to as the “technological singularity”—this would not automatically qualify them for legal personhood. Superior computational ability does not equate to moral agency. As Xu notes, “Intelligence without consciousness is not personhood.”

Another argument Xu refutes is the analogy between AI and legal fictions such as corporations. Corporations are granted legal personhood not because they are sentient, but because they represent collective human will. A corporation’s decisions are made by people—boards, executives, shareholders—and its liability is ultimately tied to human actors. This is why corporate crime often involves “dual punishment,” where both the entity and responsible individuals are held accountable.

AI, by contrast, does not represent a collective human will. Its decisions emerge from algorithms trained on data, not from democratic or managerial processes. There is no “will” behind the machine, only code. Therefore, Xu argues, the legal fiction of personhood cannot be extended to AI in the same way it was to corporations.

Granting AI legal personhood would also create significant practical and ethical risks. If AI systems were allowed to bear legal responsibility, manufacturers and developers might use this as a shield to avoid liability. For example, if a self-driving car causes a fatal accident, the company could claim that the AI “driver” was independently responsible, thus evading accountability.

This would not only undermine justice but also weaken incentives for safety and ethical design. If creators know they won’t be held liable, they may cut corners in testing, transparency, and risk assessment. Xu warns that such a shift could lead to a “moral hazard” in AI development, where innovation is prioritized over public safety.

Furthermore, the expansion of legal personhood to non-human entities could set a dangerous precedent. If AI can be a legal person, why not animals? Or plants? Or even software agents? The boundaries of personhood would become blurred, potentially diluting the rights and protections currently afforded to human beings.

Given these concerns, Xu proposes an alternative framework: AI should be recognized as a special object within civil law. This classification acknowledges that AI is not just another piece of property like a chair or a car, but a technologically advanced, semi-autonomous system that requires tailored legal treatment.

The concept of “special object” is rooted in the theory of object hierarchy, which categorizes property based on its social, ethical, and functional significance. Traditional categories include “ethical objects” (e.g., human remains, pets), “general objects” (e.g., ordinary goods), and “special objects” (e.g., intellectual property, virtual assets).

AI, Xu argues, fits best within the “special object” category due to its unique characteristics: high complexity, partial autonomy, and deep integration into social and economic systems. Classifying AI this way allows the law to impose specific rules without altering the fundamental subject-object distinction.

Under this model, AI systems remain legal objects—property owned and controlled by individuals or organizations. However, because of their capabilities, they are subject to enhanced regulatory oversight. This includes requirements for transparency, safety testing, and accountability mechanisms.

For instance, in cases of AI-caused harm—such as a robot injuring a worker or a recommendation algorithm spreading misinformation—the responsibility should fall on the human stakeholders: designers, manufacturers, operators, and deployers. This aligns with existing product liability principles, where defective products lead to liability for producers, not the products themselves.

Xu suggests that AI-related liability should be governed by a modified version of strict liability. Under this regime, victims of AI harm would not need to prove negligence; instead, the burden would be on the AI’s operators to demonstrate that they took all reasonable precautions. This would encourage higher safety standards and reduce the incentive to deploy risky systems.

To support this framework, Xu recommends several policy measures. First, a national AI registration and certification system should be established. Before any AI system enters the market, it should undergo rigorous evaluation for safety, bias, and compliance with ethical guidelines. This would function similarly to how medical devices or pharmaceuticals are regulated.

Second, specialized technical agencies should be created to investigate AI-related incidents. Given the complexity of algorithms and data systems, standard legal procedures may not suffice. Experts in computer science, ethics, and law would need to collaborate to determine how an AI system failed and who is responsible.

Third, mandatory insurance should be required for high-risk AI applications. Just as car owners must carry liability insurance, companies deploying autonomous vehicles, medical diagnostic systems, or financial trading algorithms should be required to have insurance coverage. This would ensure that victims can be compensated promptly, even if the responsible party faces financial difficulties.

In the realm of intellectual property, Xu addresses the issue of AI-generated content. With AI now capable of writing articles, composing music, and creating visual art, questions arise about authorship and copyright. Current laws in China and most other jurisdictions recognize only natural persons, legal entities, or organizations as authors.

Since AI lacks the capacity for autonomous creation, Xu argues that AI-generated works should be treated as derivative works owned by the human or corporate entity that controls the AI. This could be the developer, the user, or the investor, depending on the context. For example, a news article written by an AI assistant for a journalist would belong to the journalist or their employer.

To manage the growing volume of AI-generated content, Xu proposes a registration system. Instead of relying solely on automatic copyright upon creation, creators could register AI-assisted works to clarify ownership and facilitate dispute resolution. This would help prevent conflicts over attribution and protect the rights of human contributors.

Regarding privacy and data protection, Xu highlights the risks posed by AI’s ability to collect, analyze, and exploit personal information. Platforms like TikTok, Toutiao, and Kuaishou use AI to track user behavior, predict preferences, and deliver targeted content—often without meaningful consent.

While China’s Civil Code includes a dedicated chapter on personality rights, Xu believes it does not go far enough in regulating AI-driven data practices. He calls for stronger obligations on data controllers, including clear notice requirements, opt-in consent mechanisms, and adherence to the principle of data minimization.

Additionally, users should be granted new rights to counterbalance AI’s power. These include the right to request deletion of personal data (the “right to be forgotten”), the right to receive explanations about how AI systems process their data (algorithmic transparency), and the right to opt out of automated decision-making.

These rights, Xu argues, are essential for maintaining human dignity and autonomy in an age of pervasive AI. Without them, individuals risk becoming passive subjects of algorithmic control, their choices shaped by invisible systems they cannot understand or challenge.

Xu’s paper also touches on the broader philosophical implications of AI personhood. He warns against what he calls the “dehumanization of law”—a shift where legal systems begin to prioritize technological efficiency over human values. If AI is treated as a legal subject, it may signal that machines are becoming ends in themselves, rather than tools serving human ends.

This, he cautions, could erode the anthropocentric foundation of law. Legal systems have historically evolved to protect human rights, promote justice, and regulate social relationships among people. Introducing non-human actors as legal subjects risks distorting these purposes.

Instead, Xu advocates for a human-centered approach to AI governance. The law should focus on regulating the human actors behind AI—those who design, deploy, and profit from it. This ensures accountability, preserves ethical standards, and upholds the primacy of human agency.

In conclusion, Xu’s study offers a sobering counterpoint to the growing enthusiasm for AI personhood. While acknowledging the transformative potential of AI, he insists that the law must remain grounded in reality. AI, no matter how advanced, remains a product of human ingenuity—not a peer to human beings.

By classifying AI as a special object rather than a legal person, the legal system can adapt to technological change without sacrificing its core principles. This approach balances innovation with responsibility, ensuring that AI serves humanity—not the other way around.

As AI continues to evolve, Xu’s framework provides a principled foundation for future legislation. It reminds policymakers and scholars alike that legal innovation must be guided by reason, ethics, and a deep respect for human dignity.

The full paper was published in the Journal of China University of Petroleum (Edition of Social Sciences), Volume 37, Issue 4 (August 2021), DOI: 10.13216/j.cnki.upcjess.2021.04.0012. Author: Xu Yuewei, Legal Affairs Office, China University of Petroleum (East China).