Artificial Intelligence and the Law: Who Owns the Future?
As artificial intelligence systems grow increasingly sophisticated, they are no longer just tools—they are becoming creators, drivers, and decision-makers. From self-driving cars navigating city streets to AI algorithms composing music and generating news articles, the line between human and machine agency is blurring. This technological evolution raises profound legal questions: Can an AI be a legal person? Who owns the works it produces? And when an AI causes harm, who should be held accountable?
These are not hypothetical dilemmas. In 2017, Saudi Arabia granted citizenship to a humanoid robot named Sophia, marking a symbolic milestone in the global conversation about AI’s role in society. While the gesture was largely ceremonial, it sparked serious debate among legal scholars about the status of intelligent machines under the law. As AI systems become more autonomous, traditional legal frameworks—designed for human actors—are struggling to keep pace.
A recent study published in the Journal of Yangtze Normal University offers a comprehensive analysis of the legal challenges posed by AI. Its authors, Wang Hongxia of the People's Procuratorate of the Zhengzhou Airport Economic Comprehensive Experimental Zone and Zhang Anyi of the School of Civil and Commercial Law at Henan University of Economics and Law, ground their research in Chinese civil law and comparative jurisprudence. They argue that while AI systems are transforming society, they do not, and should not, qualify as legal persons. Instead, the authors advocate a legal framework that holds human actors accountable, treats AI-generated content as protectable intellectual property, and applies product liability principles to AI-related harms.
The debate over AI’s legal personhood is not new. Some scholars have proposed creating a new category of legal entity—often called an “electronic person” or “digital legal person”—to accommodate AI systems that operate with a high degree of autonomy. Proponents argue that granting AI a form of legal status would allow it to own property, enter contracts, and bear liability, thereby streamlining legal processes in a world where machines make decisions independently.
However, Wang and Zhang challenge this notion. They argue that legal personhood is not granted based on intelligence or autonomy alone, but on the capacity for independent will, purpose, and responsibility. AI, they assert, lacks all three. Unlike a corporation—which has a board of directors, bylaws, and internal decision-making structures—AI operates according to algorithms designed and deployed by humans. Even the most advanced machine learning systems do not possess intrinsic goals or desires. They optimize for objectives set by their creators, but they do not choose those objectives themselves.
“Artificial intelligence does not have its own independent purpose or independent will,” the authors write. “It is a product designed to serve human needs.” This distinction is crucial. Legal systems exist to regulate human behavior and protect human interests. Granting AI legal personhood would not serve this purpose; instead, it could create loopholes that allow human actors to evade responsibility.
The authors draw a historical parallel to the evolution of corporate personhood. In the 19th century, courts began recognizing corporations as legal persons not because they were sentient, but because doing so facilitated economic activity, limited investor liability, and enabled organizations to enter contracts and sue or be sued. The same logic does not apply to AI. There is no social or economic value in treating a machine as a legal actor in its own right. If the goal is to limit liability for AI developers, existing corporate structures already provide that protection.
Moreover, the idea that AI could be held accountable for its actions is fundamentally flawed. Accountability requires the ability to understand rules, anticipate consequences, and modify behavior in response to legal norms. AI systems do not possess these capacities. They cannot feel remorse, understand punishment, or be deterred by the threat of liability. Even if an AI system were programmed to comply with legal rules, its compliance would be mechanical, not moral.
The authors also dismiss the idea of tiered legal status based on AI capability—such as distinguishing between “weak” and “strong” AI. They argue that such classifications are arbitrary and unworkable. The level of autonomy an AI exhibits reflects differences in engineering and design, not a fundamental change in its ontological status. A self-driving car may make real-time decisions on the road, but it does so within parameters defined by its programmers. It does not choose to drive; it does not decide where to go unless instructed. Its “intelligence” is a function of its programming, not an emergent property of consciousness.
In practice, granting AI legal personhood could lead to absurd outcomes. If Sophia the robot were truly a citizen, would she have the right to vote? To marry? To own property? Saudi Arabia has not clarified these implications, and few legal systems are prepared to answer them. The authors caution against rushing to create new legal categories without a clear understanding of their consequences.
Instead, they propose a more pragmatic approach: treat AI as property, not person. This aligns with existing legal frameworks in which tools, machines, and software are considered objects within legal relationships, not subjects. Under this view, the AI system is a means through which human actors achieve their goals, much like a factory machine or a word processor.
This perspective has important implications for intellectual property law. As AI systems generate music, art, and literature, the question of authorship becomes urgent. Can a machine be an author? Does an AI-generated poem qualify for copyright protection?
The authors affirm that AI-generated works can meet the legal standard of originality. Copyright law protects original expressions fixed in a tangible medium, and the authors argue that originality should be judged by the objective characteristics of a work rather than the identity of its creator, an approach they ground in Chinese copyright doctrine. U.S. practice is more restrictive: the Copyright Office requires human authorship, and it has registered works containing AI-generated material only where a human contributed sufficient creative input, excluding the purely machine-generated elements from protection.
However, the authors emphasize that recognizing a work as original does not resolve the question of ownership. Since AI cannot hold rights, the copyright must be assigned to a human or legal entity. The authors argue that the most logical and equitable solution is to grant ownership to the AI’s owner—the individual or organization that possesses and operates the system.
This approach serves two key purposes. First, it aligns with the incentive theory of intellectual property. Copyright exists to encourage creativity by granting creators exclusive rights to their works. But AI systems do not need incentives; they do not seek recognition or profit. Humans do. By granting copyright to the AI owner, the law encourages investment in AI development and deployment. Without such protection, there would be little economic incentive to develop creative AI systems.
Second, the ownership model reflects the reality of AI creation. While AI may generate content autonomously, the process is initiated and guided by humans. The owner decides what type of content to generate, sets the parameters, selects the training data, and often curates the output. In this sense, the AI functions as a tool, much like a camera or a paintbrush. Just as a photographer owns the images taken with their camera, the AI owner should own the works produced by their system.
The authors acknowledge exceptions. In cases where a user provides significant creative input—such as selecting unique data sets, modifying algorithms, or editing outputs—the user and the AI owner may be considered co-authors. In commissioned works, where a third party pays for AI-generated content, the rights may be transferred by contract. But in the default case, ownership should reside with the AI owner.
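To make this allocation scheme concrete, the following is a minimal sketch in Python of the default-and-exceptions logic the authors describe. The function, class, and category names are hypothetical, chosen only for illustration; the sketch is not drawn from the paper itself.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Contribution(Enum):
    """Hypothetical categories for how an AI-generated work came about."""
    DEFAULT = auto()                  # owner simply ran the system
    SIGNIFICANT_USER_INPUT = auto()   # user curated data, tuned algorithms, or edited output
    COMMISSIONED = auto()             # a third party paid for the output under contract


@dataclass
class Work:
    ai_owner: str
    user: str | None = None
    commissioner: str | None = None
    contribution: Contribution = Contribution.DEFAULT


def copyright_holders(work: Work) -> list[str]:
    """Allocate ownership per the authors' proposal: default to the
    AI owner, with co-authorship and contractual exceptions."""
    if work.contribution is Contribution.COMMISSIONED and work.commissioner:
        # Commissioned works: rights pass to the paying party by contract.
        return [work.commissioner]
    if work.contribution is Contribution.SIGNIFICANT_USER_INPUT and work.user:
        # Substantial creative input makes the user a co-author.
        return [work.ai_owner, work.user]
    # Default case: ownership resides with the AI system's owner.
    return [work.ai_owner]


print(copyright_holders(Work(ai_owner="StudioCo")))  # ['StudioCo']
print(copyright_holders(
    Work(ai_owner="StudioCo", user="Alice",
         contribution=Contribution.SIGNIFICANT_USER_INPUT)))  # ['StudioCo', 'Alice']
```

The point of the sketch is simply that the default rule does almost all the work; the exceptions are narrow and turn on identifiable human conduct, not on anything the AI itself did.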
This framework avoids the pitfalls of alternative models. Some scholars have suggested treating AI-generated works as “works made for hire,” similar to employee creations. But this analogy fails because AI is not an employee; it has no legal capacity to enter into contracts or perform duties. Others propose placing AI works in the public domain, arguing that no one should monopolize machine-generated content. But this would discourage innovation and fail to protect the human investment behind AI systems.
The authors also address the growing challenge of identifying the source of digital content. As AI-generated text, images, and videos become indistinguishable from human-created works, the risk of misinformation and copyright infringement increases. To address this, they recommend establishing a mandatory registration system for AI-generated works. Such a system would require creators to disclose the use of AI in the production process, making it easier to verify authorship and enforce rights.
This proposal echoes emerging regulatory trends. The European Union’s AI Act, for example, requires high-risk AI systems to maintain detailed logs of their operations. Similarly, the U.S. Copyright Office has begun requiring applicants to disclose AI involvement in submitted works. A formal registration system could enhance transparency, protect consumers, and support the enforcement of intellectual property rights.
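As a rough illustration of what an entry in such a registry might record, the sketch below defines a hypothetical disclosure record. Every field name is invented for this example; no actual registry schema exists in the paper or in current law.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class AIWorkRegistration:
    """Hypothetical entry in a mandatory registry for AI-generated works."""
    title: str
    claimant: str                  # human or legal entity asserting ownership
    ai_system: str                 # system used to generate the content
    ai_generated: bool             # whether any portion is machine-generated
    human_contributions: list[str] = field(default_factory=list)
    registered_on: date = field(default_factory=date.today)

    def to_json(self) -> str:
        """Serialize the record for filing or public lookup."""
        record = asdict(self)
        record["registered_on"] = self.registered_on.isoformat()
        return json.dumps(record, indent=2)


entry = AIWorkRegistration(
    title="Nocturne No. 7",
    claimant="Harmonia Media Ltd.",
    ai_system="in-house music model",
    ai_generated=True,
    human_contributions=["prompt design", "final editing"],
)
print(entry.to_json())
```

Even this toy record shows the two things the authors want a registry to capture: who claims the rights, and exactly where the machine's contribution ends and the human's begins.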
The most pressing legal challenge, however, may be liability. When an AI system causes harm—such as a self-driving car crashing into a pedestrian or a medical AI misdiagnosing a patient—who is responsible? Traditional tort law relies on fault: liability is assigned to those who acted negligently or intentionally caused harm. But in fully autonomous systems, there may be no human at the controls. The AI made the decision, but it cannot be sued.
Wang and Zhang argue that AI-related harms should be treated as product liability cases. Under this model, the manufacturer and designer of the AI system bear strict liability for defects that cause injury. This approach shifts the focus from human fault to product safety, ensuring that victims can obtain compensation even when no individual acted negligently.
Product liability is well-suited to the realities of AI technology. Unlike traditional machines, AI systems learn and adapt over time, making their behavior difficult to predict. Their decision-making processes—especially in deep learning models—are often opaque, even to their creators. This “black box” problem makes it nearly impossible to determine whether a specific error was due to a design flaw, a data anomaly, or an unforeseen interaction.
By applying strict liability, the law incentivizes developers to build safer systems from the outset. Knowing they will be held responsible for harms, companies have a stronger motivation to invest in rigorous testing, robust safety protocols, and transparent design. The authors also recommend extending liability to AI designers, not just manufacturers, since many defects originate in the algorithmic architecture rather than the physical product.
For sellers and third parties, the authors maintain the traditional fault-based standard. If a retailer sells a defective AI product without proper warnings, or if a user modifies the system in a way that introduces risk, they should be held liable based on negligence. This ensures that responsibility is distributed fairly across the supply chain.
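The division of labor between strict and fault-based liability can be summarized in a short sketch. The party roles and predicates below are simplifications invented for illustration, not a codification of any statute or of the authors' text.

```python
from enum import Enum, auto


class Party(Enum):
    MANUFACTURER = auto()
    DESIGNER = auto()
    SELLER = auto()
    USER = auto()


def liable(party: Party, *, defect_caused_harm: bool, was_negligent: bool) -> bool:
    """Apply the two-track scheme the authors describe: strict liability
    upstream in the supply chain, fault-based liability downstream."""
    if party in (Party.MANUFACTURER, Party.DESIGNER):
        # Strict liability: a harm-causing defect suffices; negligence is irrelevant.
        return defect_caused_harm
    # Sellers and users answer only for negligence, e.g. missing warnings
    # or risky modifications to the system.
    return was_negligent


# A designer is liable for a defect even without negligence...
assert liable(Party.DESIGNER, defect_caused_harm=True, was_negligent=False)
# ...while a seller who exercised due care is not.
assert not liable(Party.SELLER, defect_caused_harm=True, was_negligent=False)
```

The asymmetry is the whole design: the parties who control the algorithmic architecture carry the risk of the "black box," while downstream actors are judged by ordinary standards of care.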
The authors also advocate for mandatory liability insurance for AI products. Given the high costs of potential damages and the difficulty of predicting risks, insurance would help spread the financial burden and ensure that victims are compensated promptly. It would also allow developers to innovate without fear of catastrophic lawsuits, as long as they adhere to best practices.
Their proposal aligns with recent developments in AI regulation. The European Commission has proposed an AI Liability Directive that would introduce a presumption of causality in AI-related injury cases, making it easier for victims to claim compensation. China’s Cybersecurity Law and Data Security Law also impose strict obligations on AI developers, though they do not yet establish a comprehensive liability framework.
The authors conclude that while AI is transforming society, it does not require a complete overhaul of the legal system. Instead, existing legal concepts—such as property, contract, and product liability—can be adapted to address new challenges. The key is to maintain a human-centered legal framework that protects individual rights, promotes innovation, and ensures accountability.
They warn against the temptation to anthropomorphize AI. Machines may mimic human behavior, but they do not possess consciousness, intention, or moral agency. Treating them as if they do risks undermining the foundations of law. “Legal norms are designed to regulate human behavior,” they write. “The goal of AI regulation should be to serve human interests, not to create artificial entities with rights and duties of their own.”
As AI continues to evolve, the legal community must remain vigilant. New technologies will always outpace legislation, but thoughtful, principle-based regulation can help bridge the gap. The work of Wang Hongxia and Zhang Anyi offers a clear, pragmatic roadmap for navigating the legal complexities of the AI era—one that prioritizes human dignity, responsibility, and justice.
Reference: Wang Hongxia and Zhang Anyi, Journal of Yangtze Normal University, DOI: 10.19933/j.cnki.ISSN1674-3652.2021.04.011