AI-Generated Content Sparks Copyright Debate in China’s Legal Sphere

As artificial intelligence (AI) systems grow increasingly sophisticated in producing text, music, and visual art, a pressing legal question has emerged: can AI-generated content qualify for copyright protection, and if so, who owns it? This issue is no longer confined to academic speculation. With AI tools like Tencent's Dreamwriter generating news articles and Google's DeepDream producing images, the intersection of machine creativity and intellectual property law has become a focal point of legal discourse in China.

A recent in-depth study by Song Xiaomei, a graduate researcher at the Law School of Guilin University of Electronic Technology, examines the complex legal terrain surrounding AI-generated works. Published in the journal Science, Technology and Law, the paper delves into the challenges of defining authorship, assigning liability, and adapting existing copyright frameworks to accommodate the rapid evolution of AI technologies. The research arrives at a time when global legal systems are grappling with similar questions, yet China’s unique legal context and technological landscape offer a distinct perspective on how these issues might be resolved.

The core of the debate lies in the definition of a “work” under copyright law. In China, as in most jurisdictions, a work must possess originality—meaning it is independently created and reflects the author’s intellectual input, thoughts, or emotions. Historically, this requirement has been tied to human authorship. But as AI systems produce content that is indistinguishable from human-created works, the traditional boundaries of authorship are being tested.

Song’s analysis begins by acknowledging the dual nature of AI: it is both a technological advancement and a social phenomenon. As such, its development necessitates a corresponding evolution in legal and regulatory frameworks. The author highlights that AI-generated content—ranging from news reports to musical compositions—often originates from algorithms trained on vast datasets of existing human-created works. This process, known as machine learning, involves the AI analyzing patterns, styles, and structures from copyrighted material to generate new outputs. While the final product may be original in expression, its foundation is deeply rooted in pre-existing intellectual property.

This raises a critical concern: does the use of copyrighted works to train AI systems constitute infringement? Unlike traditional plagiarism, where a direct copy or close imitation can be identified, AI training involves statistical analysis and pattern recognition across millions of data points. The resulting output may not reproduce any single work but may still embody elements derived from protected content. This makes infringement detection exceptionally difficult, as there is no single “source” to compare against.

Song points out that the conventional legal test for copyright infringement—“access plus substantial similarity”—is ill-suited for AI-generated content. In traditional cases, courts assess whether the alleged infringer had access to the original work and whether the two works are substantially similar in expression. However, AI systems do not “access” works in the human sense; they process them algorithmically, often without human oversight. Moreover, the output may be statistically similar to thousands of works in the training set, making it nearly impossible to pinpoint a specific infringement.
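The mismatch between pairwise comparison and diffuse statistical similarity can be illustrated with a toy sketch. The feature vectors and similarity scores below are entirely hypothetical, and cosine similarity stands in for whatever comparison a court or expert might use; the point is only that a generated output can resemble many works moderately without resembling any single one closely.

```python
# Toy illustration (hypothetical data): why "access plus substantial
# similarity" struggles when similarity is spread across a corpus.
# We compare a generated output's stylistic feature vector against
# several protected works using cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical feature vectors for five works in the training set.
training_works = {
    "work_A": [0.9, 0.1, 0.3, 0.2],
    "work_B": [0.2, 0.8, 0.4, 0.1],
    "work_C": [0.3, 0.3, 0.9, 0.2],
    "work_D": [0.1, 0.4, 0.2, 0.8],
    "work_E": [0.7, 0.2, 0.6, 0.3],
}

# A generated output that blends elements of all of them.
generated = [0.4, 0.4, 0.45, 0.35]

scores = {name: round(cosine(generated, vec), 3)
          for name, vec in training_works.items()}
print(scores)
# Every work scores moderately high, yet none stands out as "the"
# source -- the traditional pairwise test has nothing to latch onto.
```

Under these assumed numbers, all five works land in a narrow band of similarity, so a court applying the conventional two-work comparison would find no single candidate for infringement even though the output plainly derives from the corpus as a whole.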

The ambiguity extends to the ownership of AI-generated works. If an AI produces a novel or a painting, who holds the copyright? The software developer who designed the algorithm? The user who prompted the AI with specific instructions? The company that owns the computing infrastructure? Or the AI itself?

Current Chinese law does not recognize AI as a legal person. Unlike corporations, which are granted legal personality to own property and enter contracts, AI systems are considered tools—albeit highly advanced ones. This means that AI cannot hold copyright in its own right. Therefore, the question shifts to which human or corporate entity should be recognized as the author.

Song reviews several scholarly perspectives on this issue. Professor Wu Handong argues that AI-generated content can possess originality, especially when human input is involved in selecting training data, adjusting parameters, or curating outputs. In such cases, the human’s creative choices contribute to the final work, making it eligible for copyright protection. By contrast, Professor Wang Qian maintains that current AI systems are “weak AI,” meaning they operate within predefined parameters and lack true autonomy. Since they merely execute instructions without independent thought, their outputs cannot be considered original creations under copyright law.

Another scholar, Cao Yuan, emphasizes the absence of “thought” in AI systems. According to Cao, copyright protects expressions of thought, and since AI lacks consciousness, its outputs are mere mechanical reproductions, regardless of how complex they appear. Song challenges this view by invoking the “idea-expression dichotomy,” a foundational principle in copyright law. Under this doctrine, only the expression of ideas—not the ideas themselves—is protected. Therefore, the focus should be on whether the output is an original expression, not whether it stems from a conscious mind.

Drawing on this reasoning, Song concludes that AI-generated content can indeed qualify as a “work” under Chinese copyright law, provided it meets the originality standard. The key lies in the degree of human involvement. When a human user provides creative direction—such as selecting themes, adjusting stylistic parameters, or editing the final output—the resulting work reflects human intellectual effort and should be protected.

However, this raises another challenge: how to allocate rights among multiple stakeholders. Song proposes a flexible approach based on contractual agreements. If a user and a developer have a contract specifying ownership, that agreement should prevail. In the absence of such a contract, ownership could default to the user, as they are the one who initiated and directed the creative process. This mirrors the treatment of works made for hire in traditional employment contexts, where employers often hold copyright over works created by employees using company resources.

The issue of liability is equally complex. If an AI-generated work infringes on someone else’s copyright, who is responsible? The AI itself cannot be held liable, as it lacks legal personhood and the capacity to bear responsibility. The developer may argue that they merely created a tool and cannot control how it is used. The user might claim they were unaware of the AI’s training data or the potential for infringement.

Song suggests that liability should be assigned based on control and benefit. The party that profits from the AI-generated content and has the ability to influence its output should bear the primary responsibility. In most cases, this would be the user or the organization deploying the AI. However, if the developer knowingly designed the system to replicate protected works or failed to implement reasonable safeguards, they could also share liability.

To address the difficulty in detecting infringement, Song recommends refining the legal standards for assessing AI-generated content. Instead of relying solely on direct comparisons, courts should consider broader indicators, such as market impact and audience perception. If an AI-generated song displaces sales of an original artist’s work, even without direct copying, it may still constitute unfair competition or unjust enrichment.

The paper also emphasizes the need for a balanced approach that protects creators without stifling innovation. Overly restrictive copyright rules could hinder the development of AI technologies, limiting their potential to enhance creativity and productivity. Conversely, a lack of protection could discourage investment in AI development and devalue human creativity.

Song advocates for a regulatory framework that promotes transparency in AI training processes. Developers could be required to disclose the sources of their training data or implement content filtering mechanisms to minimize the risk of infringement. Licensing schemes, similar to those used in music sampling, could allow AI developers to legally use copyrighted works for training, with compensation to rights holders.

International cooperation is also essential. As AI systems operate across borders, a fragmented legal landscape could create confusion and legal uncertainty. Harmonizing standards on AI-generated content would facilitate global innovation while ensuring fair treatment of creators.

The implications of this research extend beyond law. They touch on fundamental questions about creativity, authorship, and the role of machines in society. As AI becomes more integrated into creative industries—from journalism to film to music—the legal system must adapt to reflect these changes. The goal is not to resist technological progress but to guide it in a way that respects both innovation and the rights of individuals.

Song’s work contributes to a growing body of scholarship that seeks to reconcile the rapid pace of technological change with the stability of legal principles. By proposing practical solutions—such as contractual defaults, liability allocation based on control, and updated infringement standards—the study offers a roadmap for policymakers, courts, and industry stakeholders.

In the coming years, as AI systems become even more autonomous, the line between human and machine creativity may blur further. Some envision a future where AI not only generates content but also negotiates licenses, detects infringement, and manages intellectual property portfolios. While such scenarios may seem futuristic, they underscore the importance of establishing clear legal foundations today.

The debate over AI and copyright is not just about legal technicalities; it is about values. It forces society to reconsider what it means to create, who deserves recognition, and how to distribute the benefits of technological progress. As AI reshapes the creative landscape, the law must evolve to ensure that it serves the public interest, protects individual rights, and fosters continued innovation.

Song Xiaomei’s research stands as a timely and rigorous contribution to this evolving conversation. By grounding her analysis in Chinese legal doctrine while engaging with international perspectives, she provides a nuanced understanding of one of the most complex challenges at the intersection of law and technology.

The discussion also highlights the importance of interdisciplinary collaboration. Legal scholars, computer scientists, ethicists, and policymakers must work together to develop solutions that are both technically sound and ethically responsible. This is especially crucial in a field where technological capabilities often outpace regulatory frameworks.

One potential avenue for reform is the creation of a specialized legal category for AI-generated works. Instead of forcing them into existing copyright paradigms, a new regime could be designed to reflect their unique characteristics. Such a system might grant limited protection with shorter terms, require mandatory attribution to the AI system and its human operators, or establish collective licensing mechanisms for training data.

Another approach is to expand the concept of fair use or fair dealing to explicitly include AI training. Many jurisdictions already allow limited use of copyrighted material for research, education, and criticism. Extending this to non-expressive uses—such as data mining for machine learning—could provide legal clarity without undermining creators’ rights.

Ultimately, the resolution of these issues will depend on ongoing dialogue between stakeholders. Industry leaders must engage with creators’ organizations, legal experts, and civil society to build consensus on best practices. Public consultations, pilot programs, and experimental licensing models could help test different approaches before enacting permanent rules.

As AI continues to transform the creative economy, the legal system faces a dual mandate: to protect the fruits of human ingenuity and to enable the next wave of technological advancement. Striking the right balance will require careful thought, empirical evidence, and a willingness to adapt. Song’s research offers a valuable foundation for this effort, demonstrating that with thoughtful policy design, it is possible to uphold the principles of copyright while embracing the possibilities of artificial intelligence.

In conclusion, the rise of AI-generated content is not a threat to copyright, but an opportunity to re-examine and strengthen it. By addressing the challenges of originality, ownership, and liability with clarity and foresight, legal systems can ensure that both human and machine creativity are recognized and rewarded. The path forward lies not in resistance, but in adaptation—ensuring that the law remains a dynamic force in a rapidly changing world.

Song Xiaomei, Law School of Guilin University of Electronic Technology, Science, Technology and Law, DOI: 10.13555/j.cnki.cnlaw.2022.03.005