AI and Intellectual Property: Navigating Legal Frontiers in the Digital Age
As artificial intelligence continues to evolve at an unprecedented pace, its impact on creative industries, technological innovation, and legal frameworks has become impossible to ignore. From automated journalism to algorithm-driven drug discovery, AI systems are no longer mere tools but active participants in the creation of content and inventions. This shift raises profound questions about ownership, authorship, and the very foundations of intellectual property (IP) law. While the technology advances rapidly, legal systems around the world struggle to keep up, creating a growing gap between innovation and regulation.
At the heart of this dilemma lies a fundamental question: who owns the output generated by an AI system? Is it the developer who designed the algorithm, the user who prompted the system, the owner of the training data, or the machine itself? The absence of clear answers has led to increasing legal uncertainty, particularly in jurisdictions like China, where recent scholarly work is beginning to illuminate the complexities involved.
One of the most comprehensive explorations of this issue comes from Zheng Wansheng, a professor at the Security Office of Daqing Campus, Harbin Medical University. In his 2021 article published in Technology and Law, Zheng offers a systematic analysis of the challenges AI poses to existing IP frameworks, focusing on three core areas: subject system, object system, and content system. His work, grounded in Chinese legal doctrine yet relevant to global debates, provides a timely roadmap for policymakers, technologists, and legal practitioners navigating this uncharted terrain.
Zheng begins by defining artificial intelligence not as a monolithic entity but as a branch of computer science that simulates, extends, and amplifies human intelligence through intelligent algorithms. Unlike traditional software, AI systems—particularly those based on neural networks—can process vast datasets, recognize patterns, and generate outputs without explicit step-by-step programming. This capability enables machines to produce articles, music, designs, and even patentable inventions, often at speeds and scales far exceeding human capacity.
A striking example cited by Zheng is the case of Tencent’s AI-powered news bot, which during the Rio Olympics generated an average of 30 sports reports per hour across multiple disciplines, including table tennis, badminton, and basketball. The bot’s efficiency and consistency outperformed most human journalists in terms of speed and volume, demonstrating AI’s potential to revolutionize content creation. However, such achievements also expose the limitations of current IP laws, which were designed with human creators in mind.
The first major challenge Zheng identifies is the subject system—the question of who qualifies as a legal subject entitled to hold intellectual property rights. Under current Chinese law, including the Civil Code and the Copyright Law, only natural persons, legal entities, and unincorporated organizations are recognized as IP holders. Artificial intelligence, regardless of its sophistication, is not considered a legal person. This creates a vacuum when it comes to assigning ownership of AI-generated works.
Zheng argues that this gap is becoming increasingly untenable. As AI systems grow more autonomous through deep learning and reinforcement mechanisms, the line between tool and creator blurs. If a machine independently generates a novel pharmaceutical compound or a piece of music, should the credit go solely to the programmer who wrote the initial code? Or should the user who fine-tuned the model and provided the input be recognized? The current legal framework offers no clear guidance, leaving rights ambiguous and disputes inevitable.
Moreover, the issue extends beyond ownership to liability. If an AI system produces content that infringes on existing copyrights—say, by reproducing a protected melody or mimicking a distinctive writing style—who is responsible? The AI itself cannot be sued, and holding the developer accountable for every output may stifle innovation. Similarly, end users may lack the technical knowledge to foresee potential infringement. Zheng emphasizes that without a redefined subject system, the legal landscape will remain fragmented and unpredictable.
One proposed solution, which Zheng cautiously endorses, is the concept of legal personhood for AI. By treating advanced AI systems as “legal persons” in a limited, functional sense—similar to how corporations are treated under the law—governments could assign rights and responsibilities more clearly. This would not imply full human-like rights but rather a statutory recognition that allows AI systems to be named as authors or inventors for the purpose of IP registration.
However, Zheng acknowledges the philosophical and practical hurdles. Unlike humans, AI lacks consciousness, intentionality, and emotional depth. Its creations are the result of pattern recognition and statistical inference, not personal expression or lived experience. Granting AI full authorship rights could undermine the moral underpinnings of copyright, which traditionally protects the personal connection between creator and creation.
This leads to the second major challenge: the object system, or the criteria for what constitutes a protectable work. In copyright law, protection typically requires “originality” or “independence of creation”—a threshold that implies human intellectual effort. But when an AI generates a poem or a painting, how do we assess originality? Is the work truly “original” if it is derived from millions of pre-existing examples?
Zheng points out that AI-generated content often bears clear traces of imitation, shaped by the data it was trained on and the algorithms that govern its output. While this enables high-quality, consistent results, it also increases the risk of unintentional plagiarism. The “black box” nature of many AI models makes it difficult to trace the source of specific elements, complicating infringement claims.
To address this, Zheng advocates for a shift toward objective evaluation standards. Instead of focusing on the subjective intent of the creator—which is absent in AI systems—laws should assess the output based on measurable criteria such as novelty, creativity, and aesthetic value. If a machine-generated design demonstrates sufficient innovation and technical advancement, it should qualify for protection, regardless of whether a human directly authored it.
This approach aligns with patent law principles, where inventions are judged on their technical contribution, not the identity of the inventor. Zheng notes that China’s Patent Law already emphasizes objective criteria such as novelty, inventiveness, and utility. Extending this logic to AI-generated inventions would ensure consistency and fairness, encouraging innovation without overburdening developers.
Yet, the question of originality remains contentious. Some legal scholars argue that true creativity requires intention and meaning—qualities AI lacks. Others counter that the law has always protected works created through mechanical or collaborative processes, from photography to computer-aided design. If a human directs an AI system with specific goals and parameters, the resulting work may still reflect human creativity, albeit indirectly.
Zheng suggests a hybrid model: recognizing the dual nature of AI creations, where both the human input (in setting up the system) and the machine’s autonomous processing contribute to the final output. In such cases, the law could adopt a co-authorship framework, assigning rights to the human stakeholders—developers, users, or owners—while acknowledging the AI’s role as a creative engine.
The third layer of the challenge lies in the content system—the scope and duration of IP rights. Traditional IP laws grant long protection periods: copyright typically lasts for the life of the author plus 50 to 70 years, while patents are protected for 20 years. But in the fast-moving world of AI, such durations may be excessive.
AI-driven fields like machine learning, robotics, and digital media evolve rapidly, with new models and applications emerging every few months. A 20-year patent on an AI algorithm could stifle competition and hinder progress, especially if the technology becomes obsolete within a few years. Similarly, extended copyright terms on AI-generated content could restrict public access and limit the development of new creative works.
Zheng proposes a more flexible and differentiated approach to IP duration. For AI-generated works, protection periods should be shorter and tailored to the technology’s lifecycle. He suggests that copyright for AI content could be limited to 10–15 years, while patents on AI inventions might be adjusted based on technological obsolescence rates. This would balance the need to incentivize innovation with the public interest in access and reuse.
Another key aspect of the content system is the treatment of moral rights—the non-economic rights of authors, such as the right to be credited and the right to object to distortion of their work. These rights are deeply tied to human identity and dignity. Since AI lacks personal identity, Zheng argues that moral rights should not apply to machine-generated works. Instead, protection should focus on economic rights, such as reproduction, distribution, and licensing.
This distinction has practical implications. For instance, AI-generated news articles or stock images could be freely adapted and repurposed, as long as proper attribution is given to the system or its operator. This would facilitate the growth of open data ecosystems and reduce transaction costs in digital markets.
To further clarify the legal status of AI creations, Zheng recommends several policy measures. First, mandatory labeling of AI-generated content: just as genetically modified foods are labeled, AI-produced works should be clearly marked to inform users and prevent deception. This transparency would help distinguish human-authored works from machine-generated ones, supporting fair competition and consumer trust.
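Zheng does not prescribe a technical format for such labels, but the idea is straightforward to operationalize. As a purely illustrative sketch (the field names, model name, and operator below are invented, not drawn from any mandated standard), an operator could attach a machine-readable provenance label to each generated work:

```python
import json
from datetime import datetime, timezone

def label_ai_content(text, generator, operator):
    """Wrap AI-generated text in a machine-readable provenance label.
    All field names here are illustrative, not from any official standard."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,          # the disclosure Zheng calls for
            "generator": generator,        # the model or system that produced it
            "operator": operator,          # the accountable human or corporate party
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_content("Match report: ...", "news-bot-v1", "ExampleCo")
print(json.dumps(record["provenance"], indent=2))
```

A downstream platform could then refuse to display, or visibly flag, any content whose label marks it as machine-generated, which is exactly the consumer-facing transparency the proposal aims at.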
Second, expanding the scope of fair use and fair dealing exceptions. Current IP laws allow limited use of copyrighted material for purposes such as criticism, education, and research. Zheng suggests broadening these exceptions to cover AI training, provided the use is non-commercial and does not harm the market for the original work. This would enable developers to train AI models on large datasets without facing constant litigation risks.
Third, incorporating AI outputs into neighboring rights or sui generis protection schemes. Neighboring rights, which protect performers, broadcasters, and database producers, could be extended to cover AI operators. Alternatively, a new category of IP rights could be created specifically for AI-generated works, offering limited protection that reflects their unique nature.
Fourth, lowering the threshold for compulsory licensing of AI inventions. In cases where a patented AI technology is essential for public welfare—such as in healthcare or environmental monitoring—governments should have the authority to grant licenses to third parties under reasonable terms. This would prevent monopolies and ensure broader access to critical innovations.
Zheng’s analysis is not just theoretical; it reflects a growing trend in global IP policy. In 2022, the European Union proposed a directive on AI that includes provisions for transparency and accountability in AI-generated content. The United States Copyright Office has issued guidance stating that only works with human authorship are eligible for copyright, though it allows registration of works where AI played a supportive role. China, meanwhile, has begun pilot programs in Shenzhen and Beijing to test new IP frameworks for AI innovations.
Despite these efforts, significant challenges remain. One major obstacle is the lack of international harmonization. Different countries have different approaches to AI and IP, creating legal fragmentation that complicates cross-border innovation. A Chinese AI company may find its product protected in one jurisdiction but treated as public domain in another. This uncertainty discourages investment and limits the global reach of AI technologies.
Another challenge is enforcement. Even if new laws are enacted, detecting and prosecuting AI-related IP violations is technically difficult. Deepfakes, algorithmic plagiarism, and automated content scraping are hard to trace and attribute. Legal systems will need to invest in digital forensics, blockchain-based provenance tracking, and AI-powered monitoring tools to keep pace with offenders.
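The blockchain-based provenance tracking mentioned above rests on a simple primitive: an append-only hash chain, where each record commits to the hash of the one before it, so any retroactive edit is detectable. A minimal sketch (illustrative only; a deployed system would add digital signatures, timestamps, and distributed replication):

```python
import hashlib
import json

def chain_record(prev_hash, payload):
    """Compute the hash of a provenance entry that commits to the
    previous entry's hash. Tampering with any earlier entry changes
    every hash after it, exposing the alteration."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

# Build a two-entry chain for two hypothetical AI-generated works.
genesis = chain_record("0" * 64, {"work": "article-001", "operator": "ExampleCo"})
second = chain_record(genesis, {"work": "article-002", "operator": "ExampleCo"})

# Rewriting history (changing the operator of the first entry)
# produces a different hash, so the original chain no longer verifies.
tampered = chain_record("0" * 64, {"work": "article-001", "operator": "Mallory"})
assert tampered != genesis
```

The design choice worth noting is that verification needs only the chain itself, not trust in the party that stored it, which is why this pattern recurs in proposals for tracing the origin of disputed digital works.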
Moreover, there is a risk of over-regulation. While legal clarity is essential, excessive restrictions could stifle the very innovation the laws aim to protect. Policymakers must strike a delicate balance—providing enough certainty to encourage investment while preserving the openness and dynamism that fuel technological progress.
Zheng concludes that the path forward requires collaboration across disciplines. Legal experts, computer scientists, ethicists, and industry leaders must work together to develop frameworks that are both principled and practical. He calls for interdisciplinary research, public consultations, and experimental legal zones where new IP models can be tested in real-world settings.
Ultimately, the rise of AI is not just a technological revolution but a legal and philosophical one. It forces us to reconsider what it means to create, to own, and to innovate. As machines become increasingly capable of generating meaningful output, the law must evolve to reflect this new reality—not by abandoning its core principles, but by adapting them to a world where human and artificial intelligence coexist.
The journey is just beginning. But with thoughtful analysis and proactive policy-making, societies can harness the power of AI while ensuring fairness, accountability, and respect for intellectual effort—whether that effort comes from a person or a machine.
Reference: Zheng Wansheng (Harbin Medical University), Technology and Law, 2021. DOI: 10.13510/j.cnki.jit.2021.05.028