AI Inventions Challenge Patent Law Frameworks
As artificial intelligence (AI) evolves from a tool into an autonomous creator, the global patent system faces a transformative moment. No longer confined to science fiction, AI technologies now underpin advancements in facial recognition, autonomous vehicles, drug discovery, and agricultural automation. These developments, once the domain of human ingenuity, are increasingly being driven by machine learning algorithms and neural networks capable of independent innovation. This shift raises profound legal questions: Who owns the rights to an invention created by AI? Can a machine be listed as an inventor? And how should patent laws adapt to ensure innovation is both protected and incentivized?
A comprehensive study by Li Min, a doctoral candidate at the Law School of Jilin University, explores these pressing issues in the December 2021 issue of Innovation Science and Technology. Published under the DOI 10.19345/j.cxkj.1671-0037.2021.12.008, the paper titled “Research on the Frontier Issues of Artificial Intelligence Derived Inventions and Patent Law” offers a detailed analysis of the legal, technical, and economic implications of AI-generated inventions. The work stands out for its rigorous examination of international patent frameworks, its forward-looking proposals for reform, and its grounding in both technological reality and legal theory.
The central argument of Li Min’s research is that current patent laws, designed with human inventors in mind, are ill-equipped to handle the rise of AI as a creative agent. While AI has not yet achieved consciousness or self-awareness, its ability to process vast datasets, identify patterns, and generate novel solutions means it can now contribute substantively—or even solely—to the invention process. This capability disrupts traditional notions of inventorship, which are legally restricted to natural persons. In most jurisdictions, only humans can be recognized as inventors, and the right to be named as an inventor is a personal right that cannot be transferred. This creates a paradox: if an AI generates a patentable invention, there is no legally valid way to acknowledge its role, potentially rendering the invention unpatentable or leading to inaccurate attribution.
Li Min categorizes AI involvement in invention into three distinct phases. The first is AI as a tool, where the machine performs computational tasks under human direction, such as filtering chemical compounds in drug discovery. In this scenario, AI functions like a high-speed calculator—valuable but not inventive. The second phase involves collaboration between humans and AI, where the algorithm contributes novel insights based on learned data. For example, an AI might suggest a previously unconsidered molecular structure for a new pharmaceutical compound. Here, the line between human and machine contribution blurs, raising questions about co-inventorship. The third and most disruptive phase is fully autonomous invention, where AI operates independently, using self-improving models to generate new knowledge without human intervention. Systems like AlphaGo Zero, which mastered the game of Go through self-play, exemplify this level of autonomy.
The implications for patent law are significant. If an AI independently discovers a new material or process, who should hold the patent? Current legal frameworks offer no clear answer. In many countries, including the United States and members of the European Patent Office, patent applications must list human inventors. Attempts to list AI systems as inventors—such as those made by Stephen Thaler with his DABUS system—have been rejected on the grounds that only natural persons can be inventors. This legal rigidity forces applicants into a dilemma: either misrepresent human involvement, risking patent invalidation, or forgo patent protection altogether, leaving innovations vulnerable to imitation.
Li Min argues that this status quo undermines the very purpose of patent law—to promote innovation by granting temporary monopolies in exchange for public disclosure. If AI-generated inventions cannot be patented, developers may resort to trade secrets, which do not require disclosure and thus hinder the spread of knowledge. This outcome would slow technological progress and contradict the open innovation model that has driven much of the digital revolution.
To address this, Li Min proposes a nuanced approach that distinguishes between inventorship and ownership. While the concept of a machine as a legal person remains controversial, she suggests recognizing AI as a “deemed natural person” for the limited purpose of inventorship. This would allow accurate attribution without requiring full legal personhood. A human or corporate entity could then act as a legal representative, managing the AI’s intellectual property rights. This model preserves the integrity of the patent record while enabling practical enforcement of rights.
Alternatively, Li Min explores the possibility of assigning patent rights to the human actors involved in the AI’s development and deployment. She identifies five key stakeholders: the AI program designer, the data provider, the hardware manufacturer, the AI owner, and the end user. Each plays a role in enabling AI-driven invention, but their contributions vary in significance. The program designer creates the algorithm’s architecture; the data provider supplies the training inputs; the manufacturer integrates the software into physical systems; the owner maintains and operates the AI; and the end user initiates the inventive process by posing problems or setting parameters.
Among these, Li Min concludes that the end user—the individual or organization that employs the AI to solve a specific problem—is the most appropriate default patent holder. This recommendation aligns with the economic incentives behind patent law. By granting rights to the user, the system encourages investment in AI applications and rewards those who derive practical value from the technology. It also avoids disincentivizing innovation among software developers, who may otherwise restrict access to powerful AI tools if they fear losing control over downstream inventions.
This user-centric model reflects broader trends in technology law, where the focus has shifted from creators to deployers. Just as cloud computing and platform economies have redefined ownership and liability, AI demands a rethinking of intellectual property norms. The traditional model, where the inventor is also the owner, no longer fits a world where creativity emerges from complex socio-technical systems involving multiple actors across global supply chains.
The paper also examines how AI affects the substantive criteria for patentability: novelty, non-obviousness (or inventive step), and utility. Novelty requires that an invention be new, not previously disclosed in the prior art. With AI capable of generating millions of design variations, the volume of prior art could explode, making it harder for any invention to meet the novelty threshold. Li Min warns that without careful management, this could lead to a “prior art overload,” where patent offices are overwhelmed by machine-generated disclosures, and inventors face insurmountable hurdles in proving originality.
To mitigate this risk, she proposes a tiered approach to prior art. AI-generated disclosures should be treated the same as human-generated ones, but only if they are publicly accessible and meaningfully documented. Random or nonsensical outputs should not count as prior art. Moreover, the assessment of novelty should consider the context of creation—whether the invention was produced by an AI or by a human—only when necessary to avoid unfair disadvantages.
Non-obviousness, or the requirement that an invention represent a significant advance over existing knowledge, presents another challenge. Patent law typically assesses this by asking whether the invention would have been obvious to a “person having ordinary skill in the art” (PHOSITA). But what happens when AI, not a human, is the inventor? Should the standard be based on human capabilities, AI capabilities, or a hybrid benchmark?
Li Min suggests a dynamic standard that evolves with technological adoption. In fields where AI is not yet widely used, the PHOSITA standard should remain human-centered. However, in domains where AI is commonplace—such as bioinformatics or materials science—the standard should reflect the typical use of AI tools. This ensures that patents are granted for truly innovative work, not just routine applications of automated systems. Over time, as AI becomes ubiquitous, the baseline level of skill in all technical fields may rise, reflecting the enhanced problem-solving capacity now available to practitioners.
Utility, the third patentability criterion, is less contentious. Most AI-generated inventions are designed to solve practical problems, whether optimizing energy consumption, diagnosing diseases, or improving manufacturing efficiency. As long as the invention has a specific, credible, and substantial use, it satisfies the utility requirement. The real challenge lies not in proving utility but in ensuring that AI systems are trained on diverse, high-quality data to avoid biased or unsafe outcomes.
Internationally, approaches to AI and patents vary. The United States, under the 2019 Patent Eligibility Guidance, allows software and AI-related inventions if they integrate abstract ideas into practical applications. The U.S. Patent and Trademark Office has signaled openness to protecting AI innovations, provided they meet existing legal standards. In Europe, the European Patent Office (EPO) maintains a stricter stance, requiring that AI inventions produce a “technical effect” beyond mere data processing. Mathematical methods and algorithms are generally excluded unless applied to a concrete technical problem, such as medical imaging or industrial control systems.
Japan has taken a more pragmatic approach, updating its examination guidelines in 2018 to include AI-specific examples. The Japan Patent Office now accepts AI-related inventions if they demonstrate technical applicability and are supported by experimental data. China, too, has adapted its rules, clarifying in the 2017 revision of the Patent Examination Guidelines that computer programs are patentable if they produce a technical effect. This shift reflects a growing recognition that software and AI are integral to modern innovation.
Despite these efforts, a global consensus remains elusive. The lack of harmonization creates uncertainty for multinational companies and may lead to forum shopping, where applicants seek patents in jurisdictions with the most favorable rules. Li Min calls for international dialogue to develop common principles for AI and intellectual property. Such coordination could prevent fragmentation, reduce legal risks, and promote fair competition.
Beyond legal reform, Li Min emphasizes the need for ethical and policy considerations. Granting patents to AI-generated inventions could concentrate power in the hands of a few large tech firms that control the most advanced AI systems. Smaller entities and individual inventors might be marginalized, exacerbating existing inequalities in the innovation ecosystem. To prevent this, policymakers should consider measures such as compulsory licensing, patent pools, or open-access requirements for certain types of AI-generated knowledge.
Moreover, the environmental and social impacts of AI-driven innovation must be weighed. While AI can accelerate the development of green technologies and life-saving medicines, it can also enable surveillance, misinformation, and autonomous weapons. Patent systems should not blindly reward all AI-generated inventions but should incorporate safeguards to prevent harmful applications. This could involve ethical review panels, public interest exceptions, or restrictions on patenting certain categories of AI outputs.
In conclusion, Li Min’s research provides a timely and thorough examination of one of the most complex challenges facing intellectual property law in the 21st century. As AI transitions from assistant to innovator, the legal framework must evolve to reflect this new reality. Her proposals—ranging from redefining inventorship to adjusting patentability standards—offer a balanced and pragmatic path forward. By recognizing the contributions of AI while preserving human accountability and economic incentives, the patent system can continue to fulfill its mission of promoting progress in science and the useful arts.
The debate over AI and patents is not merely academic; it has real-world consequences for innovation, competition, and societal well-being. As governments and international organizations begin to grapple with these issues, Li Min’s work serves as an essential reference point. It underscores the importance of interdisciplinary collaboration—between technologists, lawyers, economists, and ethicists—in shaping a future where artificial intelligence enhances, rather than undermines, the human capacity for creation.
Li Min, Law School, Jilin University. Innovation Science and Technology, DOI: 10.19345/j.cxkj.1671-0037.2021.12.008