Artificial Intelligence Ethics in China: A Call for Multidimensional Oversight
As artificial intelligence (AI) reshapes industries and redefines human interaction with technology, its rapid advancement has outpaced the development of ethical and regulatory frameworks capable of managing its societal implications. In China, where AI innovation is a cornerstone of national strategy, the urgency for a robust, adaptive, and inclusive governance system has never been greater. A comprehensive study published in Engineering by Liu Lu, Yang Xiaolei, and Gao Wen from Peking University examines the pressing ethical challenges posed by AI and proposes a forward-looking regulatory model that balances innovation with accountability.
The research, titled "Artificial Intelligence Ethics Supervision in China: Demand Analysis and Countermeasures," arrives at a critical juncture in the evolution of AI policy. While technological progress continues to accelerate, the social, legal, and moral consequences of AI deployment are becoming increasingly evident. From algorithmic bias and data exploitation to autonomous systems making life-or-death decisions, the risks are no longer theoretical; they are operational. The authors argue that the current governance landscape, though evolving, remains fragmented and insufficient to address the full spectrum of ethical concerns.
At the heart of their analysis is the recognition that AI ethics cannot be treated as an afterthought. The integration of ethical considerations must begin at the earliest stages of research and development. This proactive approach is essential to prevent the embedding of biases, security vulnerabilities, and unintended consequences into AI systems before they are deployed at scale. The paper emphasizes that ethical oversight should not be a static checklist but a dynamic process embedded throughout the AI lifecycle—from design and training to deployment and post-market monitoring.
One of the most compelling aspects of the study is its multidimensional framework for AI governance. Rather than relying solely on legal mandates or technical standards, the authors advocate for a hybrid model that combines ethics, policy, and law into a cohesive regulatory ecosystem. This tripartite structure allows for flexibility in response to rapidly changing technologies while ensuring that core principles—such as fairness, transparency, and human dignity—are upheld.
The ethical dimension serves as the foundation. It provides the moral compass that guides decision-making in the absence of clear legal precedents. The authors stress the importance of establishing a shared value system among stakeholders, including developers, policymakers, and the public. This consensus is not merely aspirational; it is necessary to build public trust in AI technologies. Without trust, even the most advanced systems risk rejection or misuse.
Policy, in this framework, acts as the intermediary between abstract ethical principles and enforceable laws. It offers a mechanism for experimentation and adaptation, allowing regulators to test new approaches in controlled environments before scaling them up. The paper highlights the potential of regulatory sandboxes—controlled testing grounds where AI applications can be evaluated under relaxed rules—as a way to foster innovation while managing risk. Such mechanisms enable startups and research institutions to explore cutting-edge applications without fear of immediate legal repercussions, provided that risks are contained and monitored.
Law, meanwhile, serves as the final safeguard. It establishes clear boundaries and accountability mechanisms, particularly in high-stakes domains such as healthcare, transportation, and criminal justice. The authors caution against overly rigid legislation that could stifle innovation, but they also warn that the absence of legal clarity creates uncertainty and undermines public confidence. A balanced legal framework should provide guidance without constraining technological progress, ensuring that developers understand their responsibilities while retaining the freedom to innovate.
A key insight from the study is the need for staged regulatory expansion. As AI capabilities evolve—from narrow, task-specific systems to more general and autonomous forms—the scope of oversight must grow accordingly. The authors propose a three-phase model for ethical research and governance. The first phase focuses on technical ethics, addressing issues such as algorithmic transparency, data integrity, and system reliability. This is the foundational layer, where engineers and computer scientists play a central role in embedding ethical considerations into code and architecture.
The second phase extends to networked environments, where AI systems interact with users, platforms, and other digital entities. Here, the ethical challenges shift from individual system behavior to systemic effects—such as the amplification of misinformation, the erosion of privacy, and the manipulation of user behavior through personalized content feeds. The rise of deepfakes, synthetic media, and hyper-targeted advertising exemplifies how AI can distort reality and undermine social cohesion. Governance at this level requires collaboration between technologists, social scientists, and communication experts to understand and mitigate these emergent risks.
The third and most complex phase involves the broader societal implications of AI. This includes labor market disruptions, shifts in power dynamics between individuals and institutions, and the potential for autonomous weapons and surveillance systems to alter the nature of governance itself. At this stage, ethical oversight must transcend technical and organizational boundaries, engaging with philosophy, political theory, and international relations. The authors emphasize that AI is not just a technological phenomenon but a societal transformation—one that demands a correspondingly comprehensive response.
Central to the proposed governance model is the establishment of multi-tiered ethics oversight bodies. The National Science and Technology Ethics Committee, established in 2019, represents a significant step forward in institutionalizing ethical review at the national level. However, the authors argue that this body must be complemented by sector-specific and institutional ethics committees. Industry-led ethics boards, academic research ethics panels, and corporate AI governance units should all play a role in ensuring that ethical considerations are integrated across different contexts.
These committees must be diverse and inclusive, representing not only technical experts but also legal scholars, social scientists, civil society representatives, and members of the public. The legitimacy of ethical oversight depends on its perceived neutrality and representativeness. Past attempts by some international tech firms to establish AI ethics councils have faltered due to accusations of tokenism or lack of independence. To avoid similar pitfalls, the authors recommend transparent appointment processes, clear mandates, and mechanisms for public accountability.
Another critical component of effective AI governance is public engagement. The study stresses that ethical norms cannot be imposed from above; they must emerge from an inclusive dialogue involving all stakeholders. Creating open forums for discussion—where citizens, developers, and policymakers can exchange views on what constitutes acceptable AI use—is essential for building social consensus. This participatory approach helps ensure that regulations reflect societal values rather than the interests of a narrow elite.
The paper also addresses the global dimension of AI ethics. As AI technologies transcend national borders, unilateral regulatory efforts risk creating fragmentation and competitive disadvantages. China, as a major player in the global AI landscape, has both the responsibility and the opportunity to shape international norms. The authors call for active participation in multilateral dialogues and standard-setting initiatives, advocating for rules that promote innovation while safeguarding fundamental rights.
This international engagement is not merely about influence; it is about interoperability. In a world where AI systems interact across jurisdictions, harmonized ethical standards can reduce friction and enhance cooperation. For example, common definitions of algorithmic fairness, data portability, and liability frameworks can facilitate cross-border data flows and joint research initiatives. Without such alignment, the risk of regulatory arbitrage and technological balkanization increases.
The study further highlights several concrete areas where ethical oversight is urgently needed. One is the balance between machine autonomy and human agency. As AI systems assume greater autonomy, particularly in domains like self-driving vehicles and medical diagnostics, the question of control becomes paramount. Who is responsible when an autonomous system makes a decision that leads to harm? The authors point to incidents involving malfunctioning autopilot systems in aviation and automotive contexts as cautionary tales. These cases reveal the dangers of over-reliance on AI and the need for clear rules governing human-machine interaction.
Another pressing concern is the erosion of social trust. The proliferation of AI-generated content—ranging from deepfake videos to synthetic voices—has made it increasingly difficult to distinguish truth from fiction. This not only undermines individual credibility but also threatens democratic processes by enabling the spread of disinformation. The paper notes that such technologies have already been used to fabricate political speeches and impersonate public figures, creating confusion and polarization.
Smart home devices, while offering convenience, introduce new vulnerabilities. Many consumer-grade IoT devices lack robust security features, making them easy targets for hackers. Cases of unauthorized access to smart cameras and voice assistants have raised alarms about privacy and personal safety. The authors argue that manufacturers must be held to higher standards of cybersecurity, with mandatory certification processes and regular audits.
Data security remains a foundational challenge. The massive scale of data collection required to train AI models creates significant risks, especially when sensitive personal information is involved. The paper warns against the commodification of data, where user information is treated as a tradable asset rather than a protected right. The legacy of scandals like the “PRISM” revelations underscores the need for strong legal protections and independent oversight mechanisms.
Algorithmic bias is another area of intense scrutiny. Far from being neutral, algorithms often reflect and amplify the prejudices of their creators. Whether in hiring, lending, or law enforcement, biased AI systems can perpetuate systemic discrimination. The authors cite U.S. legal precedents where discriminatory algorithms have been subject to judicial review, suggesting that similar accountability mechanisms should be adopted globally.
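To make the idea of measurable bias concrete, the sketch below shows one common audit technique, a demographic parity check that compares selection rates across groups. The data, group labels, and choice of metric are illustrative assumptions for this article, not methods prescribed by the paper:

```python
# Hypothetical bias audit (illustrative only; not from the paper).
# Each record pairs a demographic group with a model's hiring decision.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rate(records, group):
    """Fraction of applicants in `group` who received a positive decision."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "group_a")
rate_b = selection_rate(decisions, "group_b")

# A demographic parity difference of 0 means equal selection rates; a large
# absolute value flags a disparity worth investigating, not proof of bias.
print(f"selection rate A: {rate_a:.2f}")              # 0.75
print(f"selection rate B: {rate_b:.2f}")              # 0.25
print(f"parity difference: {rate_a - rate_b:+.2f}")   # +0.50
```

A check like this is deliberately simple; real audits weigh several fairness definitions that can conflict with one another, which underscores the value of the common definitions of algorithmic fairness discussed earlier.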
The concentration of AI power in a few large corporations also raises antitrust and equity concerns. When a handful of firms control the infrastructure, data, and talent needed to develop advanced AI, it creates barriers to entry for smaller players and limits diversity in innovation. The paper calls for policies that promote open access to datasets, algorithmic transparency, and fair competition.
In the realm of intellectual property, the rise of AI-generated content challenges traditional notions of authorship and ownership. Can a machine be considered an inventor? Who holds the copyright to a novel written by an AI? These questions have legal, economic, and philosophical dimensions that require careful deliberation. The authors suggest that new legal categories may be needed to accommodate the unique characteristics of AI-created works.
Looking ahead, the study envisions a future where AI governance is not reactive but anticipatory. By leveraging foresight methodologies, scenario planning, and interdisciplinary research, policymakers can identify emerging risks before they materialize. This proactive stance is crucial in a domain where technological change often outpaces regulatory response.
Ultimately, the success of AI ethics supervision will depend on its ability to adapt. The authors caution against rigid, one-size-fits-all regulations that may become obsolete as technology evolves. Instead, they advocate for agile governance—flexible, iterative, and responsive to new evidence and stakeholder feedback. This approach aligns with the broader trend toward adaptive regulation in complex socio-technical systems.
The paper concludes with a call to action. As China positions itself as a global leader in AI, it has a unique opportunity to demonstrate that technological advancement and ethical responsibility can go hand in hand. By investing in multidimensional oversight, fostering public dialogue, and engaging in international cooperation, the country can set a precedent for responsible AI development worldwide.
The implications of this research extend far beyond China’s borders. In an interconnected world, the choices made by one nation reverberate across continents. How China navigates the ethical challenges of AI will influence global norms and shape the future of human-machine coexistence. The path forward is not without obstacles, but with thoughtful leadership and inclusive governance, it is possible to harness the benefits of AI while safeguarding the values that define society.
Liu Lu, Yang Xiaolei, Gao Wen (Peking University). "Artificial Intelligence Ethics Supervision in China: Demand Analysis and Countermeasures." Engineering. DOI: 10.15302/J-SSCAE-2021.03.006