AI Ethics: Toward a Global Consensus in the Age of Intelligent Machines

The rapid evolution of artificial intelligence (AI) is no longer confined to laboratories or tech conferences. It has become a transformative force reshaping economies, societies, and human identities. From autonomous systems in healthcare to algorithmic governance in judicial systems, AI’s reach is both profound and pervasive. Yet, as its capabilities expand, so too do the ethical, social, and political challenges it presents. In a landmark interdisciplinary discussion published in Yuejiang Academic Journal, leading scholars from institutions including the Chinese Academy of Social Sciences, Nanjing University, and Shanghai Normal University have called for a reevaluation of AI’s role—not just as a technological advancement, but as a civilizational challenge demanding urgent ethical reflection and global coordination.

The conversation, titled “Artificial Intelligence: Theoretical Interpretation and Practical Reflection,” brings together voices from philosophy, political economy, law, and media studies to confront the multifaceted implications of AI. At the heart of their discourse lies a central question: How can humanity ensure that intelligent systems serve human flourishing rather than undermine it?

The Ethical Imperative: Beyond Technological Determinism

One of the most pressing concerns raised in the forum is the growing gap between technological advancement and ethical governance. Duan Weiwen, a researcher at the Institute of Philosophy, Chinese Academy of Social Sciences, emphasizes that AI development cannot be understood in isolation from its cultural and philosophical underpinnings. He introduces the concept of “human-machine symbiosis,” arguing that AI and robotics have evolved alongside human imagination, particularly through science fiction. The 1920 play R.U.R. by Karel Čapek, which introduced the term “robot,” already embedded early anxieties about artificial life and labor exploitation. This historical continuity suggests that our current debates are not new, but rather an intensification of long-standing human concerns about creation, control, and autonomy.

Duan warns against what he calls “technological mysticism”—the tendency to treat AI as an autonomous force beyond human control. Instead, he advocates for a philosophical approach that re-examines the very nature of intelligence. Drawing on Marx’s notion of “technology as a mode of production,” Duan proposes a “production technology critique” framework. This perspective shifts focus from what AI produces to how it produces, examining AI not merely as a tool but as a new mode of intelligence production.

In this view, three primary systems have historically enabled intelligence: the biological neural network of the human brain, the symbolic system of written language, and now, the artificial neural networks of AI machines. Each represents a leap in cognitive extension. However, Duan cautions that while AI can simulate aspects of human cognition, it cannot fully replicate the ineffable dimensions of human meaning—what ancient Chinese philosophy described as “words fail to convey the fullness of meaning” (yan bu jin yi). This insight challenges the notion of AI achieving full human equivalence and underscores the need for humility in both design and deployment.

Labor in the Algorithmic Age: Redefining Value and Agency

The impact of AI on labor is perhaps the most immediate and tangible challenge. Liu Fangxi, a researcher at the Institute of Literature, Chinese Academy of Social Sciences, offers a critical analysis rooted in Marxist political economy. He argues that AI is not simply replacing jobs—it is reconfiguring the very definition of labor. In the digital economy, user activity on platforms like YouTube, Bilibili, or Twitter generates vast amounts of data, which in turn trains AI systems. This process transforms everyday behavior into a form of invisible labor—what scholars call “digital labor.”

Liu highlights a paradox: while traditional labor is being automated, new forms of labor are emerging under algorithmic control. Platform workers, content creators, and even casual users contribute value without formal compensation, their worth determined not by wages but by metrics such as likes, shares, and engagement rates. This shift erodes the boundaries between work and leisure, turning life itself into a data-generating enterprise.

The consequences are profound. As AI systems optimize for efficiency and profit, they often neglect human dignity and social equity. Liu cites the case of delivery workers trapped in algorithmic management systems that prioritize speed over safety, leading to burnout and accidents. The rigidity of these systems reflects what philosopher Bernard Stiegler calls the “principle of calculability”—the reduction of all human activity to quantifiable data points. In such a world, the unemployed worker is not merely jobless; they are rendered invisible by the algorithm.

To counter this, Liu calls for a revival of labor value theory, one that recognizes the social and ethical dimensions of work beyond mere productivity. He warns against “Luddism”—the outright rejection of technology—as ineffective and counterproductive. Instead, he advocates for systemic reforms in education, governance, and economic policy that prepare societies for an AI-driven future while safeguarding human agency.

Algorithmic Governance and the Erosion of Autonomy

As AI penetrates public institutions, questions of power and accountability come to the fore. Wu Jing, a professor at Nanjing Normal University, examines the growing influence of algorithms in shaping social norms and individual behavior. She recounts a common experience: downloading an app and accepting its terms of service unread. This act, she argues, symbolizes a broader surrender of autonomy to algorithmic systems.

Algorithms, she explains, operate by fragmenting complex human experiences into discrete, calculable units. This “modularization” enables precise control but also strips away context, nuance, and moral ambiguity. In judicial systems, AI tools like Shanghai’s 206 AI-assisted prosecution system offer benefits in efficiency and consistency. However, as Zheng Xi, a legal scholar, warns, the delegation of decision-making to machines risks outsourcing not just technical tasks but core judicial authority.

The danger lies in the opacity of algorithmic design. When private tech companies develop AI for public use, their values, often shaped by profit motives and market competition, can be embedded in the code. A developer prejudiced against LGBTQ+ communities, for instance, might encode that bias, even unintentionally, into an algorithm that disadvantages those communities in legal outcomes. This raises fundamental questions about transparency, fairness, and democratic oversight.

Wu proposes a series of safeguards: limited algorithmic disclosure, the use of expert witnesses in legal proceedings, and the recognition of data rights for individuals. These measures aim to restore balance between state power, corporate influence, and citizen autonomy. Without such checks, she warns, algorithmic governance could evolve into a form of digital authoritarianism, where freedom is constrained not by overt coercion but by invisible, automated systems.

Social Robots and the Boundaries of Intimacy

Beyond labor and governance, AI is entering the most intimate spheres of human life. Gao Shanbing, an associate professor at Nanjing Normal University, explores the rise of social robots—AI-driven agents designed for interaction in domains such as customer service, education, and companionship. These robots are not just tools; they are increasingly perceived as social actors.

Research shows that social robots can influence public opinion, as seen in their use during U.S. and French elections to spread misinformation. But their impact extends beyond politics. The prospect of sex robots, for example, has sparked intense ethical debate. David Levy, a futurist, predicts that by 2050, human-robot marriages will be legally recognized. He sees this as the next frontier of “marriage equality.” However, Kathleen Richardson, an ethicist at De Montfort University, counters that such robots risk normalizing the objectification of women and children, potentially fostering abusive behaviors.

Zhang Aijun, a professor at Northwest University of Political Science and Law, expresses deep concern about the invasion of privacy. During the pandemic, facial recognition systems required people to remove masks in public spaces, making biometric data collection a routine practice. Zhang fears that social robots equipped with cameras and sensors could become instruments of constant surveillance, eroding personal dignity.

He calls for a clear boundary: China should compete with the U.S. in high-tech AI development, but not allow social robots to infiltrate private life. This reflects a broader tension between innovation and protection—a tension that requires careful policy navigation.

Toward a Global Ethical Framework

Given the transnational nature of AI, national regulations alone are insufficient. Yang Tongjin, a scholar at the Chinese Academy of Social Sciences, analyzes the European Union’s efforts to establish a global ethical standard. In 2019, the EU released its Ethics Guidelines for Trustworthy AI, built on four principles: respect for human autonomy, prevention of harm, fairness, and explicability. These guidelines have influenced AI policies worldwide, demonstrating that international consensus is possible.

Yang identifies six strategies for strengthening global AI ethics:

  1. Categorization and Gradual Progress: Different AI applications—medical, industrial, military—require tailored ethical frameworks. Start with low-hanging fruit like commercial AI before tackling contentious areas like autonomous weapons.
  2. Scientist Engagement: Researchers must take ethical responsibility, not just technical innovation.
  3. Corporate Leadership: Multinational companies should lead by example, adopting ethical standards voluntarily.
  4. Government Support: States must provide regulatory frameworks and incentives.
  5. Institutional Innovation: New global institutions may be needed to oversee AI governance.
  6. Cosmopolitan Vision: Overcoming narrow nationalism is essential for building a shared future.

These strategies are echoed in China’s Prospect of Robot Ethics Standardization, which proposes a “symbiotic design” model emphasizing pluralism, nature, justice, and well-being. Unlike Western frameworks that prioritize individual autonomy, the Chinese approach integrates ecological and collective values, offering a broader ethical horizon.

The Limits of Intelligence and the Future of Humanity

Despite AI’s advances, many scholars remain skeptical of claims about superintelligence or the “singularity.” Wu Guanjun, a professor at East China Normal University, stresses that current AI operates on a fundamentally different logic than human cognition. It excels at pattern recognition and data processing but lacks genuine understanding, intentionality, or moral reasoning.

He draws a distinction between “weak AI”—narrow, task-specific systems—and “strong AI”—hypothetical machines with general human-like intelligence. The former is already here; the latter remains speculative. The real danger, Wu argues, is not that machines will surpass humans, but that humans will abdicate their moral responsibility by deferring to machines.

This view is shared by Han Dongping, who asserts that intelligent robots will never dominate humanity. Instead, the challenge is to ensure that AI remains a tool for human empowerment. The goal should not be to create machines that replace us, but to build systems that enhance our capacity for empathy, creativity, and ethical judgment.

Conclusion: Reimagining the Human in the Age of Machines

The discussions in Yuejiang Academic Journal converge on a powerful insight: AI is not just a technological revolution, but a mirror held up to humanity. It forces us to confront fundamental questions—What is intelligence? What is labor? What is a good life?

As AI continues to evolve, the answers to these questions will shape the future of civilization. The path forward requires more than technical fixes; it demands a renaissance of humanistic thought, interdisciplinary dialogue, and global cooperation. Only by integrating ethical reflection into the very fabric of AI development can we ensure that the age of intelligent machines becomes an era of human flourishing.

The journey is just beginning. But as these scholars remind us, the future is not predetermined. It is a collective project—one that must be guided not by algorithms alone, but by wisdom, compassion, and a shared commitment to the common good.

Duan Weiwen, Institute of Philosophy, Chinese Academy of Social Sciences; Wu Guanjun, School of Philosophy, East China Normal University; Liu Fangxi, Institute of Literature, Chinese Academy of Social Sciences; Gao Shanbing, School of Journalism and Communication, Nanjing Normal University; Zhang Aijun, School of Journalism and Communication, Northwest University of Political Science and Law; Wu Jing, School of Public Administration, Nanjing Normal University; Yang Tongjin, Institute of Philosophy, Chinese Academy of Social Sciences; Zheng Xi, School of Law, China University of Political Science and Law; Han Dongping, School of Philosophy, Huazhong University of Science and Technology; Lan Jiang, Department of Philosophy, Nanjing University; Cui Zhongliang, School of Marxism, Nanjing University of Information Science and Technology; Zhao Tao, School of Marxism, Nanjing University of Information Science and Technology. Yuejiang Academic Journal, 2021, 4: 19–70. DOI: 10.16747/j.cnki.yjxk.2021.04.003