The Future of AI: Balancing Innovation and Ethics

As artificial intelligence (AI) continues to reshape industries and redefine human-machine interactions, a recent academic gathering in Shanghai has brought critical attention to the societal implications of this rapidly evolving technology. The symposium titled “Social Needs in the Age of Artificial Intelligence,” hosted by Shanghai University in October 2020, convened scholars from across China to examine the transformative potential and inherent challenges of AI through interdisciplinary lenses. With contributions from experts in philosophy, ethics, education, labor economics, and technology policy, the discussions underscored a growing consensus: while AI presents unprecedented opportunities for human advancement, its trajectory must be guided by thoughtful institutional frameworks that prioritize human well-being, equity, and long-term sustainability.

The event, organized by the School of Marxism and the Department of Philosophy at Shanghai University, along with major national research projects funded by the Ministry of Education and the National Social Science Foundation, highlighted how AI is no longer merely a technical domain but a profound social phenomenon. Scholars emphasized that the integration of AI into daily life—from autonomous vehicles to algorithmic content delivery—demands a reevaluation of traditional norms in ethics, employment, education, and governance. Rather than treating AI as a neutral tool, participants argued for a more nuanced understanding of its role as an active agent within complex socio-technical systems.

One of the central themes explored during the conference was the philosophical question of AI’s status in relation to human beings. Can machines ever attain subjectivity? Should they be granted rights or moral consideration? Sheng Ning, a researcher at Shanghai University, approached these questions from a Confucian perspective, arguing that current forms of algorithmic AI remain extensions of human agency rather than autonomous entities. According to Sheng, AI lacks intrinsic moral character, self-constitution, and value generation—qualities essential to full subjecthood in classical philosophical traditions. This view aligns with broader skepticism about claims of machine consciousness, particularly given that AI operates through pattern recognition rather than genuine understanding.

Dai Yibin, also from Shanghai University, reinforced this position by analyzing natural language processing in AI systems. Drawing on the philosophical works of Donald Davidson and Charles Taylor, Dai demonstrated that while AI can process syntax and statistical correlations in language, it fails to grasp meaning, intentionality, or contextual nuance. Language, he argued, is not merely a functional instrument but a constitutive element of human identity and intersubjective experience. Because AI cannot engage with language in this existential way, it cannot become a true participant in human discourse or moral reasoning. These insights challenge narratives that envision AI eventually surpassing or replacing humans in creative and ethical domains.

However, not all perspectives were so restrictive. Zhou Liyun, a professor at Shanghai University, pointed out that the boundary between human and machine is becoming increasingly blurred due to technological convergence. As AI integrates with biometrics, neural interfaces, and wearable devices, the distinction between subject and object dissolves, giving rise to what she calls a “multi-agent society.” In such a context, she advocates for a balanced approach—neither overly optimistic nor alarmist—toward the development of emotionally and cognitively advanced AI. Rather than pursuing artificial general intelligence (AGI) without restraint, Zhou calls for a “community of shared future for humanity” framework, where human-machine coexistence is governed by mutual respect, transparency, and collective responsibility.

This philosophical grounding sets the stage for practical concerns about labor and employment, another major focus of the symposium. As automation accelerates across manufacturing, logistics, and service sectors, fears of widespread job displacement have intensified. Han Dongping from Huazhong University of Science and Technology presented a stark assessment: the unemployment caused by AI is not temporary or cyclical but irreversible and absolute. Unlike previous industrial revolutions, where mechanization displaced certain jobs but created new ones in emerging industries, AI’s capacity to learn, adapt, and perform cognitive tasks threatens to eliminate entire categories of work without generating proportional replacements.

Yet Han does not view this development as inherently negative. He interprets mass technological unemployment as a sign that humanity is approaching what Karl Marx termed the “realm of freedom”—a condition where survival no longer depends on compulsory labor. In this vision, AI liberates individuals from alienated work, allowing them to pursue self-fulfillment, creativity, and leisure. However, Han cautions that this transition must be managed carefully through institutional mechanisms to prevent social unrest. Universal basic income (UBI), restructured education systems, and public ownership of AI infrastructure are among the policy tools he suggests to ensure equitable distribution of AI-generated wealth.

Wang Shuixing from Jiangxi Normal University offered a complementary perspective, introducing the concept of “soft work” as a hallmark of the AI era. Soft work refers to labor that minimizes instrumental and utilitarian motives, instead emphasizing personal growth, intrinsic motivation, and meaningful contribution. As AI takes over routine and repetitive tasks, human labor can shift toward roles that require empathy, judgment, and ethical discernment—qualities difficult to automate. Wang sees this transformation as a step toward overcoming Marx’s notion of alienated labor, where workers are estranged from the products and processes of their labor. In this sense, AI could catalyze a historical shift from capitalist modes of production to more humane and socially integrated forms of economic organization.

Nonetheless, the transition is not without risks. Li Yang from Chongqing University warns that while AI may reduce homogenized labor, it does not inherently dismantle the structures of private ownership and capital accumulation that underpin alienation. Without deliberate policy intervention, AI could deepen existing inequalities by concentrating power and profits in the hands of a few tech corporations. Moreover, new forms of labor exploitation may emerge, such as the invisible digital labor involved in training AI models or moderating online content. Therefore, Li argues, the state must play an active role in guiding AI development to serve public interest and advance the goal of fully eliminating labor alienation.

Ethical considerations emerged as perhaps the most urgent area of inquiry. As AI systems make decisions affecting human lives—from medical diagnoses to criminal sentencing—the need for robust ethical frameworks becomes paramount. Wang Tian’en, a leading scholar at Shanghai University, introduced the concept of “intelligence ethics” to describe a new domain of moral reasoning that extends beyond human-to-human relations to include human-AI and AI-AI interactions. Traditional ethical theories, rooted in carbon-based life and human consciousness, are insufficient for addressing dilemmas posed by silicon-based intelligence. Intelligence ethics, Wang proposes, should be information-based, holistic, and capable of integrating rules with dynamic learning processes.

A key challenge lies in aligning AI behavior with desirable evolutionary trajectories. Unlike static rule-based systems, modern AI learns from data and adapts over time, raising concerns about unintended consequences and value drift. For instance, an AI system optimized for engagement might promote emotionally charged or misleading content, undermining democratic discourse. Therefore, Wang stresses the need for AI to be guided by normative principles that ensure its development remains aligned with human flourishing.
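The engagement example can be made concrete with a small sketch. The item data, scores, and the 0.6 penalty weight below are all invented for illustration; the point is only that the same ranking machinery produces different content depending on whether the objective encodes a normative constraint:

```python
# Toy illustration: ranking purely by predicted engagement tends to
# surface sensational items, because outrage correlates with clicks.
# All item data below is invented for illustration.

items = [
    {"title": "Calm policy explainer",  "engagement": 0.30, "sensational": False},
    {"title": "Balanced news report",   "engagement": 0.35, "sensational": False},
    {"title": "Outrage-bait headline",  "engagement": 0.90, "sensational": True},
    {"title": "Misleading rumor",       "engagement": 0.85, "sensational": True},
    {"title": "In-depth investigation", "engagement": 0.40, "sensational": False},
]

def rank(items, objective):
    return sorted(items, key=objective, reverse=True)

# Pure engagement objective: maximize expected clicks.
by_engagement = rank(items, lambda it: it["engagement"])

# A normatively constrained objective: penalize sensational content.
by_constrained = rank(items, lambda it: it["engagement"] - 0.6 * it["sensational"])

top2_eng = [it["title"] for it in by_engagement[:2]]
top2_con = [it["title"] for it in by_constrained[:2]]
print("engagement-only top 2:", top2_eng)
print("constrained top 2:    ", top2_con)
```

Under the engagement-only objective the two sensational items dominate; under the constrained objective they drop out. The “alignment” question is precisely who decides the penalty term and on what grounds.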

Chen Hai, also from Shanghai University, examined the problem of algorithmic bias through the lens of moral philosophy. He distinguishes between “God algorithms”—ideal, impartial decision-making systems—and “non-God algorithms,” which reflect real-world biases embedded in training data. His analysis reveals a fundamental paradox: even if a higher-order algorithm is designed to be fair, when it processes historical data shaped by discrimination, it produces biased outcomes. This implies that ethical “ought” conclusions cannot be derived solely from factual “is” premises without incorporating normative judgments. Thus, fairness in AI requires not just technical fixes but deliberate ethical input at every stage of design and deployment.
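Chen’s paradox can be demonstrated in a few lines. The groups, scores, and penalty below are fabricated for illustration: two groups have identical underlying ability, but past gatekeepers systematically under-scored group B, and a procedurally impartial rule admits anyone whose recorded score clears one uniform threshold:

```python
# Toy sketch of the bias paradox: a rule that is identical for both
# groups still reproduces discrimination baked into the historical
# record. All numbers are invented for illustration.

true_ability = {"A": [70, 75, 80, 85], "B": [70, 75, 80, 85]}  # identical by construction
historical_penalty = {"A": 0, "B": 10}  # discrimination encoded in past scoring

def admit_rate(group, threshold=72):
    # The "fair" algorithm: one threshold, applied uniformly.
    scores = [a - historical_penalty[group] for a in true_ability[group]]
    return sum(s >= threshold for s in scores) / len(scores)

print("group A admit rate:", admit_rate("A"))  # the rule is the same for both groups
print("group B admit rate:", admit_rate("B"))  # yet the outcomes diverge sharply
```

The algorithm contains no group variable at all, yet group A is admitted at triple the rate of group B. Correcting this requires an explicitly normative choice (reweighting, auditing the records, group-aware thresholds), which is exactly the “ought” that cannot be read off the data.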

Autonomous vehicles provided a concrete case study for ethical decision-making under uncertainty. Guo Liang from Zhejiang University analyzed the famous “trolley problem” in the context of self-driving cars. When faced with unavoidable harm, should an AI prioritize passenger safety, pedestrian protection, or minimize overall damage? Existing ethical algorithms—utilitarian, deontological, or virtue-based—each lead to problematic outcomes. For example, a “moral dial” allowing users to customize ethical settings could incentivize selfish choices, while strict adherence to traffic rules might place law-abiding drivers at greater risk.

To circumvent these dilemmas, Guo proposes a “neither-left-nor-right” braking dynamics algorithm that avoids explicit moral trade-offs by focusing on technical parameters such as deceleration rate and collision probability. By reframing the problem as one of risk minimization rather than moral choice, this approach sidesteps the need for AI to make life-and-death judgments. While not a complete solution, it illustrates how engineering innovation can complement ethical reasoning in high-stakes applications.
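A minimal sketch of the reported idea follows: the vehicle stays in its lane and minimizes kinetic risk through braking alone, rather than choosing between potential victims. The parameter names, the 8 m/s² limit, and the scenarios are illustrative assumptions, not details of Guo’s actual algorithm:

```python
import math

# Risk-minimization braking: never swerve, just shed as much kinetic
# energy as physics allows. Numbers below are illustrative only.

MAX_DECEL = 8.0  # m/s^2, roughly the friction limit on dry asphalt

def brake_outcome(speed, distance, max_decel=MAX_DECEL):
    """Return (commanded deceleration in m/s^2, residual impact speed in m/s)."""
    # Deceleration needed to stop exactly at the obstacle: v^2 / (2d)
    needed = speed ** 2 / (2 * distance)
    if needed <= max_decel:
        return needed, 0.0  # full stop is possible; no collision
    # Otherwise brake at the physical limit; impact energy is reduced
    # but not eliminated: v_impact = sqrt(v^2 - 2*a*d)
    v_impact = math.sqrt(speed ** 2 - 2 * max_decel * distance)
    return max_decel, v_impact

decel, v_impact = brake_outcome(speed=20.0, distance=30.0)  # stoppable case
print(f"decel={decel:.2f} m/s^2, impact speed={v_impact:.2f} m/s")

decel, v_impact = brake_outcome(speed=20.0, distance=15.0)  # unavoidable contact
print(f"decel={decel:.2f} m/s^2, impact speed={v_impact:.2f} m/s")
```

Note what the function never asks: who or what is in the path. The moral trade-off is dissolved into a physical optimization, which is the sense in which the approach sidesteps, rather than solves, the trolley problem.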

Privacy emerged as another critical concern, especially as AI systems increasingly rely on vast datasets for training and personalization. Bao Jianzhu from Shanghai University argued that privacy should not be seen merely as an individual right but as a relational and contractual issue rooted in the distinction between public and private spheres. In the age of AI, he calls for a shift from static legal contracts to dynamic, socially negotiated agreements that adapt to changing technological conditions. This includes moving from individual consent models to collective governance frameworks, where communities have a say in how their data is used.

Focusing on value conflicts, Zhao Baojun from Shaanxi University of Science and Technology highlighted the tension between competing values within and across social groups. For example, the pursuit of efficiency in AI-driven services may conflict with principles of equity and inclusion. Similarly, personalized content recommendation may enhance user satisfaction but erode shared public understanding. Resolving these conflicts requires more than technical optimization; it demands deliberative processes that foster value pluralism and democratic consensus.

In the realm of education, AI is already transforming teaching and learning paradigms. Yu Tianfang from Yangzhou University noted that AI tools are being used for adaptive learning, automated assessment, and personalized tutoring. However, moral education poses unique challenges because it involves cultivating emotional dispositions, ethical motivations, and social norms—dimensions that resist algorithmic quantification. To address this, Yu envisions AI-powered simulations of moral dilemmas that provide immersive, interactive experiences, helping students develop empathy and ethical reasoning through experiential learning.

Liu Shuwen, a graduate student at Shanghai Normal University, emphasized the potential of AI to revitalize ideological and political education in Chinese universities. By leveraging big data and machine learning, educators can tailor content to students’ interests and learning styles, increasing engagement and effectiveness. However, she warns against over-reliance on automation, stressing that human educators remain essential for fostering critical thinking and ideological depth.

Nie Zhi from Hunan University of Technology and Business explored how AI-driven information dissemination affects the spread of socialist values. Algorithmic recommendation systems can enhance the precision and reach of ideological messaging, but they also risk creating echo chambers and filter bubbles that limit exposure to diverse perspectives. To counteract this, Nie advocates for cultivating high-quality digital content ecosystems, embedding ethical values into commercial platforms, and improving media literacy among the public.

From a technological standpoint, Zhang Xueyi from Southeast University discussed brain-computer interface (BCI) technologies and their philosophical implications. BCIs enable direct communication between brains and machines, blurring the boundaries between biological and artificial cognition. Applying Actor-Network Theory, Zhang suggests that such hybrid systems necessitate a non-anthropocentric ontology, epistemology, and axiology—one that recognizes the agency of both humans and non-humans in knowledge production and value formation.

Wang Chao, a professor at Shanghai University and council member of the Chinese Association for Artificial Intelligence, shared insights from his work on “city brain” systems—AI platforms that optimize urban management in transportation, healthcare, and public safety. These systems demonstrate AI’s potential to enhance efficiency and resilience in complex urban environments. However, Wang stresses that such advancements require interdisciplinary collaboration, urging universities to break down silos between technical and social sciences in AI education and research.

Looking ahead, the symposium concluded that the future of AI hinges not only on technological breakthroughs but on the development of appropriate institutional arrangements. As Dai Yibin and Yu Mingyan summarize in their synthesis of the event, AI must be guided by principles that ensure it serves humanity’s highest aspirations. This includes establishing regulatory frameworks that anticipate risks, promote accountability, and uphold fundamental rights. At the same time, existing institutions—from labor laws to educational curricula—must evolve to accommodate the realities of an AI-augmented world.

Crucially, the authors emphasize that institutional design must be informed by deep technical understanding and ethical reflection. Just as international bans on human cloning stem from informed scientific and moral consensus, so too must AI governance be grounded in rigorous interdisciplinary inquiry. Moreover, the ultimate goal of AI development must remain the enhancement of human well-being. Regardless of technological progress, AI should never act against human interests or compromise core values such as dignity, autonomy, and justice.

In conclusion, the Shanghai symposium reflects a maturing discourse on AI—one that moves beyond hype and fear to engage with the complex interplay between technology and society. As AI becomes embedded in the fabric of everyday life, its trajectory will be shaped not only by engineers and entrepreneurs but by philosophers, educators, policymakers, and citizens. The path forward requires a balanced, inclusive, and forward-thinking approach—one that embraces innovation while safeguarding the ethical foundations of human coexistence.

Dai Yibin and Yu Mingyan, Shanghai University. Yuejiang Academic Journal. DOI: 10.14163/j.cnki.11-5547/r.2021.03.021