Artificial Intelligence and the Future of Human Value
In the evolving landscape of modern technology, few developments have captured public imagination and academic scrutiny as intensely as artificial intelligence (AI). Once a niche domain confined to laboratories and academic circles, AI has surged into the global spotlight, transforming not only industries but also the very foundations of human society. Mature technologies tend to recede into invisible infrastructure, as electricity did; AI, by contrast, refuses to fade into the background. It remains highly visible precisely because it challenges the core of human self-understanding.
For decades, AI operated on the margins of technological innovation. From its inception in the 1950s through much of the late 20th century, it was a field of promise rather than practicality. Breakthroughs were incremental, and real-world applications were limited. Yet today, AI is everywhere: in search algorithms, autonomous vehicles, medical diagnostics, financial modeling, and even creative domains such as music and visual art. The reason for its heightened visibility, however, extends beyond functionality. As Wu Guanjun, a professor in the Department of Political Science at East China Normal University, argues in a recent essay published in Yuejiang Academic Journal, AI forces humanity to confront fundamental questions about intelligence, agency, and value.
The term Homo sapiens, meaning “wise man,” has long served as the cornerstone of human identity. It signifies more than biological classification—it embodies a philosophical assertion that humans are uniquely endowed with reason, creativity, and moral agency. This self-conception underpins modern civilization, from legal systems to educational institutions, from political theories to artistic expression. But when machines defeat world champions in complex games like Go, or generate coherent, contextually appropriate text, they disrupt this anthropocentric worldview.
Wu emphasizes that while AI may mimic certain aspects of human cognition, its underlying mechanisms are fundamentally different. Contemporary AI, particularly systems based on artificial neural networks and machine learning, operates through pattern recognition and statistical inference rather than conscious reasoning. These systems do not “understand” in the human sense; they optimize outcomes based on vast datasets. Yet, despite this ontological difference, their performance in specific domains rivals—and sometimes surpasses—human capability.
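The point that statistical inference can mimic competence without understanding is easy to demonstrate. A nearest-neighbour classifier, one of the simplest pattern-recognition methods, "recognizes" purely by measuring distance in feature space; the data and labels below are invented for illustration:

```python
import math

def nearest_neighbor(train, query):
    """Classify by copying the label of the closest training point.
    Nothing is 'understood': the prediction is pure geometry over
    whatever numbers the features happen to encode."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    label, _ = min(((lbl, dist(vec, query)) for vec, lbl in train),
                   key=lambda t: t[1])
    return label

# Toy data: two clusters with made-up labels.
train = [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
         ((5.0, 5.0), "dog"), ((5.2, 4.9), "dog")]
print(nearest_neighbor(train, (0.2, 0.1)))  # "cat": the nearest cluster wins
```

The classifier behaves competently on this toy task, yet the word "cat" carries no meaning for it; large neural systems differ in scale and sophistication, not in this basic respect.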
This divergence between technical reality and public perception is critical. Media narratives often anthropomorphize AI, attributing intentionality and autonomy where none exists. A chess-playing algorithm does not “desire” victory; it calculates probabilities and selects moves that maximize winning chances. Nevertheless, the symbolic impact of AI defeating human experts is profound. It signals a shift in the distribution of cognitive labor and, by extension, social power.
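Stripped of anthropomorphic language, "selects moves that maximize winning chances" is simply an argmax over estimated probabilities. A minimal sketch, with an invented stand-in evaluator (no real engine scores moves from a hash; a real one would use search plus a learned value function):

```python
def estimated_win_probability(position, move):
    """Stand-in evaluator for illustration only: returns a score in
    [0, 1) derived deterministically (within one run) from the inputs."""
    return (hash((position, move)) % 1000) / 1000.0

def choose_move(position, legal_moves):
    """The machine does not 'desire' victory; it merely returns the
    move whose estimated win probability is highest."""
    return max(legal_moves,
               key=lambda m: estimated_win_probability(position, m))

moves = ["e4", "d4", "Nf3", "c4"]
best = choose_move("start", moves)
print(best in moves)  # True: some legal move, chosen purely by score
```

Everything an observer might describe as "wanting to win" lives in that one `max` call over numbers.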
What makes AI distinct from previous technological revolutions is its direct engagement with human intelligence. Electricity extended physical capabilities—lighting cities, powering machines—but left cognitive functions untouched. The printing press democratized knowledge but did not replicate thought. AI, by contrast, encroaches upon domains previously considered exclusively human: decision-making, prediction, even creativity. As such, it does not merely augment human ability; it redefines what it means to be valuable in a society organized around labor and contribution.
Historically, the rise of industrial capitalism transformed human value. In pre-modern societies, worth was tied to skill, lineage, or land ownership. The artisan, the warrior, the noble—all derived status from specialized roles. But with the advent of mass production, the value of individual labor shifted. As Wu notes, in the early 20th century, a factory worker needed no special talent—only the ability to perform repetitive tasks. This standardization enabled the expansion of labor markets and, crucially, the political recognition of universal rights.
The ideals of early modern and Enlightenment thinkers such as Hobbes, Locke, and Rousseau posited that individuals possess intrinsic worth. Kant’s dictum that humans should be treated as ends in themselves, not mere means, became a moral foundation for modern democracies. Yet, as Wu observes, these philosophical principles took centuries to materialize in legal and social practice. It was only when industrial economies came to depend on the labor of ordinary workers en masse that the idea of universal human rights gained traction.
Now, AI threatens to reverse this trajectory. Automation is rapidly displacing routine jobs across sectors—manufacturing, transportation, customer service, data entry. Machine learning models can analyze medical images, draft legal documents, and manage supply chains with increasing accuracy. As these systems become more capable, the economic rationale for human involvement in many tasks diminishes. Energy efficiency alone tilts the balance: AI systems run on electricity, which can be sourced renewably, whereas human workers require food, rest, healthcare, and wages.
Yuval Noah Harari’s concept of the “useless class” looms large in this context. If large segments of the population no longer contribute economically, how will they be integrated into the social fabric? Will they retain rights, dignity, and political voice? Or will they be relegated to the margins, their existence tolerated but not valued?
Wu introduces the concept of Homo sacer, a figure from archaic Roman law revived by the Italian philosopher Giorgio Agamben: a person stripped of legal and political protection, who could be killed with impunity, without it constituting murder, yet could not even be ritually sacrificed. Reduced to what Agamben calls "bare life," such figures, like slaves in ancient societies, were instrumentalized as tools rather than recognized as subjects with rights.
Drawing this parallel, Wu warns that as AI renders human labor redundant, we risk creating a new form of Homo sacer—not through explicit legal exclusion, but through economic obsolescence. When robots can perform all essential functions, the justification for universal rights weakens. If value is derived from contribution, and contribution is measured in productivity, then those who cannot compete with machines may lose their claim to full membership in the political community.
This is not a dystopian fantasy but a plausible trajectory grounded in current trends. Consider the rise of platform economies and gig work, where algorithms manage labor with minimal human oversight. Workers are rated, ranked, and replaced based on performance metrics, often without recourse or appeal. There is little room for dignity, solidarity, or collective bargaining. The human becomes a node in a computational network, optimized for efficiency.
Moreover, the concentration of AI development in a handful of tech giants exacerbates inequality. Control over data, infrastructure, and algorithmic design rests with a small number of corporations and governments. This creates a new digital aristocracy, capable of shaping behavior, influencing elections, and determining access to resources. The power asymmetry between those who govern AI and those governed by it poses a fundamental challenge to democratic governance.
Zhang Aijun, also a contributor to the same issue of Yuejiang Academic Journal, explores this dimension through the lens of political communication. His analysis focuses on the 2020 U.S. presidential election, where social bots—automated accounts designed to mimic human users—played a significant role in shaping online discourse. Unlike traditional media, which operates under editorial standards, social bots can generate and amplify content at scale, often spreading misinformation, polarizing debates, and manipulating public opinion.
Prior to the proliferation of such technologies, democratic theory assumed a relatively stable public sphere. Citizens received information from identifiable sources, engaged in deliberation, and made choices based on reasoned judgment. Elections were seen as mechanisms for aggregating preferences and holding leaders accountable. But when artificial agents flood digital spaces with synthetic content, the integrity of this process erodes.
Zhang raises a profound question: Where does the “soul” of democracy reside in an age of algorithmic influence? Democracy is not merely a set of procedures—voting, representation, separation of powers. It is also a cultural and ethical commitment to equality, participation, and shared truth. When bots impersonate citizens, distort facts, and engineer outrage, they undermine trust, fragment consensus, and hollow out democratic legitimacy.
The problem is not just that misinformation spreads faster, but that the boundaries between human and machine agency blur. A tweet may appear to come from a concerned voter, but it could be generated by a script running on a server farm. A grassroots movement may be artificially inflated by coordinated bot activity. In such an environment, authentic civic engagement becomes difficult to distinguish from engineered spectacle.
This phenomenon is not confined to the United States. From Brazil to India, from the Philippines to Europe, automated accounts have been deployed to sway elections, suppress dissent, and promote authoritarian narratives. The tools are cheap, scalable, and hard to regulate. Unlike human operatives, bots do not tire, require payment, or leave incriminating evidence. They operate across borders, exploiting the global nature of social media platforms.
Regulatory responses have been uneven. Some countries mandate transparency for political advertising online, requiring disclosure of funding sources and targeting criteria. Others have attempted to ban bot activity outright. But enforcement remains challenging, especially when actors use decentralized networks or exploit jurisdictional loopholes.
More fundamentally, existing legal frameworks struggle to accommodate non-human actors. Can a bot commit defamation? Should platforms be liable for content generated by algorithms they design? If an AI system influences voter behavior, who bears responsibility—the developer, the deployer, or the machine itself?
These questions point to a deeper crisis in political theory. Modern democracies were built on the assumption of human autonomy. Citizens were presumed to act freely, form opinions independently, and engage in rational debate. But when algorithms curate news feeds, suggest connections, and predict preferences, they shape cognition in subtle, often invisible ways. Personalization, intended to improve user experience, can create filter bubbles that reinforce biases and reduce exposure to diverse viewpoints.
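The filter-bubble mechanism described above is, at bottom, a feedback loop: recommend what resembles past clicks, and exposure narrows over time. A toy sketch of such a personalizer (topic labels and the scoring rule are invented for illustration, not any platform's actual algorithm):

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Toy personalizer: score each candidate item by how often its
    topic already appears in the user's click history, then return
    the top-k. Unfamiliar topics are systematically down-ranked."""
    topic_counts = Counter(topic for _, topic in history)
    ranked = sorted(catalog,
                    key=lambda item: topic_counts[item[1]],
                    reverse=True)
    return ranked[:k]

history = [("a1", "politics"), ("a2", "politics"), ("a3", "sports")]
catalog = [("b1", "politics"), ("b2", "science"),
           ("b3", "politics"), ("b4", "sports")]
feed = recommend(history, catalog)
# "science" never appears in the history, so b2 is ranked last:
print([item for item, _ in feed])  # ['b1', 'b3', 'b4']
```

Each round of clicks on the resulting feed makes the history still more homogeneous, so the narrowing compounds without anyone having designed it as censorship.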
The result is a paradox: never have people had access to more information, yet never has shared reality seemed more elusive. In a world where every individual inhabits a customized informational universe, consensus becomes harder to achieve. Polarization intensifies, not because people are inherently divided, but because the technological environment amplifies division.
Zhang’s concern about the “soul” of democracy reflects this anxiety. If democracy depends on a collective will formed through open, inclusive dialogue, then the intrusion of artificial agents threatens its very essence. The soul, in this metaphorical sense, is the animating principle—the belief that ordinary people, through reasoned discourse, can shape their common future. When that process is hijacked by automated systems, the soul risks extinction.
Yet, the situation is not hopeless. Recognizing the problem is the first step toward mitigation. Scholars like Wu and Zhang are part of a growing interdisciplinary field—sometimes called techno-politics or digital political theory—that seeks to understand how technology reconfigures power, identity, and justice.
One possible response is to rethink the basis of human value outside of economic productivity. If AI renders labor redundant, societies may need to adopt models such as universal basic income (UBI), ensuring that all individuals receive material support regardless of employment status. Such policies would decouple survival from work, affirming human dignity as intrinsic rather than instrumental.
Another avenue is the development of AI ethics frameworks that prioritize transparency, accountability, and fairness. Initiatives like the EU’s Artificial Intelligence Act aim to classify AI systems by risk level and impose strict obligations on high-risk applications such as remote biometric identification (military uses, including autonomous weapons, fall outside the Act’s scope). While imperfect, these efforts represent attempts to align technological development with democratic values.
Education also plays a crucial role. Digital literacy must become a core competency, enabling citizens to recognize manipulation, evaluate sources, and navigate algorithmic environments critically. Civic education should include training in media analysis, data awareness, and ethical reasoning, preparing individuals to participate meaningfully in a digitized public sphere.
Furthermore, there is a need for international cooperation on AI governance. Given the borderless nature of digital technologies, unilateral regulations are insufficient. Global agreements on data privacy, algorithmic transparency, and cyber sovereignty could help prevent a race to the bottom in surveillance and control.
Ultimately, the challenge posed by AI is not merely technical but existential. It forces us to ask: What kind of society do we want to live in? One where efficiency and optimization reign supreme, or one that safeguards human flourishing, diversity, and freedom? The answer will depend not on algorithms, but on collective choices made through democratic processes.
As Wu concludes, the so-called “singularity”—a hypothetical point at which AI surpasses human intelligence—may or may not arrive. But even without reaching that threshold, AI is already reshaping civilization’s inner structure. The transformations are subtle, incremental, and often invisible—until they are not.
The visibility of AI today is not a sign of its maturity, but of its disruptive potential. It remains in the foreground because it unsettles our deepest assumptions about who we are and what we value. To ignore these questions is to surrender agency to code. To confront them is to reaffirm the human capacity for reflection, choice, and change.
Wu Guanjun, Department of Political Science, East China Normal University; Zhang Aijun, Department of Political Science, East China Normal University; Yuejiang Academic Journal, 2021, Issue 4, DOI: 10.13878/j.cnki.yjxl.2021.04.003