Deepfakes Reshape Political Discourse, Demand New Countermeasures
The rapid evolution of artificial intelligence has introduced a transformative force into the digital landscape—deepfake technology. Once confined to niche applications in entertainment and digital art, deepfakes have now permeated the political sphere, fundamentally altering how public opinion is formed, disseminated, and manipulated. As AI-generated audio, video, and images become increasingly indistinguishable from reality, the integrity of political discourse faces unprecedented challenges. A comprehensive study by Zhang Aijun and Wang Fang from the School of Journalism and Communication at Northwest University of Political Science and Law examines how deepfakes are reshaping political public opinion and calls for a multifaceted response to counter their destabilizing effects.
The research, published in Journal of Hohai University (Philosophy and Social Sciences Edition), highlights that deepfakes are no longer merely a technological novelty but a powerful instrument capable of distorting truth, amplifying misinformation, and influencing democratic processes. Unlike traditional forms of disinformation, which rely on text-based propaganda or selectively edited media, deepfakes exploit the human tendency to trust visual and auditory evidence. This psychological vulnerability makes them particularly effective in shaping public perception, especially in politically charged environments.
Zhang and Wang identify three primary dimensions through which deepfakes exert influence: the political, economic, and social. While these domains may appear distinct, the authors argue they are deeply interconnected, with economic and social manipulations often spilling over into the political realm. This “spillover effect” underscores the complexity of regulating deepfakes, as interventions in one domain can have cascading consequences across others.
In the political dimension, deepfakes pose a direct threat to national security and ideological stability. The authors note that malicious actors—ranging from domestic extremists to foreign adversaries—can use AI to generate convincing videos of political leaders making inflammatory statements, engaging in illegal activities, or expressing views contrary to their public positions. Such fabricated content can trigger public outrage, erode trust in institutions, and polarize societies. The study references the concept of “frame conflict,” a cognitive theory suggesting that individuals interpret information through pre-existing mental frameworks. When deepfakes disrupt these frameworks—by presenting plausible yet false narratives—public discourse becomes fragmented, making consensus and rational debate increasingly difficult.
One of the most insidious aspects of deepfake-driven political manipulation is its ability to foster ideological polarization. By targeting emotionally charged issues such as national identity, governance legitimacy, and social justice, deepfakes amplify existing divisions. The authors warn that this can lead to a breakdown in shared reality, where opposing factions no longer agree on basic facts, undermining the foundation of democratic discourse. In extreme cases, deepfakes may even incite social unrest by fabricating events that never occurred, such as staged protests, police brutality, or diplomatic incidents.
The economic dimension of deepfakes is equally concerning. The authors observe that the technology is being integrated into the digital economy, particularly within the entertainment and advertising industries. While some applications—such as digital resurrection of deceased actors or personalized virtual influencers—offer commercial benefits, they also normalize the consumption of synthetic media. As users become accustomed to manipulated content, their ability to discern truth from fiction diminishes. This erosion of media literacy creates fertile ground for political deepfakes to thrive.
Moreover, the commodification of attention in the digital economy incentivizes the creation of sensational and emotionally charged content. Deepfakes, with their high engagement potential, fit perfectly into this model. Platforms driven by algorithmic recommendation systems prioritize content that generates clicks, shares, and reactions—regardless of authenticity. As a result, deepfakes that provoke outrage or amusement are more likely to go viral, further distorting public discourse. The study emphasizes that this attention-driven economy not only rewards deception but also marginalizes nuanced, fact-based political discussion.
The social dimension reveals how deepfakes are transforming human interaction in the digital age. The authors describe a shift toward “digital existence,” where individuals navigate online spaces through curated and often artificial identities. Deepfake technology enables users to adopt virtual personas, manipulate their appearance, and engage in performative communication. While this may enhance self-expression, it also blurs the line between authenticity and fabrication.
This phenomenon, termed “performative, embodied, and nodal existence,” reflects a broader trend in which individuals construct their online identities through a combination of real and synthetic elements. In political contexts, this can lead to the proliferation of fake accounts, bot networks, and AI-driven personas that mimic human behavior. These synthetic actors can infiltrate social networks, amplify specific narratives, and create the illusion of widespread public support for certain ideologies or policies.
Zhang and Wang argue that this digital transformation has profound implications for political participation. On one hand, it democratizes content creation, allowing marginalized voices to be heard. On the other hand, it enables large-scale manipulation by those with access to advanced AI tools. The result is a paradox: while more people can participate in public discourse, the authenticity and reliability of that discourse are increasingly compromised.
The study further explores how deepfakes are changing the mechanisms of political public opinion generation. Traditionally, public opinion emerged from organic discussions among citizens, mediated by journalists, scholars, and policymakers. Today, however, AI systems play an active role in shaping public opinion through what the authors describe as machine-generated, human-machine co-created, and composite-generated discourse.
Machine-generated public opinion refers to content produced entirely by AI systems, such as social bots or automated news generators. These systems can create and disseminate deepfakes at scale, bypassing human oversight. The authors distinguish between single-machine generation, where one AI system produces content, and multi-machine collaboration, where multiple AI agents coordinate to amplify a message. In both cases, the speed and volume of dissemination far exceed human capabilities, making it difficult for fact-checkers and regulators to respond in real time.
Human-machine co-creation represents a more subtle form of influence. In this model, AI does not replace human actors but augments their capabilities. For example, political operatives may use AI tools to generate persuasive speeches, design campaign visuals, or analyze voter sentiment. While this can enhance efficiency, it also introduces a layer of algorithmic bias, as AI systems are trained on historical data that may reflect existing inequalities or ideological preferences.
The concept of “composite generation” further illustrates the complexity of modern public opinion formation. The authors identify three forms: vertical, horizontal, and three-dimensional generation. Vertical generation refers to the lifecycle of a public opinion event—from its emergence and development to potential reversal. Deepfakes accelerate this process by enabling rapid fabrication and dissemination, often leading to premature conclusions before facts are verified.
Horizontal generation describes the spread of public opinion across different platforms and communities. Deepfakes, due to their visual and emotional impact, are highly shareable, allowing them to transcend linguistic, cultural, and geographic boundaries. This cross-platform diffusion makes containment difficult, as a single fabricated video can spark parallel discussions in multiple digital ecosystems.
Three-dimensional generation captures the interplay between individual, institutional, and systemic levels of public opinion. At the individual level, deepfakes influence personal beliefs and attitudes. At the institutional level, they challenge the credibility of media organizations and government agencies. At the systemic level, they alter the structural dynamics of public discourse, favoring sensationalism over substance.
The content of political public opinion is also undergoing a radical transformation. The authors identify three key shifts: the creation of something from nothing, the blurring of truth and falsehood, and the imbalance between memory and forgetting.
“Creation from nothing” refers to the fabrication of events, statements, or behaviors that never occurred. Unlike traditional rumors, which may contain partial truths, deepfakes present fully synthetic realities. Once disseminated, these fictions can gain traction, especially if they align with pre-existing biases or fears. The psychological impact is profound: individuals may begin to doubt their own perceptions, leading to confusion, anxiety, and disengagement from civic life.
The “blurring of truth and falsehood” reflects a growing epistemological crisis. In the past, truth was often determined through consensus, evidence, and institutional verification. Today, deepfakes undermine these mechanisms by presenting false information with the appearance of authenticity. The authors invoke Plato’s “allegory of the cave” to illustrate this phenomenon: just as prisoners in the cave mistake shadows for reality, modern audiences may accept deepfakes as truth, especially when they are not equipped with the tools to detect deception.
The imbalance between memory and forgetting is another critical concern. In the digital age, information is rarely erased. Once a deepfake is uploaded, it can be copied, shared, and archived indefinitely. This permanence contradicts the natural human tendency to forget, which serves as a psychological and social reset mechanism. When false memories are preserved and reinforced online, they become difficult to dislodge, even after being debunked. The authors warn that this can lead to a form of “digital colonization,” where powerful actors use deepfakes to rewrite history, shape collective memory, and control narratives.
Given these challenges, the study calls for a “deep countermeasure” strategy—a comprehensive response that addresses the ethical, practical, and philosophical dimensions of deepfake regulation. The authors propose three pathways: value-based intervention, practical platform development, and conceptual reframing.
The value-based pathway emphasizes the integration of ethical principles into AI systems. This includes embedding ethical boundaries and guidelines into machine learning algorithms, ensuring that AI development prioritizes truth, accountability, and social well-being. The authors stress that technology should serve humanity, not exploit it. By institutionalizing ethical norms within AI design, developers can create systems that resist misuse and promote transparency.
The practical pathway involves the creation of intelligent countermeasure platforms. These platforms would serve as hubs for dialogue, agenda-setting, and political communication. They would enable stakeholders—including governments, tech companies, civil society, and citizens—to collaborate on detecting and responding to deepfakes. Key functions would include real-time verification tools, public awareness campaigns, and policy coordination. The goal is to foster a resilient information ecosystem capable of withstanding synthetic media attacks.
The conceptual pathway advocates for an “intelligent appropriation” mindset. Drawing on the Chinese concept of “appropriation” (referring to the selective adoption of foreign ideas and technologies), the authors argue that societies should critically evaluate existing deepfake countermeasures from around the world. Rather than blindly adopting foreign models, they should adapt best practices to local contexts, ensuring that solutions are both effective and culturally appropriate. This includes studying regulatory frameworks in the European Union and the United States, where laws on misinformation and digital authenticity are being developed.
The study concludes with a cautionary note: while deepfakes present significant risks, they also offer opportunities for innovation in media, education, and governance. The challenge lies in harnessing their potential while mitigating their harms. The authors emphasize that humans must remain the stewards of technology, not its subjects. This requires a collective effort to strengthen digital literacy, reinforce institutional safeguards, and uphold ethical standards in the age of artificial intelligence.
As deepfake technology continues to advance, the need for proactive governance becomes ever more urgent. The research by Zhang Aijun and Wang Fang provides a timely and rigorous analysis of the political implications of synthetic media. It serves as both a warning and a roadmap, urging policymakers, technologists, and citizens to act before the boundaries of truth and reality become irreparably blurred.
Zhang Aijun, Wang Fang (School of Journalism and Communication, Northwest University of Political Science and Law). “Artificial Intelligence and the Transformation of Political Discourse: Deepfakes and Public Opinion in the Digital Age.” Journal of Hohai University (Philosophy and Social Sciences Edition), 2021, 23(4): 29-36. DOI: 10.3876/j.issn.1671-4970.2021.04.005