Global AI Policy Research: A Deep Dive into Trends and Gaps
The race to harness artificial intelligence is no longer confined to laboratories and corporate R&D departments; it has exploded onto the global policy stage. Governments from Washington to Beijing are scrambling to draft blueprints, not merely to foster innovation, but to manage the profound societal, economic, and ethical tremors this technology promises to unleash. In this high-stakes environment, understanding the intellectual landscape—the key questions being asked, the risks being debated, and the policy frameworks being proposed—is critical. A groundbreaking analysis, meticulously mapping the scholarly conversation across continents, reveals not just the current state of play, but also stark contrasts and crucial gaps that will shape the future of AI governance.
This isn’t just an academic exercise. The policies being formulated today will determine whether AI becomes a force for widespread prosperity or a source of unprecedented disruption and inequality. They will dictate the rules of engagement for trillion-dollar industries, redefine the nature of work, and challenge the very foundations of privacy and human agency. To navigate this complex terrain, policymakers, industry leaders, and concerned citizens need a clear, evidence-based map of the intellectual territory. That’s precisely what this comprehensive bibliometric study delivers, offering an unprecedented comparative view of how the world’s leading thinkers are grappling with the AI policy challenge.
The research, which analyzed 395 seminal papers from top-tier international (SSCI) and Chinese (CSSCI) academic journals, paints a vivid picture of two distinct, yet increasingly interconnected, intellectual ecosystems. The findings are both illuminating and sobering, highlighting areas of global consensus, critical divergences in focus, and a pressing need for more robust theoretical frameworks to guide real-world policy decisions.
One of the most striking revelations is the sheer velocity of the field. While foundational work on AI policy can be traced back to the 1980s in the West, the real explosion of scholarly activity is a phenomenon of the last decade, particularly the last five years. This surge mirrors the rapid maturation of AI technologies themselves—from theoretical curiosities to engines powering everything from medical diagnostics to autonomous vehicles. The bibliometric data shows a clear inflection point around 2016-2017, coinciding with landmark reports from the Obama administration in the United States and the subsequent unveiling of national AI strategies by major powers like China, the UK, Germany, and Japan. It was no longer a question of if governments should act, but how.
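An inflection point like the one described above is typically located by charting annual publication counts and looking for the sharpest year-over-year jump. The sketch below illustrates the idea; the counts and years are invented placeholders, not the study's actual data.

```python
# Illustrative sketch: locating an inflection point in annual publication
# counts, as a bibliometric trend analysis might. The counts below are
# hypothetical, not the study's data.

def inflection_year(counts):
    """Return the year with the largest year-over-year jump in publications."""
    years = sorted(counts)
    best_year, best_jump = None, float("-inf")
    for prev, cur in zip(years, years[1:]):
        jump = counts[cur] - counts[prev]
        if jump > best_jump:
            best_jump, best_year = jump, cur
    return best_year

pubs = {2013: 8, 2014: 10, 2015: 13, 2016: 21, 2017: 52, 2018: 70, 2019: 95}
print(inflection_year(pubs))  # prints 2017, the year with the sharpest rise
```

Real bibliometric work often uses more formal burst-detection methods, but the underlying question is the same: in which year did the growth curve bend upward?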
The most frequently cited, and therefore most influential, works in the global corpus form a kind of intellectual canon for the field. These are the papers that have fundamentally shaped the debate, setting the agenda for researchers and policymakers alike. Topping the list is “The Second Machine Age” by Erik Brynjolfsson and Andrew McAfee. This seminal work doesn’t just celebrate the potential of AI to drive progress and prosperity; it delivers a powerful, data-driven warning. The authors argue that while AI will create immense wealth, it will also radically reshape the labor market, rendering many human skills obsolete. Their core message is one of urgent adaptation: to thrive, societies must reinvent education, forge new human-machine partnerships, and implement proactive policies to manage the transition. It’s a clarion call for societal transformation, not just technological adoption.
Another cornerstone of the literature is the work of philosopher Nick Bostrom, particularly his book “Superintelligence.” Bostrom takes the conversation to an even more existential level, exploring the potential emergence of artificial general intelligence (AGI) that could surpass human intellect. He doesn’t shy away from the “control problem”—the terrifying prospect of creating an entity so powerful that its goals might not align with human survival. His work has been instrumental in moving the conversation beyond near-term economic impacts to the long-term, potentially civilization-altering risks of AI, advocating for rigorous safety research and international cooperation long before such systems are even feasible.
Complementing these broad, philosophical works are studies focused on concrete, near-term impacts. The 2017 paper by Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment,” provided one of the first systematic analyses of which jobs are most susceptible to automation. Their findings, suggesting that nearly half of U.S. jobs could be at high risk, sent shockwaves through policy circles and continue to inform workforce development strategies globally. Similarly, research into the “black box” nature of AI algorithms, exemplified by Frank Pasquale’s “The Black Box Society,” has been crucial in highlighting issues of transparency, accountability, and bias. As AI systems make increasingly consequential decisions—from loan approvals to parole hearings—their opacity becomes a fundamental threat to fairness and justice, demanding new regulatory approaches.
The thematic analysis of the international literature reveals a remarkably diverse and mature field. Researchers are not just asking what AI can do, but where it is being applied and what the consequences are. The study identifies twelve major thematic clusters, demonstrating the pervasive influence of AI across virtually every sector of society.
One major cluster revolves around “knowledge management” and the future of education. Scholars are examining how AI-driven data analytics and automated systems are transforming educational governance. The focus is on how algorithms can predict student outcomes, personalize learning, and even shape educational policy itself. This raises profound questions about equity, data privacy, and the role of human judgment in nurturing young minds. Are we creating a more efficient education system, or one that is more rigid and potentially discriminatory?
Another significant area of research is “epidemiology” and healthcare. Here, the focus is overwhelmingly positive, exploring how AI can revolutionize disease prevention, diagnosis, and treatment. Researchers are developing intelligent systems to track disease outbreaks like dengue fever, analyze medical images with superhuman accuracy, and provide personalized health advice through online support groups. The potential to save lives and reduce healthcare costs is immense, making this one of the least ethically contentious and most rapidly advancing areas of AI application and policy.
The cluster labeled “dynamic programming” delves into the nuts and bolts of AI implementation, particularly in business and public administration. Studies here examine how machine learning can optimize complex logistical problems, such as inventory management in volatile supply chains. They also explore the challenges of deploying data science in the public sector, where siloed agencies and legacy systems often hinder innovation. The research emphasizes the need for cross-departmental collaboration and new skill sets to unlock AI’s potential for improving government efficiency and service delivery.
The theme of “smart cities” represents perhaps the most visible and ambitious application of AI policy. Researchers are developing frameworks to integrate AI and big data into urban planning, aiming to create cities that are not just more efficient, but also more sustainable and livable. This involves using AI to manage traffic flow, optimize energy consumption, and improve public safety. However, the literature also sounds a note of caution, warning that an over-reliance on technology could erode human-centric urban design and exacerbate social inequalities if not carefully managed.
Perhaps the most contentious and widely studied cluster is “employment.” The impact of AI on the labor market is a central concern for policymakers everywhere. The research presents a nuanced picture. Some studies, like those by Vermeulen and Kesselhut, argue that while AI will displace certain jobs, it will also create new ones, representing a “structural change” rather than an outright “end of work.” Others, like the work of Frank and Autor, paint a more alarming picture, suggesting that AI could lead to massive labor market polarization, where a small group of highly skilled workers reaps enormous benefits while a large swath of the middle class is left behind. This debate is far from settled and remains a primary driver of policy experimentation, from universal basic income trials to massive reskilling initiatives.
Finally, the cluster on “autonomous vehicles” highlights the unique policy challenges posed by a single, high-profile AI application. The promise is clear: reducing accidents caused by human error and alleviating traffic congestion. But the research delves into the complex web of new risks, from ethical dilemmas in split-second decision-making (the infamous “trolley problem”) to cybersecurity threats and the need for entirely new liability and insurance frameworks. Public acceptance is also a major hurdle, with studies showing that while many consumers are intrigued by the technology, significant safety and privacy concerns remain.
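Thematic clusters like the twelve above are conventionally derived through co-word analysis: keywords that repeatedly appear together in the same papers are linked, and the connected groups become clusters. The following is a minimal sketch of that idea; the papers and keywords are invented examples, and real studies use richer similarity measures and clustering algorithms.

```python
# Hedged sketch of co-word analysis: link keywords that co-occur in at
# least `min_cooccur` papers, then read off the connected components.
from collections import defaultdict
from itertools import combinations

def coword_clusters(papers, min_cooccur=2):
    """papers: list of keyword lists. Returns keyword clusters (sets),
    largest first, from the thresholded co-occurrence graph."""
    cooccur = defaultdict(int)
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            cooccur[(a, b)] += 1

    # Union-find over keywords to merge linked terms into clusters.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for kws in papers:
        for k in kws:
            find(k)  # register every keyword
    for (a, b), n in cooccur.items():
        if n >= min_cooccur:
            parent[find(a)] = find(b)  # union strongly co-occurring terms

    clusters = defaultdict(set)
    for k in list(parent):
        clusters[find(k)].add(k)
    return sorted(clusters.values(), key=len, reverse=True)

papers = [
    ["smart cities", "big data", "urban planning"],
    ["smart cities", "big data"],
    ["employment", "automation"],
    ["employment", "automation", "polarization"],
]
for cluster in coword_clusters(papers):
    print(sorted(cluster))
```

With this toy input, “smart cities” and “big data” fall into one cluster and “employment” and “automation” into another, mirroring in miniature how the study's clusters emerge from shared vocabulary across papers.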
In stark contrast to the broad, established international field, the landscape of AI policy research in China, while rapidly growing, is more focused and closely aligned with the state’s strategic priorities. The analysis of Chinese literature reveals three primary thematic clusters, all orbiting around the central axis of national governance.
The first and most prominent cluster is the “development path of AI policies under the background of national governance.” This reflects a top-down, state-centric approach. Chinese scholars are deeply engaged in exploring how AI can be harnessed as a tool to enhance the efficiency and effectiveness of state power—a concept termed “shan zhi” or “good governance through AI.” This includes using AI for predictive policing, optimizing public service delivery, and managing vast populations with unprecedented precision. However, scholars like Mei Lirun also grapple with the “dark side” of this power, warning that the drive for AI-enabled governance must be weighed against negative impacts such as mass unemployment and social unrest. The research acknowledges that while AI can boost national competitiveness, it also poses significant challenges to social stability that must be proactively managed.
The second cluster focuses on “application fields,” examining how AI policy is being implemented in specific sectors like education, social governance, and taxation. In education, the research often looks outward, studying models like Stanford University’s AI talent development programs for lessons that can be adapted to the Chinese context. Domestically, the focus is on how AI can optimize teaching content and improve learning outcomes. In social governance, AI is seen as a tool to reduce the cost of analyzing complex social problems and to formulate more “scientific” public policies. The research on taxation is particularly forward-looking, recognizing that the AI-driven economy may require a complete overhaul of existing tax theories and policies to ensure fairness and generate adequate government revenue in a world where traditional labor and capital are being redefined.
The third cluster is methodological, centered on “quantitative analysis of AI policy texts.” This reflects a strong emphasis on empirical, data-driven policy evaluation. With at least 21 Chinese provinces having issued AI development plans by early 2019, scholars have a rich corpus of official documents to analyze. Using content analysis and policy tool frameworks, they meticulously dissect these texts to assess their alignment with national strategy, their balance between supply-side (e.g., funding R&D) and demand-side (e.g., creating markets) incentives, and their focus on different stages of the AI innovation lifecycle. This body of work is crucial for identifying gaps and inefficiencies in the current policy regime, such as the need for more demand-side policies to stimulate commercialization as the industry matures.
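The policy-tool coding described above can be pictured as matching passages of a plan against a coding dictionary for each tool category. The sketch below is a deliberately simplified stand-in: real studies rely on trained human coders and validated codebooks, and the keyword lists and sample passages here are invented.

```python
# Hypothetical sketch of policy-tool content analysis: classify passages
# as supply-side or demand-side by dictionary matching. The term lists
# and sample plan are placeholders, not the study's actual codebook.
SUPPLY_TERMS = {"funding", "r&d", "talent", "infrastructure"}
DEMAND_TERMS = {"procurement", "market", "pilot application"}

def code_policy(passages):
    """Tally passages by policy-tool category (first matching category wins)."""
    tally = {"supply": 0, "demand": 0, "uncoded": 0}
    for text in passages:
        t = text.lower()
        if any(term in t for term in SUPPLY_TERMS):
            tally["supply"] += 1
        elif any(term in t for term in DEMAND_TERMS):
            tally["demand"] += 1
        else:
            tally["uncoded"] += 1
    return tally

plan = [
    "Increase provincial funding for AI R&D centers.",
    "Build computing infrastructure for AI startups.",
    "Use government procurement to create early markets for AI products.",
    "Strengthen international exchange and cooperation.",
]
print(code_policy(plan))  # prints {'supply': 2, 'demand': 1, 'uncoded': 1}
```

A tally skewed toward supply-side counts, as in this toy example, is exactly the kind of imbalance the quantitative studies flag when they call for more demand-side instruments.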
The evolution of these research fields tells an even more compelling story. Internationally, the trajectory of AI policy research can be traced through three distinct waves. The first wave, from the 1980s to the early 2000s, was characterized by foundational exploration. Key terms like “simulation” and “expert systems” dominated, reflecting an era focused on building the basic technological capabilities and finding initial, often niche, applications, particularly in healthcare.
The second wave, emerging around 2012, saw a profound shift toward human-centric concerns. Terms like “mental health,” “public health,” and “care” surged to prominence. This reflected a growing recognition that AI’s most profound impacts would be on human well-being. Researchers began to explore how AI could be used to deliver mental health services, manage chronic diseases, and provide elder care, fundamentally changing the relationship between technology and the human body and mind.
The third and current wave, which has gathered momentum in the most recent years of the study period, is defined by a focus on systemic impact and intervention. Keywords like “model,” “society,” “intervention,” and “impact” highlight a maturing field that is no longer just observing AI’s effects but actively seeking to model them and design policy interventions to steer outcomes. The emphasis is on understanding AI’s broader societal consequences and developing governance mechanisms to mitigate risks and maximize benefits.
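The wave analysis above amounts to tallying keyword frequencies within successive time windows and seeing which terms dominate each period. A minimal sketch, with invented records and arbitrary window boundaries rather than the study's corpus:

```python
# Sketch of period-by-period keyword analysis: count keyword frequencies
# inside each time window. Records and window boundaries are illustrative.
from collections import Counter

def keywords_by_period(records, periods):
    """records: (year, [keywords]) pairs; periods: (start, end, label) triples.
    Returns {label: Counter of keyword frequencies within that window}."""
    out = {label: Counter() for _, _, label in periods}
    for year, kws in records:
        for start, end, label in periods:
            if start <= year <= end:
                out[label].update(kws)
    return out

records = [
    (1995, ["expert systems", "simulation"]),
    (2001, ["expert systems"]),
    (2014, ["mental health", "care"]),
    (2020, ["impact", "intervention", "society"]),
]
periods = [(1980, 2005, "wave 1"), (2012, 2018, "wave 2"), (2019, 2021, "wave 3")]
waves = keywords_by_period(records, periods)
print(waves["wave 1"].most_common(1))  # prints [('expert systems', 2)]
```

Ranking each window's counter then surfaces the dominant vocabulary of each era, which is how shifts like “expert systems” giving way to “mental health” and then to “intervention” become visible.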
In China, the evolution is more compressed but equally dynamic. The research has moved rapidly from initial theoretical exploration to deep, practical application. This evolution is tightly coupled with the national strategic agenda. As the central government has elevated AI to a matter of national security and economic supremacy, the scholarly focus has followed suit, delving into the practical challenges of implementation in governance, industry, and education. The trajectory is one of increasing sophistication and alignment with state objectives, moving from “what is AI policy?” to “how can AI policy best serve the goals of national rejuvenation?”
The comparative analysis yields several critical insights for the future of AI policy, particularly for the Chinese research community. First and foremost, there is an urgent need to accelerate the construction of a comprehensive theoretical framework for AI policy. While empirical studies of policy texts and specific applications are valuable, they risk being fragmented and reactive without a unifying theory. Such a framework should map the causal chain from policy motivations and design, through implementation pathways, to societal and economic outcomes. It should provide a common language and set of principles to guide policymakers across different sectors and levels of government.
Second, there is a critical need for deeper, on-the-ground policy research. Much of the current analysis, particularly in China, relies on secondary data from official documents and websites. While this provides a useful macro-level view, it often misses the messy reality of policy implementation. Future research must involve rigorous fieldwork—interviews with policymakers, industry leaders, and citizens; case studies of pilot programs; and ethnographic observations of how AI systems actually function in real-world settings. This granular, qualitative data is essential for identifying unforeseen risks, understanding local resistance, and designing policies that are not just technically sound but also socially and politically viable.
Third, the research must achieve a deeper, more sophisticated coupling between AI policy and the broader project of national governance. AI is not just a tool for economic growth; it is a transformative force that will reshape the relationship between the state and its citizens, between public and private spheres, and between humans and machines. Research must move beyond simply applying AI to existing governance problems and start reimagining governance itself for an AI-driven future. This means exploring how AI can be used to enhance democratic participation, ensure algorithmic accountability, and protect fundamental rights in an era of pervasive surveillance and automated decision-making.
The global competition in AI is often framed in terms of technological supremacy—who can build the most powerful algorithms or the fastest chips. But this bibliometric analysis reveals that the true battleground is in the realm of ideas and governance. The nations that will ultimately lead in the AI era are not just those with the best engineers, but those with the most thoughtful, adaptive, and ethically grounded policy frameworks. The research by Zheng Ye, Ren Mudan, and Jane E. Fountain provides an indispensable map of this intellectual terrain, highlighting the paths taken, the challenges ahead, and the critical work that remains to be done. The future of AI is not preordained; it will be shaped by the policies we choose to enact today, and the quality of the research that informs those choices.
By Zheng Ye, Ren Mudan, Jane E. Fountain. Published in JOURNAL OF INTELLIGENCE, 2021,40(1):48-55. DOI:10.3969/j.issn.1002-1965.2021.01.007