AI Security in Focus: A Global Research Landscape Unveiled
In an era where artificial intelligence (AI) is no longer a futuristic concept but a driving force behind technological, industrial, and societal transformation, the question of safety has taken center stage. As AI systems become increasingly embedded in critical infrastructure—from autonomous vehicles and smart grids to healthcare and national defense—the risks associated with their deployment are growing in complexity and consequence. A groundbreaking study published in the Journal of National University of Defense Technology offers one of the most comprehensive assessments to date of the global research landscape in AI security, combining scientific rigor with forward-looking policy insights.
Led by Wu Ji, Liang Jianghai, and Liu Shulei from the College of Advanced Interdisciplinary Studies at the National University of Defense Technology in Changsha, China, the research applies scientometric methods to analyze over 18,000 core academic papers sourced from the Web of Science database, spanning 2007 to 2017. The goal: to map the evolving contours of AI security research, identify dominant players, uncover emerging trends, and propose a structured framework for evaluating future technologies designed to safeguard intelligent systems.
The findings paint a picture of a field in transition—one that has matured significantly in its understanding of traditional cybersecurity challenges but remains in its infancy when it comes to addressing the unique risks posed by next-generation AI, such as deep learning models, synthetic media, and autonomous decision-making agents.
The Rise of AI and the Imperative for Security
Artificial intelligence is no longer confined to research labs or niche applications. It powers recommendation engines, drives self-driving cars, enables facial recognition, and even assists in medical diagnosis. However, with each advance comes a new vector for risk. High-profile incidents—such as the 2018 fatal crash involving an Uber self-driving test vehicle, the crashes linked to Boeing's MCAS automated flight-control software, and the proliferation of deepfake videos—have underscored the urgent need for robust AI security frameworks.
As Wu Ji and his colleagues emphasize, AI possesses a dual-use nature: it can be a powerful tool for societal benefit, but it also carries the potential for systemic disruption if not properly governed. The challenge lies not only in preventing malicious use but also in mitigating unintended consequences arising from technical flaws, design oversights, or ethical misalignments.
“The rapid development of AI demands a parallel evolution in our approach to security,” said Liang Jianghai, one of the co-authors. “We can no longer rely solely on traditional cybersecurity measures. The autonomy, adaptability, and opacity of modern AI systems require new paradigms for verification, accountability, and resilience.”
Mapping the Global Research Effort
To understand how the scientific community has responded to these challenges, the team employed CiteSpace, a widely used visualization tool in bibliometric analysis, to generate network maps of research collaboration, institutional influence, and thematic evolution.
The results reveal a clear hierarchy in global research output. The United States leads the field with 512 publications on AI security during the study period, followed by China with 310. India and Germany trail with approximately 150 papers each. This distribution reflects broader trends in AI research investment, with the U.S. and China dominating both public and private sector innovation.
But publication volume is only one measure of influence. A more telling metric is centrality—a network science concept that indicates how well-connected a node (in this case, a country or institution) is within the overall research ecosystem. In this regard, the United States not only leads in output but also serves as the most central hub, acting as a bridge between disparate research communities worldwide. China ranks second in centrality, highlighting its growing role as a connector in global AI discourse.
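The centrality the article refers to is commonly computed as betweenness centrality in CiteSpace-style analyses: a score for how often a node sits on the shortest paths between other nodes, and therefore how much it "bridges" the network. The minimal Python sketch below shows the idea on a toy country-level collaboration network built with NetworkX; the countries and edge counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch: betweenness centrality on a toy country-level
# collaboration network with NetworkX. All nodes and edge weights
# below are illustrative, not figures from the paper.
import networkx as nx

G = nx.Graph()
# Each edge stands for co-authored AI-security papers between two countries
# (hypothetical counts).
edges = [
    ("USA", "China", 120),
    ("USA", "Germany", 45),
    ("USA", "India", 40),
    ("China", "Germany", 20),
    ("China", "India", 15),
    ("Germany", "India", 8),
    ("USA", "UK", 60),
    ("UK", "Germany", 25),
]
for u, v, w in edges:
    G.add_edge(u, v, weight=w)

# Betweenness centrality: how often a node lies on shortest paths between
# other nodes, i.e. how strongly it acts as a bridge in the network.
centrality = nx.betweenness_centrality(G, normalized=True)
for country, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{country:8s} centrality = {score:.3f}")
```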
At the institutional level, the Chinese Academy of Sciences emerges as the most prolific and central contributor, followed closely by the University of Illinois and Tsinghua University. Notably, the National University of Defense Technology, where the study was conducted, appears in the top 15 for both publication volume and network centrality, underscoring its position as a key player in China’s AI security research agenda.
These findings suggest that while the U.S. maintains a strong lead, China is rapidly closing the gap—not just in quantity but in the quality and connectivity of its research. This competitive dynamic is likely to shape the future of AI governance, particularly as both nations pursue strategic advantages in military and civilian applications of intelligent systems.
Identifying Research Hotspots and Emerging Themes
Beyond who is doing the research, the study delves into what is being studied. By analyzing keywords from the 18,762 papers, the authors identified a cluster of recurring themes that define the current state of AI security research.
Top keywords include algorithm, privacy, network security, intrusion detection, authentication, machine learning, and smart grid. These reflect a strong emphasis on technical safeguards—ensuring that AI systems are resilient against cyberattacks, that data is protected, and that decisions can be verified and trusted.
For instance, intrusion detection systems (IDS) and cybersecurity protocols remain central concerns, especially as AI is increasingly deployed in networked environments like the Internet of Things (IoT) and industrial control systems. Similarly, privacy-preserving techniques such as differential privacy and homomorphic encryption are gaining traction as tools to prevent AI from exploiting sensitive personal data.
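To make the differential-privacy idea concrete, the sketch below applies the Laplace mechanism, the standard way to release an aggregate statistic with a quantifiable privacy guarantee: the noisier the answer, the stronger the protection. The counting query, sensitivity, and epsilon values are illustrative choices, not drawn from the study.

```python
# Minimal sketch of the Laplace mechanism for epsilon-differential privacy.
# The query, sensitivity, and epsilon values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace noise scaled to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: count of patients with a given diagnosis in a medical dataset.
true_count = 412      # the exact (sensitive) answer
sensitivity = 1       # one person changes a count by at most 1
for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity, epsilon)
    print(f"epsilon={epsilon:>4}: released count = {noisy:.1f}")
```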
However, the analysis also reveals a critical gap: while traditional IT security concerns are well-represented, research on the ethical, societal, and existential risks of AI remains underdeveloped. Topics like algorithmic bias, deepfakes, autonomous weapons, and AI alignment—issues that have sparked intense public debate—are not yet dominant in the academic literature.
“This doesn’t mean researchers aren’t aware of these challenges,” noted Liu Shulei. “But our data shows that the bulk of scholarly effort is still focused on immediate, technical problems rather than long-term, systemic risks.”
This imbalance has important implications. As AI systems grow more capable, their impact extends beyond code and circuits into the fabric of society. An algorithm that discriminates in hiring, a deepfake that manipulates public opinion, or a military drone that operates without human oversight—all represent security threats that cannot be solved by encryption or firewalls alone.
Toward a Holistic Framework for AI Security
Recognizing this limitation, the authors propose a qualitative analysis framework that expands the scope of AI security beyond technical vulnerabilities. Drawing inspiration from established security models such as the OSI reference model and the Common Criteria (CC), they structure their framework around three interconnected dimensions: technical, application, and governance.
The technical dimension addresses the foundational components of AI: data, algorithms, computing infrastructure, and networking. Here, the authors highlight persistent challenges such as the lack of formal verification methods for deep learning models, the vulnerability of training data to poisoning attacks, and the opacity of AI decision-making processes. “We still don’t have reliable ways to prove that a neural network will behave as intended under all conditions,” Wu Ji observed. “This is a fundamental security gap.”
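One concrete line of research on this verification gap is interval bound propagation, which computes guaranteed output ranges for a network when its input is perturbed within a bounded region. The sketch below illustrates the general technique on a tiny, randomly weighted ReLU network; the weights, input, and perturbation radius are toy values and are not taken from the paper.

```python
# Minimal sketch of interval bound propagation (IBP): bound what a small
# ReLU network can output when each input feature may move by +/- eps.
# Weights, input, and eps are toy values for illustration only.
import numpy as np

def interval_linear(W, b, lo, hi):
    """Propagate an input box [lo, hi] exactly through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A tiny 2-layer network: 3 inputs -> 4 hidden units -> 2 output scores.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = np.array([0.2, -0.5, 0.8])   # nominal input
eps = 0.05                        # allowed perturbation per feature
lo, hi = x - eps, x + eps

lo, hi = interval_relu(*interval_linear(W1, b1, lo, hi))
lo, hi = interval_linear(W2, b2, lo, hi)

# If the predicted class's lower bound exceeds the other class's upper
# bound, no perturbation within eps can flip the decision.
pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))
other = 1 - pred
print("verified robust within eps:", bool(lo[pred] > hi[other]))
```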
The application dimension focuses on how AI is deployed in real-world systems. From autonomous vehicles to financial trading platforms, the integration of AI introduces new failure modes. A self-driving car might misinterpret a stop sign due to adversarial perturbations; a medical AI could recommend incorrect treatment based on biased training data. The authors stress that system design must account for uncertainty, resilience, and fail-safe mechanisms—especially in safety-critical domains.
Finally, the governance dimension confronts the broader societal implications of AI. This includes ethical questions about privacy, fairness, and accountability, as well as policy challenges related to regulation, liability, and international cooperation. As AI begins to blur the lines between human and machine agency, traditional legal and moral frameworks may no longer suffice.
“This is where AI security becomes more than just an engineering problem,” Liang Jianghai explained. “It’s about ensuring that intelligent systems align with human values, that they are transparent and controllable, and that their benefits are distributed equitably.”
Policy Signals and Global Principles
The study also examines how national and international policies are shaping the discourse on AI security. In 2017, China released its New Generation Artificial Intelligence Development Plan, which explicitly called for the construction of a “safe and convenient intelligent society” and emphasized the need for legal, ethical, and regulatory safeguards.
Around the same time, the Asilomar AI Principles, formulated at a conference in California, outlined 23 guidelines for beneficial AI, covering areas such as transparency, safety, and peaceful use. The European Union followed with its Ethics Guidelines for Trustworthy AI, emphasizing human oversight, robustness, and accountability.
In 2019, China’s National Governance Committee for New-Generation Artificial Intelligence issued its own set of principles, advocating for “responsible AI” that is harmonious, fair, inclusive, and secure. These documents, while non-binding, signal a growing consensus among policymakers that AI must be developed with safety as a core design principle.
Yet, as the authors caution, there remains a significant gap between high-level principles and practical implementation. “Many of these guidelines are aspirational,” Liu Shulei said. “The real challenge is translating them into technical standards, audit procedures, and enforcement mechanisms.”
Charting the Future: 15 Critical AI Security Technologies
Building on their analysis, the research team identifies 15 emerging technologies that they believe will be crucial for securing the next generation of AI systems. These are organized according to the three dimensions of their framework.
In the enabling technologies category—focused on the technical layer—the authors highlight innovations such as privacy-preserving data encryption, adversarial training for machine learning models, and secure hardware supply chains. For example, homomorphic encryption allows data to be processed in encrypted form, preventing unauthorized access even during computation. Similarly, DARPA’s “Hardware Integrity for Exploitation” (HI-FIVE) program aims to detect counterfeit or tampered chips in critical systems—a growing concern as global semiconductor supply chains become more fragmented.
In the system and application domain, the focus shifts to human-AI collaboration frameworks, biometric authentication resistant to deepfakes, and safety-by-design architectures for autonomous systems. As AI takes on more decision-making roles, ensuring that humans remain in the loop—and that they can understand and override AI actions—is paramount. Techniques like explainable AI (XAI) and interpretable models are seen as essential tools in this effort.
Finally, in the security and governance space, the authors point to human performance enhancement technologies—such as brain-computer interfaces and augmented cognition—as a way to maintain human superiority in an age of increasingly capable machines. They also advocate for human-centered governance models that integrate ethical considerations into the design and deployment of AI systems.
To assess the maturity and importance of these technologies, the team applied a dual evaluation metric: technology readiness level (TRL) and technology importance degree (TID). TRL, based on the NASA scale from 1 (basic principles observed) to 9 (proven in operational environment), provides a measure of how close a technology is to real-world deployment. TID, meanwhile, reflects the urgency and impact of the risks a technology addresses.
For instance, formal verification of AI algorithms scores high on importance due to the critical need for trustworthy decision-making in autonomous systems, but remains at a relatively low TRL, indicating that much foundational research is still needed. In contrast, intrusion detection systems for AI networks are more mature but may address less existential risks.
One particularly forward-looking technology is human enhancement for AI oversight, which involves using neurotechnologies to improve human reaction times, cognitive processing, and situational awareness. While still in early stages, the authors argue that such capabilities may become necessary to keep pace with rapidly evolving AI agents.
Implications for Industry, Academia, and Policy
The study’s findings carry significant implications for multiple stakeholders. For researchers, it underscores the need to broaden the scope of AI security beyond traditional cybersecurity to include ethical, societal, and existential dimensions. For industry leaders, it highlights the importance of investing in safety-by-design approaches and adopting transparent development practices.
For policymakers, the research serves as a call to action. “Regulation must evolve alongside technology,” Wu Ji emphasized. “We need international standards for AI safety testing, certification, and incident reporting—similar to what exists in aviation or pharmaceuticals.”
The authors also stress the importance of interdisciplinary collaboration. AI security cannot be the sole domain of computer scientists; it requires input from ethicists, legal scholars, sociologists, and behavioral scientists. Only through such convergence can we hope to build intelligent systems that are not only powerful but also safe, fair, and aligned with human values.
Moreover, the study highlights the strategic importance of AI security in national competitiveness and defense. As AI becomes a key enabler in military applications—from autonomous drones to cyber warfare—the ability to secure these systems will be a decisive factor in future conflicts.
A Call for Proactive, Not Reactive, Security
One of the central messages of the paper is that AI security must be proactive, not reactive. Waiting for a catastrophic failure—a hacked autonomous weapon, a manipulated election, or a runaway AI system—to spur action would be a grave mistake.
“We’ve learned from the history of the internet that security is often an afterthought,” Liang Jianghai said. “With AI, we have a chance to get it right from the beginning. But that requires foresight, investment, and global cooperation.”
The authors conclude by calling for the establishment of a comprehensive AI safety research agenda, supported by sustained funding, international collaboration, and public-private partnerships. They also advocate for the creation of independent AI safety institutes—similar to nuclear regulatory bodies—that can conduct audits, set standards, and monitor compliance.
As artificial intelligence continues to reshape the world, the stakes could not be higher. The choices we make today—about how we design, deploy, and govern these systems—will determine whether AI becomes a force for liberation or a source of unprecedented risk.
This study, with its rigorous data analysis and forward-looking framework, provides a crucial roadmap for navigating that uncertain terrain. It reminds us that while AI may be intelligent, it is up to humans to ensure it is also safe.
Reference: Wu Ji, Liang Jianghai, Liu Shulei (College of Advanced Interdisciplinary Studies, National University of Defense Technology). Journal of National University of Defense Technology. DOI: 10.11887/j.cn.202103010