Securing the Future of AI-Driven Infrastructure in China
As the global digital economy accelerates, artificial intelligence (AI) has emerged as a cornerstone of national development strategies, particularly within the context of China’s ambitious “New Infrastructure” (New Infra) initiative. Spearheaded by the Chinese government to stimulate post-pandemic economic recovery and long-term technological competitiveness, the New Infra program encompasses next-generation technologies such as 5G, the Internet of Things (IoT), data centers, and, most critically, AI. Unlike traditional infrastructure focused on physical construction, New Infra emphasizes digital, intelligent, and interconnected systems that serve as the backbone for future industrial transformation.
At the heart of this transformation lies artificial intelligence—not merely as a standalone technology, but as a foundational layer that enables automation, predictive analytics, and intelligent decision-making across sectors ranging from healthcare and manufacturing to transportation and finance. The integration of AI into core infrastructure systems amplifies its societal impact, making it both a powerful engine for innovation and a potential vector for systemic risk. Recognizing this dual nature, researchers from the Security Research Institute at the China Academy of Information and Communications Technology (CAICT) have issued a comprehensive analysis of the imperative to build a robust security framework to safeguard AI-driven infrastructure.
In a recent publication in Information and Communications Technology and Policy, Wei Wei, Niu Jinhang, and Jing Huiyun outline the multifaceted challenges confronting China’s AI New Infra ambitions and propose a strategic roadmap for ensuring its secure and sustainable development. Their work, grounded in a literature review and industry consultations, provides a timely and authoritative assessment of the risks and opportunities in one of the world’s most dynamic technological landscapes.
The authors begin by contextualizing AI within the broader New Infra agenda. They define AI New Infra not simply as the deployment of AI tools, but as the establishment of a comprehensive ecosystem comprising hardware (such as AI chips and computing clusters), software (including machine learning frameworks and algorithm libraries), and data infrastructure (encompassing data collection, storage, and sharing mechanisms). This ecosystem is designed to provide public, scalable, and accessible AI services—akin to utilities—that can be leveraged by businesses, governments, and individuals across diverse domains.
This shift toward AI as a public utility introduces new expectations regarding reliability, fairness, and safety. The authors emphasize that AI systems integrated into critical infrastructure must meet higher standards than those used in consumer applications. A malfunction in an AI-powered traffic management system or a misdiagnosis by an AI-assisted medical platform could have far-reaching consequences, affecting public safety and trust in technology.
However, the path to realizing this vision is fraught with challenges. The paper identifies four major dimensions of risk: geopolitical competition, technical limitations, security vulnerabilities, and governance gaps. Each of these areas presents unique obstacles that must be addressed through coordinated policy, technological innovation, and international cooperation.
First, the authors highlight the intensifying geopolitical contest over AI leadership. While China has achieved global prominence in AI applications—particularly in computer vision and speech recognition—it remains heavily dependent on foreign technologies for foundational components such as high-performance computing chips and deep learning frameworks. This dependency creates strategic vulnerabilities, especially as the United States and its allies tighten export controls and restrict technology transfers.
Since 2019, the U.S. Department of Commerce has placed dozens of Chinese AI companies on its Entity List, effectively blocking their access to American-made semiconductors and software. These measures are part of a broader effort to contain China’s technological rise and maintain Western dominance in critical technologies. The formation of the Global Partnership on Artificial Intelligence (GPAI), which promotes AI development aligned with democratic values and human rights, further signals a geopolitical bifurcation in AI governance.
The authors caution that such developments could fragment the global AI ecosystem, limiting China’s ability to collaborate on international standards and participate in global AI markets. To counter this, they advocate for a dual strategy: strengthening domestic innovation capabilities while seeking selective international partnerships, particularly with countries in Southeast Asia, the Asia-Pacific region, and along the Belt and Road Initiative corridors.
Second, the paper examines the intrinsic technical limitations of current AI systems. Despite rapid progress, most AI technologies today are based on narrow, data-intensive machine learning models that lack robustness, interpretability, and generalization ability. These limitations pose significant barriers to widespread deployment in real-world environments.
One major issue is data dependency. AI models require vast amounts of high-quality, labeled data to function effectively. However, many traditional industries—such as agriculture, manufacturing, and healthcare—have yet to undergo full digital transformation, resulting in fragmented, siloed, or incomplete data sets. Without sufficient data, AI systems cannot be trained accurately, undermining their utility and reliability.
Another concern is algorithmic fragility. Deep learning models, while powerful, are often brittle when exposed to novel or adversarial inputs. For example, slight perturbations in input data—such as changing a few pixels in an image—can cause an AI system to misclassify objects entirely. This vulnerability to so-called adversarial attacks is particularly dangerous in safety-critical applications such as autonomous driving or industrial control systems, where errors can lead to catastrophic outcomes.
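To make this concrete, consider a minimal sketch of the fast gradient sign method (FGSM), a standard adversarial-attack technique from the research literature rather than an example drawn from the paper itself. The classifier, image batch, and labels referenced in the usage comment are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge each pixel by +/- epsilon
    in the direction that maximally increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A perturbation small enough to be imperceptible to humans,
    # yet often sufficient to change the prediction entirely.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier`, `img`, and `lbl` are assumed to be
# a trained model, a normalized image batch, and its true class labels.
# adv = fgsm_perturb(classifier, img, lbl, epsilon=0.03)
# print(classifier(adv).argmax(dim=1))  # may now disagree with lbl
```

The unsettling point is that the perturbation is computed directly from the model's own gradients, so a more accurate model is not automatically a more robust one.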
Moreover, the complexity of AI systems creates a severe talent shortage. The authors cite estimates indicating a deficit of over 5 million AI professionals in China, with a particular scarcity of individuals who possess both technical expertise and domain-specific knowledge. This skills gap hampers the pace of innovation and limits the ability of organizations to implement and maintain AI systems securely.
Third, the paper delves into the growing security threats associated with AI infrastructure. As AI platforms become more open and interconnected, they expose new attack surfaces for malicious actors. The authors identify two primary categories of risk: internal security flaws within AI systems and external cyberattacks targeting AI-enabled networks.
Internal risks stem from the very design of AI systems. For instance, pre-trained models and algorithm libraries shared through open platforms may contain hidden backdoors or malicious code inserted by untrustworthy contributors. These “Trojan models” can be activated remotely to manipulate system behavior, steal sensitive data, or disrupt operations. Because such attacks are often undetectable through conventional security tools, they represent a stealthy and persistent threat.
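A short sketch illustrates how such a backdoor is typically implanted. The data-poisoning construction below is a standard technique from the backdoor literature, not a detail taken from the paper; the tensor shapes, pixel range, and trigger pattern are illustrative assumptions.

```python
import torch

def poison_batch(images, labels, target_class=0, poison_rate=0.05):
    """Implant a backdoor by stamping a small white square (the
    trigger) onto a fraction of training images and relabeling them.
    Assumes images of shape (N, C, H, W) with pixel values in [0, 1].
    A model trained on this data scores normally on clean inputs,
    which is why conventional testing fails to detect the backdoor."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -4:, -4:] = 1.0   # 4x4 trigger in the corner
    labels[idx] = target_class       # attacker-chosen output class
    return images, labels
```

At inference time, any input stamped with the same corner patch is steered toward the attacker's chosen class, which is what allows such models to be "activated remotely" in the authors' sense.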
Additionally, data privacy is a major concern. AI systems rely on massive data inputs, often including personal or sensitive information. When users upload data to shared AI platforms, there is a risk of unauthorized access, data leakage, or misuse. Current data protection mechanisms are insufficient to address these risks, especially in cross-border data flows where jurisdictional boundaries complicate enforcement.
External threats are equally pressing. As AI is embedded into cloud platforms, edge devices, and IoT networks, the overall attack surface expands dramatically. Cybercriminals can exploit vulnerabilities in low-security endpoints—such as smart sensors or industrial controllers—to gain access to central AI systems. Once inside, they can manipulate training data, alter model parameters, or exfiltrate intellectual property. The distributed nature of AI infrastructure makes it difficult to monitor and defend against such attacks in real time.
Fourth, the authors address the governance challenges posed by AI integration. Traditional regulatory frameworks are ill-suited to the cross-sectoral and adaptive nature of AI technologies. For example, regulating autonomous vehicles involves multiple agencies—transportation, public security, industry, and telecommunications—each with its own mandate and approach. This fragmentation can lead to overlapping jurisdictions, regulatory gaps, or inconsistent enforcement.
Furthermore, the opacity of AI decision-making complicates accountability. When an AI system makes an erroneous or biased decision, it is often difficult to trace the root cause due to the “black box” nature of deep learning models. This lack of transparency undermines public trust and poses legal and ethical dilemmas, especially in high-stakes domains like criminal justice or credit scoring.
To address these multifaceted challenges, the authors propose a holistic approach to building a national AI security assurance system. Their recommendations span four key pillars: technological innovation, governance reform, technical protection, and international engagement.
The first pillar focuses on enhancing core technological capabilities. The authors stress the importance of self-reliance in critical areas such as semiconductor design, AI chip manufacturing, and foundational software development. They recommend leveraging national and regional industrial funds to support collaborative R&D projects among leading enterprises, research institutions, and startups. By fostering innovation ecosystems, China can reduce its dependence on foreign technologies and accelerate the domestic production of AI-enabling components.
Equally important is talent development. The authors call for reforms in education and human resource policies to cultivate a new generation of AI professionals. This includes expanding university programs in AI and data science, promoting industry-academia partnerships, and creating incentives for overseas experts to return or work in China. They also suggest granting state-owned enterprises greater flexibility in compensating high-skilled foreign talent, thereby improving China’s competitiveness in the global talent market.
The second pillar centers on establishing a multi-stakeholder governance framework. The authors argue that AI security cannot be managed by the government alone; it requires active participation from enterprises, civil society, and technical communities. They propose the creation of a national AI safety governance body that brings together representatives from different sectors to develop common standards, share best practices, and coordinate incident response.
Within organizations, the authors emphasize the need for internal governance structures. Companies deploying AI should establish dedicated AI ethics and safety committees responsible for auditing algorithms, assessing risks, and ensuring compliance with legal and ethical norms. These committees should implement technical safeguards—such as bias detection tools, explainability modules, and robustness testing—to mitigate potential harms before deployment.
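As an illustration of what such a safeguard might compute in practice, the sketch below implements a demographic-parity check, one common fairness metric. The paper does not prescribe a specific metric, so this choice and the toy data are assumptions.

```python
import numpy as np

def demographic_parity_gap(predictions, group):
    """Difference in positive-prediction rates between two groups.
    `predictions` are binary model outputs; `group` marks membership
    (0 or 1). A gap near 0 suggests parity; a large gap is a signal
    for an ethics and safety committee to investigate further."""
    predictions, group = np.asarray(predictions), np.asarray(group)
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: an 80% positive rate for group 0 vs. 40% for group 1.
print(demographic_parity_gap([1, 1, 1, 1, 0, 1, 1, 0, 0, 0],
                             [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]))  # 0.4
```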
The third pillar involves strengthening technical defenses. The authors advocate for the development of a comprehensive AI security standardization system, covering data security, model integrity, system resilience, and evaluation methodologies. These standards should be piloted in leading application areas—such as smart cities, intelligent manufacturing, and digital healthcare—to validate their effectiveness and refine implementation strategies.
They also recommend increased investment in AI security research and development. This includes funding for technologies that detect and defend against adversarial attacks, as well as for secure multi-party computation, differential privacy, and federated learning. Additionally, the establishment of national AI security testing platforms would enable independent verification of AI products and services, enhancing transparency and consumer confidence.
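Among the techniques listed, differential privacy is the simplest to sketch. The snippet below shows the textbook Laplace mechanism for releasing a count under a privacy budget epsilon; it is a generic illustration of the concept, not an implementation described in the paper.

```python
import numpy as np

def private_count(records, epsilon=1.0):
    """Release a count under epsilon-differential privacy via the
    Laplace mechanism. A counting query has sensitivity 1 (adding or
    removing one record changes the count by at most 1), so noise
    drawn from Laplace(1/epsilon) provides the formal guarantee."""
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy guarantee.
print(private_count(range(1000), epsilon=0.5))  # roughly 1000, +/- a few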
The fourth and final pillar calls for greater international cooperation. While geopolitical tensions persist, the authors believe that collaboration on AI safety and governance remains possible and necessary. They encourage China to participate actively in global AI standard-setting bodies, such as the International Telecommunication Union (ITU) and the Institute of Electrical and Electronics Engineers (IEEE), to ensure that Chinese perspectives are represented.
Moreover, they suggest promoting bilateral and multilateral dialogues on AI ethics, data governance, and cybersecurity. By sharing China’s experiences in deploying AI for public services—such as pandemic response, urban management, and poverty alleviation—the country can build trust and foster mutual understanding with other nations. Strategic partnerships with emerging economies could also help establish alternative innovation networks that are less susceptible to geopolitical disruption.
In conclusion, the paper underscores that the success of China’s AI New Infra initiative hinges not only on technological advancement but also on the ability to manage associated risks responsibly. As AI becomes increasingly embedded in the fabric of society, the stakes for security, reliability, and ethical integrity grow ever higher. The framework proposed by Wei Wei, Niu Jinhang, and Jing Huiyun offers a pragmatic and forward-looking blueprint for navigating this complex landscape.
Their analysis serves as a critical reminder that technological progress must be accompanied by parallel advancements in governance, education, and international relations. Only through a balanced and integrated approach can China realize the full potential of AI while safeguarding its economic stability, national security, and social well-being.
The insights from this study are not only relevant to policymakers and industry leaders in China but also offer valuable lessons for other nations grappling with the dual challenges of innovation and regulation in the age of artificial intelligence. As the world moves toward an increasingly intelligent future, the principles of security, accountability, and inclusivity must remain at the forefront of technological development.
Wei Wei, Niu Jinhang, and Jing Huiyun (Security Research Institute, China Academy of Information and Communications Technology), Information and Communications Technology and Policy, DOI: 10.12267/j.issn.2096-5931.2021.05.003