AI Security Framework Proposed Amid Rising Cyber Threats
As artificial intelligence (AI) becomes increasingly embedded in critical sectors such as autonomous vehicles, smart cities, and financial services, concerns over its security vulnerabilities are intensifying. A recent study published in Information and Communications Technology and Policy outlines a comprehensive security framework designed to address the growing risks associated with AI deployment across industries.
The research, conducted by Ning Tingyong, Xiong Jie, and Hu Yongbo from Inesa Intelligent Tech Inc. in Shanghai, highlights the dual nature of AI advancement: while it drives innovation and efficiency, it simultaneously exposes systems to novel cyber threats that traditional security models are ill-equipped to handle. The paper, titled "Research on Security Threats of AI Application Landing," presents a structured approach to securing AI throughout its lifecycle, emphasizing the need for proactive risk management in both training and operational environments.
The authors argue that the rapid adoption of AI technologies has outpaced the development of robust security protocols. From facial recognition systems susceptible to spoofing attacks to self-driving cars making fatal errors due to adversarial inputs, real-world incidents underscore the urgency of establishing standardized safeguards. “AI security is not an afterthought—it must be integrated from the outset,” said Ning, lead architect at Inesa’s technology division and one of the primary contributors to the study.
The team’s analysis begins with an assessment of the current threat landscape. Drawing on data from industry reports and academic literature, they note a significant rise in AI-related security incidents over the past five years. Sectors including internet services, biometrics, cybersecurity, and autonomous systems have reported increasing vulnerabilities, many stemming from flaws in machine learning models or compromised training data. According to the researchers, more than 60% of widely used machine learning models contain at least one exploitable vulnerability, often due to poor coding practices, insufficient validation, or hidden backdoors.
One of the most alarming findings centers on adversarial attacks: subtle manipulations of input data that can deceive AI systems into making incorrect decisions. For instance, adding imperceptible noise to an image can cause a deep learning model to misclassify a stop sign as a speed limit sign, posing serious risks in autonomous driving. These attacks exploit the opacity of modern models and their reliance on pattern recognition without contextual understanding.
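To make the mechanics concrete, the sketch below demonstrates the idea on a deliberately tiny logistic-regression model rather than the deep networks the study discusses; the fast gradient sign method (FGSM) used here is a standard construction from the adversarial-examples literature, and the weights, input, and perturbation budget are all arbitrary illustrative values.

```python
import numpy as np

# Toy logistic-regression "model" with fixed weights (a stand-in for
# a deep classifier; purely illustrative).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """P(class = 1 | x) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model classifies correctly as class 1.
x = np.array([0.9, 0.2, 0.4])
p = predict_proba(x)                  # ~0.69 -> class 1

# For true label y = 1, the gradient of the logistic loss with respect
# to the *input* is (p - 1) * w; FGSM steps in the sign of that gradient.
grad_x = (p - 1.0) * w
eps = 0.3                             # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(predict_proba(x))               # ~0.69: class 1
print(predict_proba(x_adv))           # ~0.44: flipped to class 0
print(np.max(np.abs(x_adv - x)))      # perturbation never exceeds eps
```

No feature moves by more than 0.3, yet the decision flips, which is precisely the failure mode the authors warn about in the stop-sign scenario.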
“The black-box nature of many AI models makes them particularly vulnerable,” explained Xiong Jie, a senior urban planner involved in smart city initiatives at Inesa. “When decision-making processes are opaque, it becomes difficult to trace how a model arrived at a conclusion, let alone detect malicious interference.” She emphasized that transparency and explainability should be core components of any trustworthy AI system, especially when deployed in public infrastructure or government services.
To counter these challenges, the researchers propose a four-phase AI security lifecycle framework inspired by established models such as the NIST Cybersecurity Framework and Gartner’s Adaptive Security Architecture. This approach provides a systematic methodology for identifying, protecting, detecting, and responding to threats throughout the AI deployment pipeline.
The first phase, identification, involves a thorough inventory of all AI assets—including models, datasets, cloud platforms, and third-party vendors—followed by threat modeling and risk assessment. By mapping potential attack vectors and evaluating the impact of various breach scenarios, organizations can prioritize their security investments effectively. The authors stress that asset management should not be a one-time exercise but an ongoing process, given the dynamic nature of AI systems and their dependencies.
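What such an inventory looks like in practice is organization-specific; the following sketch is one hypothetical way to record AI assets and rank them with a simple likelihood-times-impact heuristic. The fields, scores, and asset names are invented for illustration and are not drawn from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One entry in an AI asset inventory (fields are illustrative)."""
    name: str
    kind: str                  # "model", "dataset", "platform", "vendor"
    owner: str
    exposure: int              # 1 (internal only) .. 5 (public-facing)
    impact: int                # 1 (low) .. 5 (safety-critical)
    known_issues: list = field(default_factory=list)

    def risk_score(self) -> int:
        # Simple likelihood-x-impact heuristic for prioritization.
        return self.exposure * self.impact + 2 * len(self.known_issues)

inventory = [
    AIAsset("fraud-model-v3", "model", "risk-team", exposure=4, impact=5),
    AIAsset("training-set-2021Q2", "dataset", "data-eng", exposure=2, impact=4,
            known_issues=["unvalidated third-party labels"]),
]

# Review the riskiest assets first, and re-run as the inventory changes,
# in line with the authors' point that this is an ongoing process.
for asset in sorted(inventory, key=lambda a: a.risk_score(), reverse=True):
    print(asset.name, asset.risk_score())
```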
In the protection phase, the focus shifts to implementing preventive controls. This includes strengthening model resilience against evasion and poisoning attacks, enforcing secure development practices, and embedding privacy-preserving techniques such as differential privacy and federated learning. The researchers advocate for a “security-by-design” mindset, where safeguards are integrated during the model development stage rather than retrofitted later.
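Differential privacy, one of the techniques the authors name, can be illustrated with the textbook Laplace mechanism: noise calibrated to a query's sensitivity is added to an aggregate statistic before release. The sketch below is a generic construction, not code from the study, and the epsilon values are arbitrary.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism. A counting query has sensitivity 1: adding or
    removing one record changes the true count by at most 1."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 45, 29, 61, 38, 52]
# The published statistic no longer reveals whether any single
# individual's record was present in the dataset.
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))
```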
Model hardening, a key component of this phase, involves techniques like adversarial training—where models are exposed to perturbed inputs during training to improve robustness—and input sanitization to filter out potentially malicious data. Additionally, access controls, encryption, and secure APIs help protect the integrity of both the model and the data it processes.
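A minimal sketch of adversarial training, again on a toy logistic model rather than a production network, shows the basic loop: at each step, worst-case perturbations are crafted against the current parameters and folded back into the training batch. The hyperparameters are illustrative, and a real system would pair this with the input sanitization and access controls mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy binary data: the label follows the sign of the first feature.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)

w, b, lr, eps = np.zeros(3), 0.0, 0.1, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    p = sigmoid(X @ w + b)
    # Craft FGSM perturbations against the current model and train on
    # them alongside the clean batch (the essence of adversarial training).
    grad_X = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_X)
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

# Evaluate on FGSM-perturbed inputs crafted against the final model.
p = sigmoid(X @ w + b)
X_test_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
acc = np.mean((sigmoid(X_test_adv @ w + b) > 0.5) == (y == 1))
print(f"accuracy under attack: {acc:.2f}")
```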
The detection phase is centered on continuous monitoring and anomaly detection. Given that AI systems operate in complex, high-dimensional spaces, traditional rule-based intrusion detection methods may fail to identify subtle deviations indicative of an attack. Instead, the authors recommend deploying behavioral analytics and automated threat detection tools capable of flagging unusual patterns in model queries, inference results, or system performance metrics.
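A very simple form of such behavioral monitoring is a rolling statistical baseline over a per-query metric such as confidence or latency; the sketch below flags large deviations from that baseline. Production systems would use far richer analytics, and the window and threshold values here are arbitrary.

```python
from collections import deque
import math

class QueryMonitor:
    """Flags observations that deviate sharply from the recent
    baseline (a stand-in for richer behavioral analytics)."""
    def __init__(self, window=500, threshold=4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        flagged = False
        if len(self.history) >= 30:        # wait for a stable baseline
            mean = sum(self.history) / len(self.history)
            var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9
            flagged = abs(value - mean) / std > self.threshold
        self.history.append(value)
        return flagged

monitor = QueryMonitor()
# Feed e.g. per-query confidence scores; a burst of outliers may
# indicate probing, extraction attempts, or drifting inputs.
for score in [0.91, 0.89, 0.93] * 20 + [0.12]:
    if monitor.observe(score):
        print("anomalous query score:", score)
```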
Regular penetration testing and red team exercises are also encouraged to simulate real-world attack scenarios and evaluate the effectiveness of existing defenses. “You can’t defend what you can’t see,” noted Hu Yongbo, deputy general manager at Inesa and co-author of the study. “Continuous monitoring allows organizations to detect anomalies early, before they escalate into full-blown incidents.”
Finally, the response phase prepares organizations for inevitable security breaches. This includes establishing incident response plans, conducting post-incident forensics, and improving organizational policies to prevent recurrence. The researchers emphasize the importance of cross-functional coordination between data scientists, security teams, legal advisors, and business leaders during crisis situations.
A notable aspect of the proposed framework is its emphasis on multi-model architectures for critical applications. Rather than relying on a single AI model, the authors suggest deploying ensembles of diverse models to reduce the risk of systemic failure. If one model behaves abnormally or is compromised, others can serve as checks and balances, ensuring more reliable decision-making.
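In code, the redundancy idea might look like the quorum check sketched below, where a prediction is acted on only if enough independently built models agree, and disagreement is escalated. The stand-in model functions and labels are invented for illustration.

```python
from collections import Counter

def redundant_predict(models, x, min_agreement=2):
    """Query several independently built models and act only when a
    quorum agrees; otherwise escalate (illustrative sketch)."""
    votes = Counter(m(x) for m in models)
    label, count = votes.most_common(1)[0]
    if count >= min_agreement:
        return label
    return "ESCALATE_TO_HUMAN"   # disagreement may signal compromise

# Stand-in perception models; in practice these would differ in
# architecture, training data, and even sensor modality.
model_a = lambda x: "stop_sign"
model_b = lambda x: "stop_sign"
model_c = lambda x: "speed_limit"   # fooled or faulty

print(redundant_predict([model_a, model_b, model_c], x=None))  # stop_sign
```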
This redundancy principle was illustrated through a case study involving Tesla’s Autopilot system, which has been involved in several high-profile accidents linked to sensor misinterpretation and inadequate training data coverage. The researchers pointed out that had multiple independent perception models been used in parallel, some of these errors might have been caught before leading to dangerous outcomes.
Beyond technical solutions, the paper also addresses regulatory and standardization efforts underway in China and globally. Over the past few years, institutions such as the China Academy of Information and Communications Technology (CAICT) have launched third-party evaluation programs to assess AI systems for reliability, fairness, and security. Standards like ITU-T F.748.11 (2020) for AI chip benchmarking and JR/T 0221-2021 for financial AI algorithm evaluation reflect a growing consensus on the need for measurable criteria.
“These standards provide much-needed guidance for enterprises navigating the AI landscape,” said Ning. “They help establish baselines for performance and safety, enabling better vendor selection and reducing market fragmentation.”
However, the authors caution that compliance with standards alone does not guarantee security. Organizations must go beyond checkbox exercises and cultivate a culture of security awareness across all levels—from executives to developers. Regular training, clear accountability structures, and transparent communication channels are essential for sustaining long-term resilience.
Another critical consideration is supply chain security. As AI systems increasingly rely on pre-trained models, open-source libraries, and cloud-based infrastructure, the attack surface expands beyond internal boundaries. A compromised dependency or a malicious update could introduce vulnerabilities that propagate across multiple deployments.
To mitigate this risk, the researchers recommend rigorous vetting of third-party components, software bill of materials (SBOM) tracking, and runtime integrity checks. They also highlight the importance of secure model deployment pipelines, where every change undergoes automated testing and approval workflows before reaching production.
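One concrete runtime integrity check is to pin cryptographic digests of model artifacts and verify them before loading, as sketched below; in a real pipeline the pinned hashes would come from a signed manifest produced by the build system. The file name and digest shown are placeholders.

```python
import hashlib
from pathlib import Path

# Expected digests would come from a signed manifest produced by the
# build pipeline (the path and hash here are placeholders).
PINNED = {
    "models/classifier-v7.onnx":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: str) -> bool:
    """Recompute a file's SHA-256 and compare it to the pinned value
    before loading the artifact into the serving process."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == PINNED.get(path)

# Example gate in a deployment pipeline:
# if not verify_artifact("models/classifier-v7.onnx"):
#     raise RuntimeError("model artifact failed integrity check")
```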
Data security remains a cornerstone of the proposed framework. The study details how attackers can exploit weaknesses in data collection, storage, and processing stages. For example, data poisoning attacks involve injecting malicious samples into training datasets to skew model behavior, while membership inference attacks allow adversaries to determine whether specific individuals’ data was used in training—posing serious privacy risks.
In response, the authors advocate for strict data governance policies, including data minimization, anonymization, and role-based access controls. They also support the use of synthetic data generation and homomorphic encryption in sensitive applications where raw data exposure must be avoided.
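As one illustration of such governance controls, direct identifiers can be pseudonymized with a keyed hash before data enters the training pipeline, so records remain joinable without exposing identities. The sketch below is a generic technique, not the authors' method, and the key handling is deliberately simplified; production systems would hold the key in a secrets manager.

```python
import hmac, hashlib

SECRET_KEY = b"rotate-me-and-store-in-a-vault"   # placeholder key

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash so records can
    still be joined, but identities cannot be recovered without the
    key (unlike plain hashing, which invites dictionary attacks)."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "spend": 120.5}
record["user_id"] = pseudonymize(record["user_id"])
print(record)   # the identifier is no longer directly attributable
```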
For inference-time protection, the paper recommends implementing query rate limiting, input validation, and output consistency checks to prevent model extraction and evasion attacks. Model extraction, where attackers repeatedly query a system to reverse-engineer its parameters, threatens intellectual property and enables the creation of adversarial clones.
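Query rate limiting, the first of these controls, is commonly implemented with a token bucket per client, as in the sketch below; the rate and burst capacity are illustrative values, and a real deployment would combine this with the input validation and output consistency checks the paper recommends.

```python
import time

class TokenBucket:
    """Per-client token bucket: sustained high-volume querying, a
    hallmark of model-extraction attempts, gets throttled."""
    def __init__(self, rate=5.0, capacity=20):
        self.rate, self.capacity = rate, capacity   # tokens/sec, burst size
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=20)
allowed = sum(bucket.allow() for _ in range(100))
print(f"{allowed} of 100 burst queries served")   # ~20; the rest are rejected
```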
The researchers further stress the importance of human oversight in high-stakes decisions. Even in highly automated systems, maintaining a human-in-the-loop mechanism ensures accountability and allows intervention when confidence levels fall below acceptable thresholds. This hybrid approach balances automation with control, particularly in domains like healthcare, law enforcement, and transportation.
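A human-in-the-loop gate can be as simple as routing on model confidence, as the hypothetical sketch below shows; the threshold is an arbitrary illustrative value and would in practice be calibrated to the cost of errors in the specific domain.

```python
def decide(probabilities: dict, threshold: float = 0.85):
    """Act autonomously only when the model is confident; otherwise
    defer to a human reviewer (threshold is illustrative)."""
    label = max(probabilities, key=probabilities.get)
    if probabilities[label] >= threshold:
        return ("AUTO", label)
    return ("HUMAN_REVIEW", label)

print(decide({"approve": 0.97, "deny": 0.03}))   # ('AUTO', 'approve')
print(decide({"approve": 0.55, "deny": 0.45}))   # ('HUMAN_REVIEW', 'approve')
```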
Looking ahead, the authors anticipate that AI will continue to reshape the cybersecurity landscape—both as a target and as a tool. On one hand, AI-powered systems are becoming prime targets for cybercriminals seeking to manipulate decisions or steal sensitive information. On the other hand, AI itself is being leveraged to enhance threat detection, automate incident response, and predict emerging attack patterns.
This duality underscores the need for a balanced perspective: AI is neither inherently secure nor inherently dangerous. Its safety depends on how it is designed, deployed, and maintained. The framework proposed by Ning, Xiong, and Hu offers a pragmatic roadmap for achieving trustworthy AI—one that aligns technical rigor with organizational discipline and regulatory compliance.
Industry experts have welcomed the study as a timely contribution to the evolving discourse on AI safety. “What sets this work apart is its practical orientation,” said a senior AI ethics consultant at a leading tech firm who reviewed the paper independently. “It doesn’t just diagnose problems—it provides actionable steps that organizations can implement immediately.”
As AI adoption accelerates worldwide, the lessons drawn from this research could serve as a foundation for national and international standards. The integration of AI into critical infrastructure demands nothing less than a holistic, lifecycle-oriented approach to security—one that anticipates threats, adapts to new challenges, and maintains public trust.
In conclusion, the path toward secure and reliable AI requires collaboration across disciplines, sectors, and borders. While technological innovation will continue to push boundaries, it must be matched by equally rigorous efforts in governance, accountability, and resilience engineering. The framework outlined by the Inesa team represents a significant step forward in that journey.
Ning Tingyong, Xiong Jie, and Hu Yongbo (Inesa Intelligent Tech Inc.), "Research on Security Threats of AI Application Landing," Information and Communications Technology and Policy, 2021. doi: 10.12267/j.issn.2096-5931.2021.08.010