AI-Powered Fire Control System Marks Leap in Autonomous Air Defense
In an era where warfare is rapidly evolving toward autonomy, speed, and intelligent coordination, traditional air defense systems are struggling to keep pace. Conventional fire control methods, which rely heavily on static mathematical models and predefined motion assumptions, are increasingly inadequate against modern threats such as high-maneuverability drones, swarm attacks, and time-sensitive aerial targets. Addressing this critical gap, a team of researchers from the North Automatic Control Technology Institute in Taiyuan, China, has unveiled a self-learning, data-driven fire control system that integrates deep neural networks and big-data analytics to transform how air defense platforms detect, track, and engage hostile targets.
Published in the July 2021 issue of Fire Control & Command Control, the study led by Liu Jiansheng and colleagues presents a comprehensive framework that shifts the paradigm from “model-centric” to “data-plus-model” fire control. This innovation not only enhances real-time accuracy in target state estimation but also enables continuous self-correction and autonomous decision-making—key attributes for next-generation intelligent weapon systems.
At the heart of this advancement lies the recognition that future battlefields will be dominated by unpredictable, agile, and often networked threats. Traditional fire control algorithms, built decades ago for predictable ballistic trajectories or steady-state aircraft motion, falter when confronted with erratic maneuvers or sudden changes in target behavior. Moreover, these legacy systems lack mechanisms to learn from past engagements or adapt to new operational data. The new system, by contrast, treats every engagement—whether real-world or simulated—as a learning opportunity, dynamically refining its internal models and decision logic.
The researchers’ approach is built on three core technological pillars: data-driven target state space modeling, multi-sensor fusion identification, and convolutional neural network (CNN)-based ballistic correction. Each component addresses a specific weakness in conventional fire control while synergistically reinforcing the others to create a cohesive, intelligent loop.
First, the team tackled the problem of target motion modeling. Instead of relying on a limited set of predefined kinematic equations—such as constant velocity or constant acceleration—they constructed an adaptive target state space model library. This library initially incorporates historical data and expert knowledge to represent a wide spectrum of aerial behaviors, including hovering, diving, sharp turns, and variable acceleration. Crucially, it is not static. Using a hybrid architecture that combines classical estimation models with a Deep Belief Network (DBN), the system continuously compares predictions from both pathways. Discrepancies between the two are analyzed in real time, allowing the DBN to adjust its weights and improve future predictions. Over time, the system learns which motion models are most effective under specific combat conditions, effectively evolving its understanding of target dynamics without human intervention.
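To see the shape of this mechanism, consider the simplified sketch below. A bank of classical motion models runs in parallel, and each model's weight is updated from its prediction error on the latest track point, so the library gradually favors whichever model best explains the target's current behavior. The model set, weighting rule, and numbers here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cv_predict(state, dt):
    # Constant-velocity model over state = [x, y, vx, vy].
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def ca_predict(state, dt, accel=(0.0, -1.0)):
    # Constant-acceleration variant with an illustrative fixed acceleration.
    x, y, vx, vy = state
    ax, ay = accel
    return np.array([x + vx * dt + 0.5 * ax * dt**2,
                     y + vy * dt + 0.5 * ay * dt**2,
                     vx + ax * dt, vy + ay * dt])

class AdaptiveModelLibrary:
    """Softmax weight per motion model, updated from recent prediction error."""

    def __init__(self, models, temperature=1.0):
        self.models = models
        self.log_w = {name: 0.0 for name in models}
        self.temperature = temperature

    def weights(self):
        z = np.array([self.log_w[n] for n in self.models])
        e = np.exp(z - z.max())
        return dict(zip(self.models, e / e.sum()))

    def predict(self, state, dt):
        # Fuse all model predictions, weighted by current credibility.
        w = self.weights()
        return sum(w[n] * f(state, dt) for n, f in self.models.items())

    def update(self, state, dt, observed_next):
        # Penalize each model by its squared error on the newest track point.
        for name, f in self.models.items():
            err = np.linalg.norm(f(state, dt) - observed_next)
            self.log_w[name] -= err**2 / self.temperature

lib = AdaptiveModelLibrary({"cv": cv_predict, "ca": ca_predict})
state = np.array([0.0, 100.0, 30.0, 0.0])
lib.update(state, dt=0.1, observed_next=np.array([3.1, 99.9, 30.0, -0.1]))
print(lib.weights())
```

In the paper's architecture, the Deep Belief Network would sit alongside such a library as an additional trainable predictor, with the same discrepancy-driven reweighting deciding how much to trust it at each step.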
This self-learning capability extends beyond mere tracking. The system also archives successful prediction strategies as structured knowledge, enriching its internal database for future reference. Offline training further enhances this process, using sparse learning techniques on vast repositories of historical engagement data to pre-train the neural components before deployment. The result is a fire control system that doesn’t just react—it anticipates, adapts, and remembers.
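The study does not spell out its sparse learning procedure. One common realization, shown below as a hedged sketch with made-up tensor shapes, is an L1-penalized autoencoder that pre-trains feature layers on archived engagement records before supervised fine-tuning.

```python
import torch
import torch.nn as nn

# Stand-in for archived engagement records: 4096 records, 32 features each.
records = torch.randn(4096, 32)
encoder = nn.Sequential(nn.Linear(32, 64), nn.Sigmoid())
decoder = nn.Linear(64, 32)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(10):
    for batch in records.split(256):
        code = encoder(batch)
        # Reconstruction loss plus an L1 penalty on activations, so each
        # record excites only a few hidden units (the "sparse" part).
        loss = nn.functional.mse_loss(decoder(code), batch) + 1e-3 * code.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
# The pre-trained encoder weights then initialize the deployed networks.
```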
The second pillar addresses sensor fusion—a perennial challenge in modern air defense. Platforms today are equipped with heterogeneous sensors: radar for long-range detection, electro-optical/infrared (EO/IR) systems for precision tracking, and sometimes RF detection for electronic signatures. Each sensor operates in a different domain, with varying update rates, noise profiles, and spatial resolutions. Traditional fusion methods often force these disparate data streams into a common framework too early, leading to information loss or conflicting interpretations.
Liu and his team adopted a more nuanced strategy. They first extract high-level features from each sensor modality using specialized neural networks tailored to the data type—convolutional layers for image-based EO/IR inputs, recurrent structures for time-series radar returns, and so on. These networks output probabilistic “beliefs” about the target’s class (e.g., drone, fighter jet, cruise missile) and behavioral state. These beliefs are then treated as evidential inputs within a modified Dempster-Shafer (D-S) evidence theory framework at the decision layer.
Unlike simple averaging or voting schemes, D-S theory allows the system to quantify uncertainty and resolve conflicts between sensors. For instance, if radar suggests a high-speed jet while EO/IR indicates a slow-moving UAV, the fusion engine can assess the reliability of each source based on environmental conditions (e.g., fog degrading EO performance) and historical sensor accuracy. It then computes a combined belief that reflects both the evidence and its confidence level. This layered fusion approach significantly improves target identification accuracy, especially in cluttered or deceptive environments—a critical advantage when seconds count.
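The classical Dempster combination rule at the core of this approach is compact enough to show directly. In the sketch below, the sensor mass assignments are invented for illustration; only the combination rule itself is standard D-S theory.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to belief mass.
    Returns the combined, conflict-renormalized mass function.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass placed on contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Illustrative sensor beliefs (not from the paper): radar leans "jet",
# EO/IR leans "drone", and each reserves mass for ignorance (the full frame).
FRAME = frozenset({"drone", "jet", "missile"})
radar = {frozenset({"jet"}): 0.6, frozenset({"jet", "missile"}): 0.2, FRAME: 0.2}
eo_ir = {frozenset({"drone"}): 0.5, frozenset({"drone", "jet"}): 0.3, FRAME: 0.2}
print(dempster_combine(radar, eo_ir))
```

The "modified" framework the authors describe would additionally discount an unreliable source before combining, for example shifting most of the EO/IR mass onto the full frame of discernment when fog degrades its imagery, which is exactly the reliability weighting discussed above.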
The third and perhaps most operationally impactful innovation is the use of convolutional neural networks to correct firing solutions in real time. In conventional systems, ballistic calculations rely on physics-based models that account for variables like muzzle velocity, wind, air density, and target motion. However, these models often fail to capture complex, nonlinear interactions—especially when engaging highly maneuverable targets at close range. Small errors in initial estimates can compound into large miss distances.
The researchers reframed this problem as a high-dimensional regression task. They identified 18 key input variables—including target position, velocity, acceleration, angular rates, environmental factors, and prior correction values—that collectively influence the final miss distance (expressed as azimuth and elevation errors). A CNN was then trained on thousands of historical engagement records to learn the implicit mapping between these inputs and the required correction. Because CNNs excel at capturing spatial and temporal hierarchies in data, they can model subtle error patterns that analytical methods miss—such as the cumulative effect of crosswind on a spinning projectile over time, or the delayed response of a servo mechanism under thermal stress.
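The paper does not publish the network's architecture. As a plausibility sketch, if the 18 variables are sampled over a short time window, they form an 18-channel sequence that a small 1-D convolutional stack can map to the two correction outputs; every layer size below is an assumption.

```python
import torch
import torch.nn as nn

class BallisticCorrectionCNN(nn.Module):
    """Maps a window of fire-control variables to (azimuth, elevation) corrections."""

    def __init__(self, n_vars=18, window=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_vars, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.head = nn.Linear(64, 2)          # azimuth error, elevation error

    def forward(self, x):                     # x: (batch, 18, window)
        return self.head(self.features(x).squeeze(-1))

model = BallisticCorrectionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for historical engagement records and their recorded miss distances.
x, y = torch.randn(512, 18, 16), torch.randn(512, 2)
for step in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```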
During live operations, the CNN ingests real-time sensor data and instantly outputs refined aiming adjustments. This “self-correction” loop operates continuously, allowing the system to compensate not only for external disturbances but also for internal hardware imperfections. Over successive engagements, the network’s performance improves as it incorporates new data, effectively turning every shot into a calibration opportunity.
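Operationally, that self-correction loop has roughly the following shape, with stubbed-in sensor and servo interfaces and a stand-in for the trained network:

```python
import collections
import torch

WINDOW = 16
correction_net = torch.nn.Linear(18 * WINDOW, 2)   # stand-in for the trained CNN
history = collections.deque(maxlen=WINDOW)

def read_sensors():
    # Stub: would return the 18 fused fire-control variables for this tick.
    return torch.randn(18)

def apply_correction(d_az, d_el):
    # Stub: would forward the refined aim point to the gun-laying servos.
    print(f"aim correction: az {d_az:+.3f}, el {d_el:+.3f}")

for tick in range(100):                            # fire-control update loop
    history.append(read_sensors())
    if len(history) == WINDOW:
        window = torch.stack(list(history)).flatten()
        with torch.no_grad():                      # inference only in the loop
            d_az, d_el = correction_net(window).tolist()
        apply_correction(d_az, d_el)
```

Running inference-only inside the loop while deferring weight updates to a vetted training cycle is one way to keep the real-time path deterministic.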
Perhaps the most forward-looking aspect of the research is its integration of reinforcement learning (RL) for autonomous interception decisions. Traditional fire control systems follow rigid engagement protocols: detect, track, assign, fire. But in complex scenarios—such as defending against a drone swarm or coordinating multiple weapon platforms—static rules are insufficient. The optimal response may involve dynamic task allocation, feint maneuvers, or selective engagement based on threat priority.
Here, the team employed an inverse reinforcement learning approach. Instead of hand-crafting a reward function (which is notoriously difficult in military contexts), they used expert demonstration data—recorded decisions from seasoned operators during simulated engagements—to infer the underlying reward structure. The RL agent then trained in a simulated environment, iteratively refining its policy to maximize this inferred reward. The result is a decision engine that doesn’t just follow orders—it reasons strategically, balancing risk, resource conservation, and mission objectives.
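The paper does not name a specific IRL algorithm. The toy sketch below captures the core idea in the spirit of apprenticeship learning: infer a linear reward whose optimal behavior matches the expert's average feature usage. The features, environment, and expert statistics are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 4    # e.g., threat priority, ammo cost, time-to-intercept, risk

def rollout_features(w, n=64):
    # Stub policy: at each step, pick the candidate action that maximizes
    # the current reward estimate, and return the mean chosen feature vector.
    chosen = []
    for _ in range(n):
        options = rng.normal(size=(5, N_FEATURES))   # 5 candidate engagements
        chosen.append(options[np.argmax(options @ w)])
    return np.mean(chosen, axis=0)

# Average feature vector of expert demonstrations (stand-in values).
mu_expert = np.array([0.9, -0.2, -0.5, -0.7])

w = np.zeros(N_FEATURES)        # the reward weights to be inferred
for _ in range(200):
    # Move the reward toward features the expert exhibits more than the policy.
    w += 0.05 * (mu_expert - rollout_features(w))
print("inferred reward weights:", np.round(w, 2))
```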
This autonomous decision layer is tightly coupled with the perception and correction subsystems. Accurate target identification informs threat assessment; precise state estimation enables reliable prediction; and ballistic correction ensures lethality. Together, they form a closed-loop intelligent architecture capable of end-to-end autonomous engagement—from initial detection to post-shot assessment.
The implications extend far beyond air defense. The principles outlined in this work—data-driven adaptation, multi-modal fusion, and learning-based control—are directly applicable to other domains of autonomous warfare, including naval point defense, ground-based counter-UAS systems, and even directed-energy weapons like lasers or railguns, where thermal blooming and beam jitter introduce new layers of complexity.
Critically, the researchers emphasize that their system is designed for real-world deployment. The training dataset includes data from over ten different weapon platforms—ranging from anti-aircraft guns to missile systems—and real flight records from diverse targets, including S70 low-speed drones, S300 high-speed targets, and J-7B fighter jets. This ensures that the learned models are not just theoretically sound but operationally robust.
Moreover, the architecture supports both online and offline learning. Offline training allows for extensive pre-deployment validation using historical and simulated data, ensuring safety and reliability. Online learning enables continuous improvement in the field, with safeguards to prevent catastrophic forgetting or adversarial manipulation. This dual-mode approach strikes a crucial balance between stability and adaptability—a key requirement for military AI systems.
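The study does not detail these safeguards. A standard one is rehearsal: mixing a bounded buffer of vetted historical samples into every online update so fresh data cannot silently overwrite old competence. A minimal sketch of that pattern, with hypothetical shapes:

```python
import random
import torch
import torch.nn as nn

model = nn.Linear(18, 2)          # stand-in for any fielded network
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
replay, MAX_REPLAY = [], 10_000   # bounded buffer of vetted samples

def online_update(x_new, y_new):
    # Mix the fresh sample with a batch replayed from history, so the
    # gradient reflects past engagements as well as current conditions.
    old = random.sample(replay, min(31, len(replay)))
    xs = torch.stack([x_new] + [x for x, _ in old])
    ys = torch.stack([y_new] + [y for _, y in old])
    loss = nn.functional.mse_loss(model(xs), ys)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Keep the buffer bounded; evict a random old sample when full.
    if len(replay) >= MAX_REPLAY:
        replay.pop(random.randrange(MAX_REPLAY))
    replay.append((x_new, y_new))

online_update(torch.randn(18), torch.randn(2))
```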
From a strategic perspective, this research aligns with global trends in military AI. The U.S. Department of Defense's Unmanned Systems Integrated Roadmap 2017–2042 explicitly identifies machine learning as a foundational technology for future autonomy. Similarly, China's military-civil fusion strategy has prioritized AI integration across defense sectors. Liu and his team's work demonstrates a mature, systems-level implementation that moves beyond isolated algorithmic improvements to deliver a holistic, deployable solution.
While ethical and operational challenges remain—particularly around human oversight and fail-safe mechanisms—the technical foundation laid out in this study represents a significant leap forward. It transforms fire control from a deterministic, reactive process into an intelligent, anticipatory one. In doing so, it not only enhances lethality and survivability but also reduces the cognitive burden on human operators, allowing them to focus on higher-level command decisions.
As warfare continues its inexorable shift toward speed, complexity, and autonomy, the ability to learn, adapt, and act intelligently will define the next generation of military advantage. The self-learning fire control system developed by Liu Jiansheng, Cheng Xiaomin, Ding Shuai, Song Liqiong, and Hou Yuchen at the North Automatic Control Technology Institute stands as a compelling testament to that future—one where data doesn’t just inform decisions, but shapes the very fabric of combat systems.
Fire Control & Command Control, 2021, 46(7): 76–80. DOI: 10.3969/j.issn.1002-0640.2021.07.016.