The Evolution and Future of Intelligent Control: A 50-Year Journey from Vision to Reality
In the rapidly evolving landscape of modern technology, few fields have undergone as profound a transformation as intelligent control. Over the past half-century, what began as an ambitious theoretical framework has matured into a cornerstone of automation, robotics, and artificial intelligence (AI) integration across industries ranging from manufacturing to aerospace. At the heart of this evolution lies a pivotal moment in 1971 when Professor K.S. Fu introduced the term “intelligent control” in a seminal paper published in the IEEE Transactions on Automatic Control. This concise yet visionary article not only named a new discipline but also laid the conceptual foundation for merging AI with classical control theory—a fusion that would redefine how machines perceive, decide, and act.
Fifty years later, Fei-Yue Wang, a leading figure in systems and control science at the Chinese Academy of Sciences, reflects on this transformative journey in a comprehensive retrospective published in Acta Automatica Sinica. His work revisits the intellectual lineage of intelligent control, tracing its roots back to ancient philosophical inquiries about knowledge and cognition, through the birth of cybernetics and automata theory, and forward into today’s era of parallel intelligence and knowledge automation. The article is more than a historical account; it is a forward-looking analysis grounded in decades of interdisciplinary research and practical application.
Wang’s narrative begins by situating intelligent control within the broader context of human intellectual history. He draws connections between early Greek philosophers such as Thales, Heraclitus, and Parmenides—who debated Being (permanence) versus Becoming (change)—and later developments in logic, computation, and machine reasoning. These metaphysical questions, he argues, are not distant relics but foundational to understanding the dual trajectories of symbolic (logic-based) and connectionist (neural network–based) approaches in AI. The tension between formal deduction and adaptive learning continues to shape the design of intelligent systems today.
The emergence of intelligent control as a distinct field was catalyzed by two towering figures: Fu and his colleague George N. Saridis. While Fu provided the initial spark with his definition of intelligent control as the intersection of AI and automatic control, Saridis expanded the vision by introducing hierarchical architectures rooted in information theory and decision-making under uncertainty. Their collaborative environment at Purdue University during the 1970s and 1980s became a crucible for innovation, fostering a generation of researchers who sought to endow machines with capabilities once thought exclusive to humans—adaptation, self-organization, and goal-directed behavior.
Saridis’s contribution, particularly his development of the “organize-coordinate-execute” three-layer model, offered one of the first systematic frameworks for structuring intelligent systems. By applying entropy as a measure of uncertainty across levels of control hierarchy, he established a principled way to balance complexity and performance. This approach influenced applications in robotics, space exploration, and industrial automation, where robustness and adaptability are paramount. Wang, who worked closely with Saridis during his doctoral studies, credits him not only with shaping the theoretical underpinnings of intelligent control but also with creating institutional momentum through conferences, journals, and funding initiatives that elevated the field’s visibility and credibility.
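The role entropy plays in such a hierarchy can be illustrated with a toy sketch. The three layers and their action distributions below are purely illustrative (not taken from Saridis’s papers): Shannon entropy quantifies how much uncertainty each layer must resolve, and it is highest at the organization level, where choices are broadest, and lowest at the execution level, where actions are nearly deterministic.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative action distributions for the organize-coordinate-execute
# hierarchy: higher layers face broader, more uncertain choices, while
# lower layers act with near-deterministic precision.
layers = {
    "organize":   [0.25, 0.25, 0.25, 0.25],  # 4 equally likely plans
    "coordinate": [0.7, 0.2, 0.1],           # mostly one schedule
    "execute":    [0.99, 0.01],              # near-deterministic motion
}

for name, p in layers.items():
    print(f"{name:10s} H = {shannon_entropy(p):.3f} bits")
```

The monotonic drop in entropy from top to bottom mirrors the principle often summarized as “increasing precision with decreasing intelligence.”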
Despite these advances, the path of intelligent control was neither linear nor uninterrupted. The field experienced periods of stagnation, especially during the so-called “AI winters,” when overpromising and underdelivering led to reduced funding and skepticism from both academia and industry. One notable setback occurred in the late 1960s when Marvin Minsky and Seymour Papert’s critique of perceptrons cast doubt on the viability of neural networks. Their mathematical demonstration that single-layer networks could not solve linearly inseparable problems such as XOR temporarily derailed interest in connectionist models. It wasn’t until backpropagation revived multi-layer training in the 1980s and deep learning took hold in the 2000s that this trajectory regained momentum.
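Minsky and Papert’s point can be demonstrated directly. In the minimal sketch below (the learning rate and epoch count are arbitrary choices), a classic single-layer perceptron trained on the linearly separable AND function converges to perfect accuracy, while on XOR—whose four points no single line can separate—it can never classify all four inputs correctly.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic single-layer perceptron with a step activation."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    # Fraction of samples classified correctly after training.
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return correct / len(samples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]   # linearly separable
XOR = [(x, x[0] ^ x[1]) for x in inputs]   # not linearly separable

print("AND accuracy:", train_perceptron(AND))  # converges to 1.0
print("XOR accuracy:", train_perceptron(XOR))  # stuck below 1.0
```

Adding a single hidden layer removes the limitation, which is exactly what backpropagation later made practical to train.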
Parallel to these technical challenges were deeper epistemological debates about what constitutes “intelligence” in machines. Is it sufficient for a system to mimic human behavior, or must it possess internal representations akin to cognition? These questions echo earlier distinctions made by pioneers like Norbert Wiener, Warren McCulloch, and Walter Pitts, whose work on feedback mechanisms and neural modeling gave rise to cybernetics. Wiener’s concept of purposeful behavior governed by negative feedback loops anticipated many ideas now central to reinforcement learning and autonomous agents.
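Wiener’s core mechanism—purposeful behavior via negative feedback—can be sketched in a few lines. The first-order plant and the gain value here are illustrative assumptions, not drawn from Wiener’s work: the system repeatedly measures its deviation from a goal and corrects in proportion to the error, with the opposite sign.

```python
def negative_feedback(setpoint, steps=50, gain=0.5):
    """Proportional control: correct in proportion to the error,
    with the opposite sign -- the essence of negative feedback."""
    state = 0.0
    for _ in range(steps):
        error = setpoint - state  # measure the deviation from the goal
        state += gain * error     # act to reduce it
    return state

# The state converges to the goal: each step halves the remaining error.
print(negative_feedback(10.0))
```

The same sense-compare-correct loop, with a learned rather than fixed correction rule, is recognizable in modern reinforcement learning agents.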
However, personal conflicts and disciplinary silos hindered collaboration. The abrupt severing of ties between Wiener and his colleagues McCulloch and Pitts, reportedly due to interpersonal tensions, disrupted a promising line of inquiry into biologically inspired computing. This vacuum allowed other paradigms—particularly those championed by John McCarthy and Allen Newell—to dominate the AI agenda. McCarthy’s preference for symbolic logic and formal languages positioned AI as a branch of mathematics rather than physiology, steering research toward rule-based expert systems and away from neuromorphic designs for several decades.
It was against this backdrop that Fei-Yue Wang began formulating his own contributions to intelligent control. Starting in the 1980s, his work focused on bridging abstract architectures with implementable algorithms. Drawing from Petri nets, game theory, and distributed control, he developed models capable of coordinating complex tasks in dynamic environments. His efforts culminated in the creation of coordination structures and dispatchers that enabled real-time reconfiguration of robotic systems—an essential capability for missions in unpredictable settings such as outer space or disaster zones.
A key insight from Wang’s research was the recognition that traditional control methods, while effective for well-defined linear systems, falter when confronted with open-ended, socially embedded processes. Traffic management, healthcare delivery, and urban planning involve human actors whose behaviors cannot be fully captured by differential equations alone. To address this limitation, Wang pioneered the concept of Cyber-Physical-Social Systems (CPSS), which integrates physical devices, digital models, and social dynamics into a unified analytical framework.
This shift from purely technical control to socio-cyber-physical governance reached its most mature expression in the development of the ACP methodology—Artificial Societies, Computational Experiments, and Parallel Execution. Unlike conventional simulation, which seeks to replicate reality as accurately as possible, ACP embraces divergence. It posits that multiple virtual worlds can coexist alongside the real one, each serving different purposes: training AI agents, testing policies, or exploring counterfactual scenarios. When synchronized through data exchange and adaptive feedback, these parallel systems enable proactive control rather than reactive regulation.
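The ACP loop can be caricatured in code. Everything below is an illustrative toy, not Wang’s actual implementation: a clean virtual model stands in for the artificial system, candidate actions are scored there (computational experiments), the best one is executed on a noisy “real” system (parallel execution), and the virtual state is then re-synchronized from real observations.

```python
import random

def real_step(state, action):
    """The 'real' system: a noisy response to a control action."""
    return state + action + random.uniform(-0.5, 0.5)

def virtual_step(state, action):
    """The artificial system: a clean model used for experiments."""
    return state + action

def parallel_control(target, steps=20):
    """ACP-style loop (illustrative): experiment virtually, execute the
    winning action in reality, then re-synchronize the virtual state."""
    real = virtual = 0.0
    candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
    for _ in range(steps):
        # Computational experiments: score each candidate action virtually.
        best = min(candidates,
                   key=lambda a: abs(target - virtual_step(virtual, a)))
        real = real_step(real, best)   # parallel execution on the real system
        virtual = real                 # synchronize via observed data
    return real

random.seed(0)
print(parallel_control(3.0))  # settles near the target despite noise
```

The key departure from conventional simulation is the last line of the loop: the virtual world is not validated once and discarded, but continuously corrected by data from the real one.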
One of the most compelling applications of ACP has been in intelligent transportation. In cities like Beijing and Qingdao, digital twins of traffic networks run continuously in the cloud, simulating congestion patterns and evaluating signal timing strategies before they are deployed on actual roads. By running thousands of computational experiments daily, operators can anticipate bottlenecks, optimize routing, and even simulate emergency evacuations without disrupting real-world operations. This represents a paradigm shift—from controlling isolated components to managing entire ecosystems.
Moreover, the implications of parallel control extend beyond engineering into philosophy and ethics. As AI assumes greater responsibility in decision-making, questions arise about transparency, accountability, and value alignment. Wang proposes expanding the classical philosophical triad of Being and Becoming to include a third dimension: Believing. This addition acknowledges that intelligent systems do not merely reflect reality or evolve within it—they also project beliefs about future states and act upon them. Just as humans operate based on trust in institutions, models, and predictions, so too must machines function within a framework of justified belief.
This “3B” philosophy—Being, Becoming, Believing—forms the basis of what Wang calls parallel epistemology. It suggests that truth is no longer confined to correspondence with facts but emerges from interaction between real and artificial worlds. In this view, knowledge is not static but generated through continuous dialogue between observation and intervention. For instance, a smart grid does not simply respond to demand fluctuations; it anticipates them using predictive analytics and adjusts supply accordingly, thereby influencing the very conditions it measures.
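The smart-grid example admits a small numerical sketch. The demand curve and both supply policies below are illustrative assumptions: a reactive policy supplies tomorrow what was demanded today and is always one step late, while an anticipatory policy extrapolates the recent trend and tracks the curve more closely.

```python
def mismatch(demand, supply_rule):
    """Total |supply - demand| over a horizon, for a given supply policy."""
    supply = demand[0]
    total = 0.0
    for t in range(1, len(demand)):
        supply = supply_rule(demand, t, supply)
        total += abs(supply - demand[t])
    return total

# A rising-then-falling daily demand curve (illustrative numbers).
demand = [10, 12, 15, 19, 24, 30, 27, 22, 16, 11]

# Reactive: supply what was demanded last step (always lagging).
reactive = lambda d, t, s: d[t - 1]

# Anticipatory: extrapolate the recent trend before it materializes.
predictive = lambda d, t, s: d[t - 1] + (d[t - 1] - d[t - 2] if t >= 2 else 0)

print("reactive mismatch:  ", mismatch(demand, reactive))
print("predictive mismatch:", mismatch(demand, predictive))
```

The predictive policy’s lower cumulative mismatch is the arithmetic behind the claim: by acting on a belief about the future, the controller changes the conditions it will later measure.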
Such capabilities point toward a future where automation transcends mechanization and enters the realm of knowledge creation. Wang terms this next stage “knowledge automation,” wherein AI systems not only execute predefined rules but also generate, validate, and refine knowledge autonomously. This goes far beyond current notions of machine learning, envisioning systems that conduct scientific discovery, formulate hypotheses, and test theories—all without direct human oversight.
Realizing this vision, however, requires more than better algorithms. It demands new infrastructures, standards, and governance models. Cloud-edge computing architectures must support seamless synchronization between physical plants and virtual counterparts. Data sovereignty, privacy, and interoperability become critical concerns, especially as CPSS platforms integrate personal, corporate, and governmental information flows. Furthermore, regulatory bodies will need to adapt to a world where decisions emerge from distributed, opaque processes rather than centralized authorities.
China has taken significant steps in this direction. With the release of the New Generation Artificial Intelligence Development Plan in 2017, the government committed substantial resources to advancing intelligent control technologies. Initiatives in smart manufacturing, autonomous vehicles, and city-wide AI deployment have accelerated R&D and created fertile ground for experimentation. Institutions such as the State Key Laboratory of Management and Control for Complex Systems, where Wang leads research efforts, serve as hubs for cross-sector collaboration between academia, industry, and public agencies.
Yet global cooperation remains essential. Challenges such as climate change, pandemic response, and sustainable development require coordinated action across borders. Intelligent control systems could play a vital role in optimizing energy use, managing supply chains, and monitoring environmental indicators—but only if designed with openness, equity, and resilience in mind.
Looking ahead, Wang envisions a world shaped by what he calls the “6S” paradigm: Safety, Security, Sustainability, Sensitivity, Service, and Smartness. Each S represents a domain where intelligent control can contribute to societal well-being. For example, sensitivity refers not just to sensor accuracy but to ethical responsiveness—ensuring that AI respects individual rights and cultural contexts. Service emphasizes user-centric design, moving away from one-size-fits-all solutions toward personalized, adaptive interfaces.
Ultimately, the success of intelligent control will depend not only on technological prowess but on our ability to align it with human values. As machines gain increasing autonomy, the question shifts from “Can we build it?” to “Should we?” Philosophical reflection, public engagement, and inclusive policymaking must accompany technical progress. The legacy of pioneers like Fu and Saridis reminds us that breakthroughs begin with bold ideas, but their lasting impact depends on responsible stewardship.
As the field celebrates fifty years since Fu’s landmark publication, it stands at a threshold. From learning control to parallel control, from isolated subsystems to integrated socio-technical networks, intelligent control has evolved into a discipline capable of addressing some of humanity’s most pressing challenges. And as Fei-Yue Wang articulates in his sweeping review, the journey is far from over—it is entering a new phase defined by convergence, complexity, and conscious design.
The story of intelligent control is not merely about smarter machines. It is about reimagining the relationship between humans and technology, between data and wisdom, between prediction and purpose. In navigating this frontier, the insights of past visionaries continue to illuminate the path forward—not as rigid blueprints, but as living principles guiding innovation toward a more intelligent, equitable, and sustainable world.
Fei-Yue Wang: State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences; Qingdao Academy of Intelligent Industries; School of Artificial Intelligence, University of Chinese Academy of Sciences. Acta Automatica Sinica, DOI: 10.16383/j.aas.c210400