Three Pillars of AI: Strengths, Limits, and the Path Forward
In the rapidly evolving landscape of digital innovation, artificial intelligence (AI) stands as a cornerstone of the new technological era. As nations race to harness its transformative potential, a deeper understanding of how AI works—and where it falls short—has become essential. A comprehensive analysis by Chengping Cheng, a professor at Wuhan University’s School of Economics and Management, Development Research Institute, and Research Center for Chinese Characteristic Political Economy, offers a timely and insightful exploration into the foundational mechanisms of AI and their inherent limitations.
Published in the January 2021 issue of Academics, a peer-reviewed monthly journal, Cheng’s article dissects the three core paradigms that underpin AI development: symbolicism, connectionism, and behaviorism. Each represents a distinct philosophical and technical approach to simulating intelligence, and each carries its own set of strengths and constraints. By examining these frameworks in detail, Cheng provides a roadmap for more rational, effective, and ethically grounded AI development.
The article begins with symbolicism, one of the earliest and most influential schools of thought in AI. Also known as logicism or the computer school, symbolicism rests on the idea that human intelligence can be replicated through formal logic and rule-based systems. This approach assumes that knowledge can be reduced to discrete symbols and that reasoning follows a deterministic, deductive path. The intellectual roots of symbolicism trace back to Alan Turing’s conceptualization of the Turing machine in 1936, which laid the groundwork for computational theory. In 1956, at the historic Dartmouth Conference, pioneers such as John McCarthy, Herbert Simon, and Marvin Minsky formalized the field of AI around the notion that intelligence emerges from the manipulation of physical symbols.
For decades, symbolicism dominated AI research. Its achievements include early theorem-proving programs like the Logic Theorist, which proved 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica, and later developments such as expert systems and knowledge engineering. These systems were designed to encode human expertise into rule-based frameworks, enabling machines to perform complex decision-making tasks in specialized domains.
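To make the symbolic mechanism concrete, the sketch below implements forward-chaining inference, the core loop behind rule-based expert systems. The facts and rules are invented for illustration and are not drawn from Cheng’s article or any particular system.

```python
# Minimal forward-chaining inference: the core loop of a rule-based expert system.
# Facts are atomic symbols; each rule derives a new symbol from a set of premises.
# All rules here are illustrative placeholders.

rules = [
    ({"has_fur", "barks"}, "is_dog"),
    ({"is_dog"}, "is_mammal"),
    ({"is_mammal"}, "is_animal"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly apply every rule whose premises hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fur", "barks"}))
# derives is_dog, is_mammal, is_animal in addition to the starting facts
```

The strengths and weaknesses Cheng describes are both visible in miniature here: the reasoning is fully transparent and deterministic, but the system knows nothing beyond the symbols it was given.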
However, Cheng highlights the fundamental limitations that led to symbolicism’s decline by the late 1980s. One major issue is its reliance on reductionist rationalism—the belief that complex phenomena can be broken down into simpler, logical components. While this works well in closed, rule-bound environments, it fails in the messy, ambiguous world of everyday human experience. The inability to handle “common sense” problems—such as understanding context, interpreting social cues, or making intuitive judgments—exposed the fragility of purely symbolic systems.
Moreover, symbolicism’s dependence on deductive logic introduces another layer of vulnerability. As Gödel’s incompleteness theorems demonstrate, no consistent formal system powerful enough to express arithmetic can be complete: there will always be true statements that cannot be proven within the system itself. This implies that AI systems built on rigid logical frameworks are inherently limited in their capacity to capture the full scope of human reasoning. Cheng notes that while AI may outperform humans in the speed and volume of logical operations, it remains inferior in depth and flexibility.
The rigidity of symbolic systems also leads to practical inefficiencies. To improve performance, developers must continuously expand rule sets and databases, and the resulting search spaces grow explosively. This “combinatorial explosion” makes such systems increasingly unwieldy and resource-intensive. Furthermore, attempts to overcome semantic barriers, as in machine translation, demand massive datasets, which in turn erode efficiency and scalability.
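A back-of-the-envelope calculation shows how quickly such search spaces blow up. If an exhaustive inference engine must consider every ordered chain of k rule applications drawn from n rules, the number of candidate chains grows factorially; the figures below are purely illustrative.

```python
from math import perm

# Illustrative only: ordered rule-application chains an exhaustive search faces
# with n candidate rules and chains of depth k, i.e. n! / (n - k)!.
for n in (10, 50, 100):
    for k in (3, 5):
        print(f"n={n:>3}, depth={k}: {perm(n, k):,} candidate chains")
```

Even at a modest depth of five, a hundred rules already yield billions of candidate chains, which is the combinatorial explosion in miniature.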
Perhaps most critically, symbolicism overlooks the role of embodied experience and tacit knowledge in human cognition. As the philosopher Michael Polanyi observed, “We know more than we can tell.” Much of human understanding is intuitive, context-dependent, and rooted in physical and social interaction, none of which is easily formalized or digitized. Cheng cites the example of a three-year-old child who recognizes a dog after seeing just a few images, whereas an AI system requires thousands of labeled examples. This mismatch echoes Moravec’s paradox: skills that come effortlessly to humans, such as perception, are among the hardest to engineer into machines, while formal computation, hard for humans, comes cheaply to them.
As symbolicism’s shortcomings became apparent, a new paradigm gained prominence: connectionism. Also referred to as the bionics or physiological school, connectionism draws inspiration from the structure and function of the human brain. Instead of relying on explicit rules, it uses artificial neural networks—layers of interconnected processing units that learn patterns from data through iterative training.
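A minimal sketch makes the contrast with rule-based systems concrete. The toy network below (plain NumPy; the XOR task, layer sizes, learning rate, and step count are all illustrative choices, not taken from the article) learns a pattern that is never stated as a rule, purely by iteratively adjusting connection weights.

```python
import numpy as np

# A toy two-layer network learning XOR by gradient descent: a minimal,
# purely illustrative instance of "interconnected units learning from data".
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer; 8 units is arbitrary
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):                        # iterative training loop
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)      # backpropagate the squared error
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```

Nothing in the trained weights corresponds to a human-readable rule, which foreshadows the interpretability problem discussed below.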
Connectionism experienced a resurgence in the 2000s, driven by breakthroughs in deep learning. Two key enablers made this possible: the exponential increase in computational power, particularly through graphics processing units (GPUs), and the availability of vast datasets generated by the internet and mobile devices. These advances allowed researchers to train neural networks with dozens of layers and billions of parameters, leading to unprecedented performance in tasks such as image and speech recognition.
Cheng points to several landmark achievements. Microsoft Research Asia’s deep residual learning model achieved a 3.57% error rate in image classification, surpassing human accuracy. Baidu’s speech recognition system reached a 3.7% error rate, outperforming a panel of human transcribers. In creative domains, neural networks have demonstrated the ability to separate content and style in artworks, enabling novel forms of digital art. Perhaps most famously, Google DeepMind’s AlphaGo defeated world champion Go players, a feat once thought impossible due to the game’s complexity.
Despite these successes, connectionism is not without its limitations. Cheng emphasizes that deep learning remains a “black box” from a human interpretability standpoint. While the system can produce accurate outputs, the internal processes that lead to those results are often opaque and difficult to explain. This lack of transparency poses challenges for accountability, especially in high-stakes applications such as healthcare or criminal justice.
Another limitation concerns biological plausibility. While artificial neural networks are inspired by the brain, they are only a crude approximation of its complexity, and current models do not replicate the dynamic, adaptive, and energy-efficient behavior of biological neurons. Moreover, human cognition involves more than neural activity alone: it is deeply intertwined with sensory perception, motor control, and emotional states, all of which are difficult to simulate in silico.
Connectionist systems also struggle with generalization. Most deep learning models are trained in a supervised manner, meaning they require large amounts of labeled data to learn specific tasks. As a result, they tend to be highly specialized and lack the flexibility to transfer knowledge across domains. While unsupervised and reinforcement learning methods aim to address this, they are still in early stages and face significant technical hurdles.
Cheng illustrates this with a compelling example: when a cat walks under a bed and only its tail is visible, a human can instantly recognize it as the same animal based on context and prior experience. An AI system, however, would likely fail unless explicitly trained on similar scenarios. This highlights the gap between statistical pattern recognition and genuine understanding.
The third paradigm, behaviorism—also known as evolutionism or the cybernetics school—takes a different approach altogether. Rather than focusing on internal representations or neural structures, behaviorism emphasizes the interaction between an agent and its environment. Intelligence, in this view, emerges from adaptive behavior in response to sensory input.
Rooted in Norbert Wiener’s 1948 foundational work on cybernetics, behaviorism gained traction in the late 20th century with the development of autonomous robots and adaptive control systems. A key figure in this movement was Rodney Brooks, whose six-legged robot demonstrated that complex behaviors could arise from simple “sense-act” loops without the need for centralized planning or symbolic reasoning.
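The “sense-act” idea is simple enough to sketch directly. The reactive controller below is an invented illustration of the layered, model-free style Brooks championed; the sensor fields, behaviors, and priorities are all hypothetical.

```python
from dataclasses import dataclass

# A minimal subsumption-style reactive controller in the spirit of Brooks'
# robots: layered sense-act rules, no world model, no central planner.
# Sensors and actions are invented for illustration.

@dataclass
class Sensors:
    obstacle_ahead: bool
    battery_low: bool

def avoid(s: Sensors):        # highest priority: do not collide
    return "turn_left" if s.obstacle_ahead else None

def recharge(s: Sensors):     # next: seek the charger when low
    return "head_to_charger" if s.battery_low else None

def wander(s: Sensors):       # default behavior, always fires
    return "move_forward"

LAYERS = [avoid, recharge, wander]   # ordered by priority

def act(s: Sensors) -> str:
    """The first layer that fires wins; lower layers are subsumed."""
    for behavior in LAYERS:
        command = behavior(s)
        if command is not None:
            return command

print(act(Sensors(obstacle_ahead=True,  battery_low=True)))   # turn_left
print(act(Sensors(obstacle_ahead=False, battery_low=True)))   # head_to_charger
print(act(Sensors(obstacle_ahead=False, battery_low=False)))  # move_forward
```

Each layer is trivial on its own; the apparent intelligence lies in how the layers interact with a changing environment.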
Behaviorist systems are particularly effective in dynamic, real-world environments where unpredictability is the norm. Applications include self-driving cars, robotic vacuum cleaners, and industrial automation systems that adjust to changing conditions in real time. By prioritizing functionality over internal modeling, these systems can operate efficiently with minimal computational overhead.
Yet, Cheng cautions that behaviorism also has its drawbacks. The primary challenge is the non-linear relationship between perception and action. The same sensory input can lead to different behaviors depending on context, and the same behavior can stem from different perceptual states. This complexity makes it difficult to design systems that can reliably generalize across diverse situations.
Additionally, behaviorist models often lack higher-order reasoning capabilities. While they excel at reactive tasks, they struggle with long-term planning, abstract thinking, and goal-directed problem solving. Like connectionist systems, they are largely black boxes, offering little insight into the decision-making process.
What makes Cheng’s analysis particularly valuable is his emphasis on integration. Rather than treating symbolicism, connectionism, and behaviorism as competing ideologies, he argues for a hybrid approach that leverages the strengths of each. He references emerging theoretical frameworks such as mechanismism and the AORBCO model, which attempt to unify the three paradigms under a common architecture.
Mechanismism, for instance, proposes a universal principle of “information-knowledge-intelligence” transformation, suggesting that different forms of AI can complement each other in a hierarchical process. The AORBCO model integrates agents, objects, and relationships to support symbolic reasoning, neural network dynamics, and environmental interaction within a single framework.
In practice, convergence is already underway. Hybrid systems that combine fuzzy logic with neural networks have shown improved performance in control applications. Brain-computer interfaces, which link biological neurons with silicon-based processors, represent a fusion of behaviorist and connectionist principles. These developments suggest that the future of AI lies not in choosing one paradigm over another, but in synthesizing them into more robust and versatile systems.
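To give a flavor of such hybrids, the sketch below pairs hand-specified fuzzy membership functions with a weighted output stage of the kind that, in a genuine neuro-fuzzy system, would be tuned like network parameters. Every number in it (membership centers, widths, rule weights) is an illustrative assumption.

```python
import numpy as np

# A toy fuzzy controller: Gaussian memberships turn a crisp temperature into
# "cold / warm / hot" degrees, and weighted-average defuzzification maps them
# to a fan speed. In a neuro-fuzzy hybrid, rule_weights would be learned.

def memberships(t: float) -> np.ndarray:
    centers, width = np.array([10.0, 20.0, 30.0]), 6.0  # cold, warm, hot (illustrative)
    return np.exp(-(((t - centers) / width) ** 2))      # degree of membership in each set

rule_weights = np.array([0.1, 0.5, 1.0])  # fan speed each fuzzy rule suggests

def fan_speed(t: float) -> float:
    """Weighted-average defuzzification over the fired rules."""
    mu = memberships(t)
    return float(mu @ rule_weights / mu.sum())

for t in (5, 18, 27, 35):
    print(f"{t} C -> fan speed {fan_speed(t):.2f}")
```

The fuzzy rules keep the controller interpretable while the learnable weights give it the adaptivity of a network, which illustrates the kind of complementarity Cheng describes.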
Crucially, Cheng reminds readers that AI and human intelligence are fundamentally different. While machines can surpass humans in specific tasks—such as calculation, pattern recognition, or game-playing—they lack the holistic, embodied, and socially embedded nature of human cognition. As Pan Yunhe, an academician of the Chinese Academy of Engineering, observes, “Machine intelligence and human natural intelligence are two essentially different forms of intelligence.”
This distinction has profound implications. AI should not be seen as a replacement for human intelligence, but as a tool to augment it. The goal is not to replicate the human mind, but to extend human capabilities. For example, AI-powered vision systems can help people see in low-light conditions or detect microscopic anomalies, but they do not replace the richness of human visual experience.
The article concludes with a sobering reflection on the risks associated with AI. While the technology holds immense promise, it also poses significant economic, political, social, and cultural challenges. Job displacement, income inequality, privacy violations, algorithmic bias, and the concentration of power in the hands of a few tech giants are all real concerns that require proactive governance.
Cheng calls for a balanced approach—one that promotes innovation while ensuring safety, reliability, and controllability. He expresses confidence that humanity has the wisdom to create AI and, equally, the wisdom to manage its consequences. The path forward lies in collaboration: between disciplines, between humans and machines, and between societies.
As AI continues to reshape industries, economies, and daily life, Cheng’s work serves as a vital intellectual compass. By grounding the discussion in rigorous analysis and philosophical depth, he offers a vision of AI that is not only technically sound but ethically responsible. In an age of rapid change, such clarity is more important than ever.
The journey of AI is far from over. From the early dreams of symbolic logic to the neural networks of today and the integrated systems of tomorrow, the quest to understand and replicate intelligence remains one of humanity’s greatest challenges. And as Cheng’s article demonstrates, the answers may not lie in any single approach, but in the thoughtful synthesis of many.
Reference: Chengping Cheng (Wuhan University), Academics, January 2021. DOI: 10.3969/j.issn.1002-1698.2021.01.019