Global AI Race Heats Up: Knowledge, Brain-Inspired Systems, and Explainability Take Center Stage

As the world transitions deeper into the digital era, artificial intelligence (AI) has emerged as a pivotal force shaping economic competitiveness, national security, and societal transformation. Over the past decade, AI has evolved from a niche academic pursuit into a cornerstone of national innovation strategies. Yet, despite remarkable progress in deep learning and data-driven systems, a growing number of experts argue that the current paradigm is approaching its limits. The next wave of AI advancement will not come from scaling up existing models but from rethinking the very foundations of machine intelligence. A recent comprehensive analysis published in Global Science, Technology and Economy Outlook sheds light on this critical juncture, identifying three converging technological frontiers—knowledge-driven AI, brain-inspired intelligence, and explainable AI—as the focal points of global research and strategic investment.

The study, conducted by Li Ruochen from the Institute of Intelligent Video Audio Technology in Shenzhen and Li Mengwei from the Institute of Scientific and Technical Information of China in Beijing, offers a rare comparative insight into how leading nations are positioning themselves for the next phase of AI development. By examining national strategies from the United States, China, Japan, and South Korea, the authors reveal both divergent priorities and striking areas of consensus. While each country leverages its unique industrial strengths—be it U.S. leadership in foundational algorithms, Japan’s dominance in robotics, South Korea’s semiconductor prowess, or China’s vast data ecosystems—there is a growing alignment around the need to transcend the limitations of purely data-driven models.

This convergence is not accidental. The first two decades of the 21st century witnessed an AI boom fueled by the synergy of big data, powerful computing hardware, and breakthroughs in deep neural networks. The 2012 victory of AlexNet in the ImageNet competition marked a turning point, demonstrating that convolutional neural networks could outperform traditional machine learning methods in visual recognition tasks. This success triggered an avalanche of innovation, leading to rapid advancements in speech recognition, natural language processing, and autonomous systems. However, as Li and Li point out, these systems, while impressive in narrow domains, suffer from fundamental shortcomings: they lack common sense, are brittle in unfamiliar environments, and operate as black boxes with little transparency.

These limitations have become increasingly problematic as AI is deployed in high-stakes domains such as healthcare, finance, defense, and transportation. A medical diagnosis system that cannot explain its reasoning, or a self-driving car that fails in edge cases, undermines trust and raises serious ethical and safety concerns. Moreover, the reliance on massive labeled datasets makes AI development resource-intensive and environmentally unsustainable. These challenges have prompted governments and research institutions worldwide to pivot toward more robust, adaptable, and trustworthy forms of intelligence.

The United States, long a leader in AI research, has taken a proactive stance through initiatives led by the Defense Advanced Research Projects Agency (DARPA). In 2018, DARPA launched its “AI Next” campaign, a multi-year effort aimed at advancing what it calls the “third wave” of AI. This new generation of systems is designed to move beyond the statistical pattern recognition of second-wave AI and incorporate contextual understanding, causal reasoning, and lifelong learning. Key projects under this umbrella include “Machine Common Sense,” which seeks to imbue machines with basic knowledge about the physical and social world; “Explainable AI” (XAI), focused on making AI decisions interpretable to human users; and “Lifelong Learning Machines,” which aim to develop systems that can continuously adapt to new tasks without forgetting previous knowledge.

What distinguishes the U.S. approach is its emphasis on hybrid architectures that integrate symbolic reasoning with neural networks. This reflects a broader intellectual shift away from the historical divide between symbolic AI and connectionist models. Symbolic AI, dominant in the 1960s and 1970s, relied on explicit rules and logic to represent knowledge and perform reasoning. Though powerful for structured problems, it struggled with uncertainty and real-world complexity. Connectionist AI, which gained prominence with the rise of neural networks, excels at learning from data but lacks transparency and generalization capabilities. The emerging consensus is that the future lies in combining the strengths of both paradigms—a vision echoed in DARPA’s “knowledge-directed artificial intelligence reasoning” and “automated knowledge extraction” programs.

South Korea, meanwhile, has adopted a more focused strategy centered on strengthening its technological infrastructure. Recognizing the strategic importance of semiconductors in AI development, the Korean government announced in its 2019 National AI Strategy a plan to invest 1 trillion won (approximately $850 million) over ten years in AI chip research and development. This investment targets next-generation processors optimized for AI workloads, including neuromorphic chips that mimic the brain’s architecture. Korea’s approach also emphasizes “explainable AI” and “few-shot learning,” aiming to reduce dependency on large datasets and enhance the transparency of AI decision-making. By aligning its AI strategy with its existing strengths in electronics and manufacturing, Korea is positioning itself as a key player in the global AI supply chain.

Japan’s AI ambitions are deeply intertwined with its leadership in robotics. Building on its legacy in industrial automation and humanoid robots, Japan’s 2015 “New Robot Strategy” and the 2019 “AI Strategy” prioritize the integration of AI with physical systems. The government has called for the development of “data- and knowledge-driven hybrid AI” and “brain-inspired intelligence,” reflecting a desire to create machines that can interact naturally with humans and adapt to dynamic environments. Japan’s focus on “modular AI” and “robot operating systems” suggests a long-term vision of interoperable, scalable robotic platforms that can be deployed across industries—from elder care to disaster response. With sustained increases in R&D funding, including a 1.4% boost in science and technology budgets in 2020, Japan is making a concerted effort to reclaim its position at the forefront of intelligent systems.

China, having declared AI a national priority in its 2017 “New Generation Artificial Intelligence Development Plan,” has pursued a comprehensive and ambitious agenda. The plan outlines a roadmap to make China the world leader in AI by 2030, supported by massive investments in research, talent development, and infrastructure. Unlike other nations that focus on specific niches, China’s strategy spans the entire AI ecosystem—from foundational theories like quantum intelligence computing and brain-inspired models to applied technologies such as autonomous vehicles, smart cities, and natural language processing. The country has made significant strides in building large-scale AI datasets, deploying facial recognition systems, and fostering a vibrant startup ecosystem. However, as Li and Li note, China still faces challenges in original innovation and core algorithm development, areas where the U.S. maintains a lead.

Despite their differing starting points and strategic emphases, all four nations converge on three critical technological directions. The first is knowledge-driven AI. As deep learning reaches its scalability limits, researchers are turning back to symbolic methods to inject machines with structured knowledge and reasoning capabilities. The evolution of knowledge graphs—from early semantic networks to modern systems like Google’s Knowledge Graph—demonstrates the enduring value of organizing information in machine-readable formats. By integrating knowledge bases with neural networks, AI systems can perform causal inference, answer complex queries, and exhibit common sense reasoning. This hybrid approach is seen as essential for moving beyond narrow AI toward more general forms of intelligence.
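The core idea behind knowledge-driven AI can be illustrated with a toy example. The sketch below, with purely hypothetical entities and relations (not drawn from any real knowledge graph), shows how a small store of structured triples supports a simple inference that pattern matching alone cannot: a property asserted about a superclass is inherited by its subclasses through transitive "is_a" links.

```python
# Minimal sketch of knowledge-driven inference: a tiny triple store
# with transitive "is_a" reasoning. All entity and relation names
# here are illustrative placeholders.

facts = {
    ("penguin", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("penguin", "can", "swim"),
    ("bird", "has", "feathers"),
}

def ancestors(entity):
    """All classes reachable from entity via transitive is_a links."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for (s, r, o) in facts if r == "is_a" and s in frontier}
        frontier = nxt - found
        found |= frontier
    return found

def holds(entity, relation, obj):
    """A fact holds if asserted directly or inherited from a superclass."""
    subjects = {entity} | ancestors(entity)
    return any((s, relation, obj) in facts for s in subjects)

print(holds("penguin", "has", "feathers"))  # True: inherited from "bird"
print(ancestors("penguin"))                 # {'bird', 'animal'}
```

Production knowledge graphs add typed schemas, provenance, and learned embeddings on top of this triple structure, but the inheritance step above is the kind of explicit, inspectable reasoning that purely statistical models lack.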

The second frontier is brain-inspired intelligence, also known as neuromorphic computing. Traditional AI models, while inspired by the brain, do not accurately replicate its biological mechanisms. Spiking neural networks (SNNs), considered the third generation of neural networks, offer a more biologically plausible alternative. Unlike conventional neural networks that process continuous values, SNNs communicate through discrete electrical pulses, much like real neurons. This allows for event-driven computation, which is more energy-efficient and better suited for real-time sensory processing. Hardware developments such as IBM’s TrueNorth and Intel’s Loihi chips, capable of simulating millions of artificial neurons, are paving the way for low-power, adaptive AI systems that can operate in resource-constrained environments. Although still far from replicating the full complexity of the human brain—estimated to contain over 80 billion neurons—these advances represent a paradigm shift toward more autonomous and resilient machines.

The third and perhaps most urgent direction is explainable AI (XAI). As AI systems are entrusted with increasingly consequential decisions, the demand for transparency and accountability has grown. A black-box algorithm that cannot justify its outputs is unacceptable in domains like criminal justice, loan approvals, or medical diagnostics. In response, researchers are developing methods to make AI decisions interpretable, either by designing inherently transparent models or by creating post-hoc explanation tools. The U.S. National Institute of Standards and Technology (NIST) has taken a leading role in this effort, publishing in 2020 a draft framework outlining four principles for explainable AI: explanation, meaningfulness, explanation accuracy, and knowledge limits. This framework defines five types of explanations tailored to different stakeholders—from end-users to regulators to system developers—providing a common language for evaluating AI transparency.

One promising approach comes from Professor Song-Chun Zhu at the University of California, Los Angeles, who advocates for a “human-centric” AI model. His research integrates symbolic planning with deep learning to generate explanations that mirror human cognitive processes. In one experiment, a robot learning to open a childproof medicine bottle used a symbolic planner to articulate its intentions and a tactile predictor to justify its actions based on sensory feedback. This dual explanation mechanism not only enhances trust but also facilitates debugging and improvement of AI systems. While standardized benchmarks for explainability are still evolving, such efforts represent a crucial step toward building AI that is not only intelligent but also understandable and trustworthy.

The implications of these technological shifts extend beyond academia and industry. They raise profound questions about the future of work, privacy, and governance. As AI becomes more capable and pervasive, societies must grapple with how to ensure equitable access, prevent algorithmic bias, and maintain human oversight. The authors emphasize that technological advancement must be accompanied by robust ethical frameworks and international cooperation. China, in particular, is urged to balance innovation with regulation, fostering an environment where AI can thrive while safeguarding public interest.

Looking ahead, the next decade of AI will likely be defined not by incremental improvements in existing models but by breakthroughs in hybrid architectures, neuromorphic hardware, and explainable systems. The race is no longer just about who can train the largest neural network but who can build the most intelligent, adaptive, and trustworthy machines. As Li Ruochen and Li Mengwei conclude, the path forward requires not only technical ingenuity but also strategic foresight, cross-sector collaboration, and a commitment to responsible innovation. The nations that succeed in this endeavor will not only lead in AI but will shape the very fabric of the future.

Li Ruochen, Institute of Intelligent Video Audio Technology, Shenzhen; Li Mengwei, Institute of Scientific and Technical Information of China, Beijing. Global Science, Technology and Economy Outlook, DOI: 10.3772/j.issn.1009-8623.2021.06.003