Oriental Thinking Paves New Path for AI Theory

In the rapidly evolving landscape of artificial intelligence, a groundbreaking theoretical shift is emerging from an unexpected source—not Silicon Valley or Cambridge, but the academic corridors of Liaoning Technical University in China. A recent study published in the Journal of Guangdong University of Technology proposes a radical rethinking of the foundational principles guiding AI development, challenging the long-standing dominance of Western mechanistic paradigms. Led by Cui Tie-jun from the College of Safety Science and Engineering and Li Sha-sha from the School of Business Administration, the research introduces a novel framework rooted in traditional Chinese philosophy, advocating for a “factor-driven” approach over the prevailing “data-driven” model.

This work does not merely offer an alternative methodology; it signals a potential inflection point in the global AI race, where cultural epistemologies may determine technological supremacy. As nations invest billions in pursuit of AI leadership, the paper argues that the future of intelligent systems lies not in scaling data volume, but in understanding the interconnectedness of causal factors—a concept deeply embedded in Eastern thought.

For decades, artificial intelligence has been shaped by what scholars refer to as the “mechanical reductionist” paradigm—a Western scientific tradition that breaks complex systems into smaller, analyzable components. This approach, historically effective in engineering and classical physics, underpins major AI milestones such as expert systems, neural networks, and symbolic AI. However, as systems grow more complex and interdependent, the limitations of this reductionist mindset are becoming increasingly evident.

Cui and Li’s paper critiques this dominant framework, highlighting its inability to account for emergent behaviors arising from interactions between subsystems—interactions that often fall outside predefined functional boundaries. They cite the work of Nancy Leveson, an MIT professor and member of the U.S. National Academy of Engineering, who observed that real-world system failures frequently stem from unanticipated energy, material, and information exchanges between components—phenomena invisible to traditional modular analysis.

The authors argue that this fragmented approach has led to a crisis in modern science: a patchwork of solutions that fix isolated problems while generating new ones. Software updates, product recalls, and cascading infrastructure failures exemplify this growing burden. In the context of AI, the consequence is systems that excel at pattern recognition but lack genuine understanding—a limitation that becomes critical in high-stakes domains like autonomous driving, medical diagnosis, and industrial safety.

Against this backdrop, the researchers propose a paradigm shift: artificial intelligence should not be data-driven, but factor-driven. The distinction is both subtle and profound. Data-driven AI treats information as raw input, seeking correlations within massive datasets. It assumes that with enough data, patterns will emerge, and predictive models can be trained. This approach powers today’s deep learning systems, from image recognition to language models. Yet, as the authors point out, it suffers from fundamental flaws—inefficiency in storage and computation, vulnerability to missing logical cases, and an inability to generalize beyond observed data.

Factor-driven AI, by contrast, begins with the identification of relevant factors—conceptual variables that influence system behavior. These factors are not numerical values, but qualitative dimensions such as context, environment, intention, and relationship. The goal is not to process more data, but to build a comprehensive conceptual network that mirrors the structure of reality. This network allows for reasoning, inference, and adaptation based on logical relationships rather than statistical correlations.
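To make the contrast concrete, consider a minimal sketch of what a factor-driven representation might look like. This is the editor's illustration, not the authors' formalism: concepts are described by qualitative factor values, and identification proceeds by matching relationships rather than fitting statistical patterns. All names here are hypothetical.

```python
# Illustrative sketch of a factor-driven concept store (not the paper's method).
# Each concept is a set of qualitative factor values, not a numeric feature vector.

concepts = {
    "cat":  {"kingdom": "animal", "covering": "fur", "vocalization": "meow"},
    "dog":  {"kingdom": "animal", "covering": "fur", "vocalization": "bark"},
    "fern": {"kingdom": "plant",  "covering": "fronds"},
}

def identify(observed):
    """Return the concept whose factor values best match the observed factors."""
    best, best_score = None, 0
    for name, factors in concepts.items():
        score = sum(1 for k, v in observed.items() if factors.get(k) == v)
        if score > best_score:
            best, best_score = name, score
    return best

print(identify({"covering": "fur", "vocalization": "meow"}))  # cat
```

The point of the sketch is that the system's knowledge lives in the explicit factor structure, so a new observation is interpreted by reasoning over that structure rather than by interpolating within a trained statistical model.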

The idea is not entirely new. Scholars like Zhong Yixin have long advocated for an “information ecology” methodology, emphasizing the dynamic interplay between information, cognition, and environment. Wang Peizhuang’s “factor space theory” provides a mathematical framework for representing concepts and their relationships. He Huacan’s “generalized logic,” Cai Wen’s “extenics,” Feng Jiali’s “attribute theory,” and Zhao Keqin’s “set pair analysis” are all examples of Chinese-originated theories that align with this holistic worldview.
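The core construction of factor space theory can be summarized roughly as follows; this is the editor's paraphrase of the standard definition, not a quotation from Wang Peizhuang's work. A factor is modeled as a mapping that assigns to each object in a universe a state in a qualitative state space:

```latex
f : U \to X(f),
```

and a family of such state spaces indexed by a set of factors $F$,

```latex
\{\, X(f) \mid f \in F \,\},
```

is called a factor space. A concept is then represented by the region of $U$ that its factor values delimit, which is what allows qualitative relationships between concepts to be treated mathematically.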

What Cui and Li contribute is a synthesis of these ideas into a coherent critique of mainstream AI and a compelling vision for an alternative path. They argue that the human mind does not operate on data alone. When we perceive the world, we first identify meaningful factors—objects, intentions, relationships—then assign qualitative attributes, and only later, if necessary, quantify them. Our intelligence arises from the ability to form conceptual networks, to make analogies, and to reason about unseen scenarios. True artificial intelligence, they assert, must replicate this process.

The implications are far-reaching. A factor-driven AI would not require petabytes of training data to recognize a cat. Instead, it would understand the concept of “cat” through its relationships—its biological classification, behavioral patterns, ecological role, and cultural significance. It could then recognize a cat in any context, even if never explicitly trained on that image. This kind of generalization is precisely what current AI lacks.
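That kind of relational generalization can be sketched in a few lines. The structure below is hypothetical and purely illustrative: properties are inherited along "is-a" links, so the system can answer questions about a cat that it was never directly told.

```python
# Illustrative concept graph: properties propagate along "is-a" relations.
is_a = {"cat": "mammal", "mammal": "animal"}
properties = {"mammal": {"warm_blooded"}, "animal": {"alive"}}

def inferred_properties(concept):
    """Collect the properties of a concept and all of its ancestors."""
    props = set()
    while concept is not None:
        props |= properties.get(concept, set())
        concept = is_a.get(concept)
    return props

print(sorted(inferred_properties("cat")))  # ['alive', 'warm_blooded']
```

Nothing about "cat" being warm-blooded or alive is stored on the cat node itself; both facts are derived from its position in the conceptual network, which is the kind of inference the authors argue data-driven systems lack.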

Moreover, such a system would be inherently more interpretable. Because its decisions are based on identifiable factors and logical relationships, its reasoning can be traced and validated. This stands in stark contrast to the “black box” nature of deep neural networks, where even their creators cannot fully explain how outputs are generated—a major barrier to trust and adoption in critical applications.

The paper positions this shift not just as a technical improvement, but as a civilizational opportunity. The authors note that while Western science excelled during the Industrial Revolution, driven by a need to control and optimize physical systems, the Information and Intelligence Revolutions demand a different kind of thinking—one that embraces complexity, interdependence, and emergence.

China, they argue, possesses a unique advantage in this new era. Its philosophical tradition, rooted in Daoist and Confucian thought, has long emphasized the interconnectedness of all things—the idea that “the Dao gives birth to one, one gives birth to two, two gives birth to three, and three gives birth to all things.” This worldview, they suggest, is better suited to understanding complex adaptive systems than the reductionist models of the West.

This is not to say that China has ignored Western science. On the contrary, its rapid technological development has been built on the adoption of mechanistic methodologies. But the authors warn that continuing down this path will only perpetuate dependency and vulnerability, especially as Western nations tighten restrictions on technology transfer.

Instead, they call for the development of an original Chinese AI theory—one that integrates traditional wisdom with modern scientific rigor. Such a theory would not reject data or computation, but subordinate them to a higher-level conceptual framework. It would prioritize understanding over prediction, meaning over correlation, and adaptability over optimization.

The transition from data-driven to factor-driven AI is not without challenges. The authors acknowledge that current technological and theoretical limitations make a full implementation difficult. Identifying all relevant factors in a complex system remains a daunting task. Formalizing qualitative relationships into computable models requires new mathematical tools. And training researchers to think in this holistic way demands a fundamental shift in education.

Yet, the potential rewards are immense. In safety science—a field central to Cui’s research—factor-driven models could predict system failures not by analyzing past incidents, but by understanding the dynamic interactions between components, environment, and human behavior. In business management—Li’s domain—such models could optimize organizational performance by mapping the invisible factors that influence decision-making, morale, and innovation.

Beyond specific applications, the broader significance lies in the reassertion of epistemological diversity in science. For too long, the global scientific community has equated “rationality” with Western logic, dismissing alternative ways of knowing as unscientific. This paper challenges that assumption, demonstrating that non-Western philosophies can offer not just cultural insights, but rigorous theoretical frameworks capable of solving modern problems.

The response to this work within the international AI community has been cautious but intrigued. Some Western scholars acknowledge the validity of the critique of reductionism, particularly in complex systems engineering. Others remain skeptical about the feasibility of operationalizing such abstract concepts. But few can deny that the questions raised are urgent and profound.

As AI systems become more integrated into society, the need for robust, reliable, and understandable intelligence grows. The current trajectory—scaling ever-larger models on ever-larger datasets—may be hitting diminishing returns. Energy consumption, environmental impact, and ethical concerns are mounting. A paradigm shift may be not merely desirable, but necessary.

Cui and Li’s vision offers a compelling alternative: an AI that thinks more like a wise elder than a super-fast calculator. One that sees patterns not in pixels, but in principles. One that understands context, consequence, and connection.

This does not mean abandoning data. Data remains essential—but as evidence for factors, not as the foundation of intelligence. The future, they suggest, belongs to systems that can ask not just “what is the pattern?” but “why is this happening?” and “what does it mean?”

The paper concludes with a call to action: the Intelligence Revolution should not be a repetition of the Industrial Revolution, dominated by a single methodological paradigm. Instead, it should be a pluralistic endeavor, drawing on the full spectrum of human wisdom. In this new era, the countries and institutions that embrace diverse ways of thinking—especially those that honor the interconnectedness of all things—will lead the way.

While the road ahead is uncertain, one thing is clear: the conversation about AI’s future is no longer confined to algorithms and hardware. It has expanded into philosophy, culture, and the very nature of understanding. And in this broader dialogue, the voices from the East are no longer whispers—they are becoming a chorus.

The implications extend beyond technology. If factor-driven AI proves viable, it could reshape how we approach global challenges—from climate change to public health—by emphasizing systemic thinking over siloed solutions. It could foster a new kind of scientific humility, recognizing that some truths cannot be reduced to equations, but must be understood through relationships.

For researchers, the message is clear: the next breakthrough in AI may not come from a faster GPU or a larger dataset, but from a deeper philosophy. It may come not from optimizing parameters, but from redefining the problem itself.

In a world increasingly defined by complexity and uncertainty, the ability to see the whole—not just the parts—may be the most valuable intelligence of all.

Cui Tie-jun, Li Sha-sha. Oriental Thinking and Factor-Driven AI Theory. Journal of Guangdong University of Technology. doi: 10.12052/gdutxb.200123