Research on Design Method of Artificial Intelligence Product Based on Product Semantics

AI Product Design Evolves Through Semantic Framework

In an era where artificial intelligence (AI) is no longer a speculative future but a present reality shaping everyday life, the intersection of design theory and intelligent systems has become a critical frontier. A study by Ran Bei of the Shanghai Academy of Fine Arts at Shanghai University redefines how AI-driven products can be designed—not merely as tools, but as intuitive extensions of human cognition and behavior. Published in the journal Design, the research introduces a methodology that fuses product semantics with AI technologies such as big data analytics and deep learning, offering a framework for creating more empathetic, responsive, and context-aware intelligent products.

The paper, titled Research on Design Method of Artificial Intelligence Product Based on Product Semantics, presents a paradigm shift in industrial design. Rather than treating AI as a mere functional upgrade to existing devices, Ran proposes a foundational redesign of the design process itself—one rooted in the symbolic and perceptual dimensions of human-product interaction. This approach moves beyond traditional usability metrics to embrace a deeper understanding of how users interpret, relate to, and emotionally engage with smart technologies.

At the heart of this new methodology lies the concept of product semantics, a design philosophy that emerged in the 1980s to address the symbolic meanings embedded in physical forms. Originally developed to help users intuitively understand the function of objects through visual and tactile cues—such as the shape of a handle indicating how to grip or press—it has now been reimagined for the digital age. In the context of AI, product semantics is no longer about static form-language but dynamic meaning-making, where machines learn to anticipate user needs, adapt behaviors, and communicate intentions through subtle, context-sensitive interactions.

Ran’s work builds on decades of theoretical development in semiotics and human-computer interaction. Drawing from the foundational ideas of Charles Sanders Peirce and Ferdinand de Saussure, the study emphasizes the triadic relationship between perception, meaning, and action. According to this model, users do not simply react to stimuli; they interpret them within a web of personal, cultural, and situational contexts. A successful AI product must therefore not only respond correctly to inputs but also align with the user’s internal cognitive framework—what Ran refers to as the “mental model.”

This alignment is achieved through a dual-layered data strategy that integrates both small data and big data. Small data, gathered through traditional ethnographic methods such as interviews, observations, and focus groups, captures the qualitative nuances of user experience—the unspoken habits, emotional triggers, and subconscious preferences that cannot be quantified easily. Big data, on the other hand, leverages real-time behavioral logs from connected devices, cloud platforms, and sensor networks to reveal large-scale usage patterns, predictive trends, and systemic anomalies.

What sets Ran’s approach apart is the synthesis of these two data streams into a unified user profiling system. While conventional AI models often rely solely on aggregated behavioral data, which risks overlooking individuality and emotional depth, this hybrid method ensures that personal meaning is preserved even as scalability is achieved. By combining qualitative insights with algorithmic precision, designers can create AI systems that are both statistically robust and personally resonant.
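To make the hybrid profiling idea concrete, here is a minimal sketch (our illustration, not the paper's implementation; all field and event names are hypothetical) of fusing qualitative "small data" tags with counts derived from a behavioral log:

```python
# Illustrative sketch of a unified user profile: "small data" arrives as
# qualitative tags from interviews and observation; "big data" arrives as a
# raw event log that we reduce to usage statistics. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    qualitative_tags: set = field(default_factory=set)   # ethnographic insights
    usage_stats: dict = field(default_factory=dict)      # behavioral aggregates

def build_profile(user_id, interview_tags, event_log):
    """Combine ethnographic tags with counts derived from a raw event log."""
    stats = {}
    for event in event_log:                  # big data: scalable, quantitative
        stats[event] = stats.get(event, 0) + 1
    return UserProfile(user_id, set(interview_tags), stats)

profile = build_profile(
    "u42",
    interview_tags=["prefers-dim-lighting", "privacy-sensitive"],
    event_log=["lights_on", "lights_on", "thermostat_up"],
)
```

Keeping the two streams as separate fields of one profile, rather than flattening everything into statistics, is one way to preserve the individual meaning the paper argues aggregated data alone would lose.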

One of the most transformative aspects of the proposed framework is its emphasis on feedforward interaction, a departure from the traditional feedback loop that dominates current AI interfaces. In standard feedback models, the machine responds to user input after an action has occurred—such as adjusting room temperature after detecting a change in ambient conditions. Feedforward, by contrast, anticipates user needs before they are explicitly expressed. This proactive service model requires AI to construct a predictive user model based on historical behavior, contextual awareness, and inferred intentionality.
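The feedback/feedforward distinction can be sketched in a few lines. The toy model below (an illustration under our own assumptions, not the paper's algorithm) learns which action usually follows a given context and acts proactively only when its confidence clears a threshold:

```python
# Toy feedforward model: instead of reacting after input (feedback), it
# predicts the most likely next action for a context from observed history
# and volunteers it when confidence is high enough. Threshold is illustrative.
from collections import Counter, defaultdict

class FeedforwardModel:
    def __init__(self, confidence=0.8):
        self.history = defaultdict(Counter)   # context -> action counts
        self.confidence = confidence

    def observe(self, context, action):
        self.history[context][action] += 1

    def anticipate(self, context):
        """Return a proactive action, or None if the model is unsure."""
        counts = self.history.get(context)
        if not counts:
            return None
        action, n = counts.most_common(1)[0]
        return action if n / sum(counts.values()) >= self.confidence else None

model = FeedforwardModel()
for _ in range(9):
    model.observe(("evening", "home"), "lights_on")
model.observe(("evening", "home"), "lights_off")
# 9/10 = 0.9 >= 0.8, so the system may act before being asked
```

The `None` branch matters as much as the prediction: a feedforward system that acts on weak evidence is exactly the opaque, uncontrollable behavior the paper later warns against.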

To enable this level of foresight, Ran integrates deep learning architectures—specifically, multi-layered neural networks capable of hierarchical abstraction—into the core of the design process. Inspired by the structure of the human brain, these networks process sensory data through successive layers of increasing complexity, extracting features, recognizing patterns, and ultimately generating predictions. For instance, a smart home system might learn not just when a user typically turns on the lights, but also under what emotional states, weather conditions, or social settings such behavior occurs.
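The idea of hierarchical abstraction can be shown with a deliberately tiny forward pass (fixed toy weights, pure Python; a real system would learn the weights by backpropagation):

```python
# Minimal multi-layer forward pass: each hidden layer applies a linear map
# plus a ReLU nonlinearity, so later layers operate on increasingly abstract
# features of the input. Weights here are fixed toy values for illustration.
def relu(v):
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    for weights, biases in layers[:-1]:
        x = relu(dense(x, weights, biases))   # hidden layers extract features
    weights, biases = layers[-1]
    return dense(x, weights, biases)          # output layer makes a prediction

layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # layer 1: low-level features
    ([[1.0, 1.0]], [0.0]),                     # layer 2: combines them
]
out = forward([2.0, 1.0], layers)  # -> [2.5]
```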

A compelling example cited in the study is the Deep Learning Insole, a collaborative project between MIT’s Design Lab and Puma. This wearable device embeds microbial sensors, circuitry, and microcontrollers within the sole of a shoe to monitor biochemical changes in sweat during physical activity. The collected data—such as pH levels and electrical conductivity—are transmitted to a smartphone app, where a personalized algorithm learns the user’s fatigue thresholds over time. Instead of waiting for the user to report exhaustion, the system proactively alerts them when optimal performance is about to decline, effectively preventing injury through preemptive intervention.

This case illustrates how AI product design is evolving from passive tools to cognitive partners. The insole does not merely collect data; it interprets physiological signals within the broader context of athletic performance, lifestyle habits, and long-term health goals. It speaks a language of subtle cues and anticipatory gestures—what Ran describes as a “cyborg-like” integration of biological and digital intelligence.

The implications of this research extend far beyond wearables. In healthcare, AI-powered diagnostic devices could interpret patient symptoms not just against clinical databases but also in relation to personal medical histories and psychosocial factors. In urban planning, smart infrastructure could adjust traffic flows, lighting, and public services based on real-time crowd behavior and environmental conditions. In education, adaptive learning platforms could tailor content delivery to individual cognitive styles and emotional engagement levels.

However, Ran cautions that technological capability must be balanced with ethical responsibility. As AI systems grow more adept at modeling human behavior, they also gain the power to influence it—raising concerns about autonomy, privacy, and manipulation. The paper calls for a design ethic grounded in transparency, consent, and user agency, ensuring that intelligent products serve as enablers rather than controllers of human decision-making.

This ethical dimension is particularly crucial in the context of ubiquitous computing, where AI is embedded invisibly into everyday environments. Unlike smartphones or laptops, which require deliberate interaction, ambient intelligence operates in the background, shaping experiences without explicit commands. Without careful design, such systems risk becoming opaque and uncontrollable. Ran advocates for what she terms “meaningful interaction”—a design principle that prioritizes clarity, interpretability, and mutual understanding between humans and machines.

To achieve this, the study revisits the semiotic triad of signifier, signified, and interpreter. In AI product design, every interface element—from voice prompts to haptic feedback—must function as a clear signifier that accurately conveys the system’s internal state and intended action. Users must be able to trust that what they perceive reflects what the machine intends, and vice versa. Misalignment between these layers leads to confusion, frustration, and loss of control.

Ran’s framework addresses this challenge through a contextual validation mechanism, where AI-generated actions are continuously cross-checked against user expectations and environmental constraints. For example, a voice assistant suggesting a route change during navigation should not only consider traffic data but also assess whether the user is in a hurry, carrying heavy luggage, or accompanied by children. The system’s reasoning process should be explainable, allowing users to understand why a particular recommendation was made.
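A contextual check of this kind can be sketched as a rule function that returns both a verdict and a human-readable reason, which is what keeps the recommendation explainable. The rules and context keys below are our hypothetical examples, built on the navigation scenario above:

```python
# Sketch of contextual validation: a proposed AI action is cross-checked
# against user-context constraints, and every verdict carries a reason so
# the user can ask "why?". Rules and context keys are hypothetical.
def validate_action(action, context):
    """Return (approved, reason) for a proposed AI action."""
    if action == "suggest_longer_scenic_route":
        if context.get("in_a_hurry"):
            return False, "user is in a hurry; scenic detour rejected"
        if context.get("carrying_heavy_luggage"):
            return False, "user carrying heavy luggage; extra walking rejected"
        if context.get("with_children"):
            return False, "children present; longer route rejected"
        return True, "no constraint violated; scenic route acceptable"
    return True, "no rule matched; action allowed by default"

ok, reason = validate_action("suggest_longer_scenic_route", {"in_a_hurry": True})
```

In a production system the rules would themselves be learned or configured, but the shape of the interface (verdict plus reason) is the explainability point the paper makes.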

Another key innovation is the integration of temporal dynamics into semantic modeling. Traditional product semantics often treats meaning as static—once learned, always applicable. But in the age of AI, meaning evolves over time as users and systems co-adapt. A gesture that signifies approval today may be reinterpreted tomorrow due to changing habits or cultural shifts. Therefore, the design methodology incorporates continuous learning loops, enabling products to update their semantic mappings in response to lived experience.
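One simple way to realize such a continuous learning loop, sketched here under our own assumptions (the decay rate is an illustrative free parameter, not a value from the paper), is a semantic mapping whose evidence decays over time so that the dominant meaning of a signal can drift:

```python
# Sketch of a drifting semantic mapping: each observed (gesture, meaning)
# pair adds weight, while older evidence decays exponentially, so the
# interpretation of a gesture can shift as habits change over time.
class SemanticMap:
    def __init__(self, decay=0.9):
        self.weights = {}      # gesture -> {meaning: weight}
        self.decay = decay

    def observe(self, gesture, meaning):
        table = self.weights.setdefault(gesture, {})
        for m in table:                       # older evidence fades
            table[m] *= self.decay
        table[meaning] = table.get(meaning, 0.0) + 1.0

    def interpret(self, gesture):
        table = self.weights.get(gesture, {})
        return max(table, key=table.get) if table else None

m = SemanticMap()
for _ in range(3):
    m.observe("thumbs_up", "approve")
for _ in range(10):
    m.observe("thumbs_up", "acknowledge")   # the habit shifts; meaning drifts
```

After the later observations outweigh the decayed earlier ones, `interpret("thumbs_up")` follows the new usage, which is the static-versus-evolving contrast the paragraph draws.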

This temporal sensitivity is especially important in long-term human-AI relationships, such as companion robots for the elderly or mental health chatbots. These systems must not only recognize emotional states but also track affective trajectories over weeks or months. They must distinguish between transient moods and enduring conditions, adapting their responses accordingly. The paper suggests that such capabilities require not just technical sophistication but also a deep appreciation for the fluidity of human meaning-making.
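The transient-versus-enduring distinction can be illustrated with two moving averages over a daily mood score: a fast one that reacts to recent days and a slow one that reflects the longer baseline. The rates and cutoff below are our illustrative assumptions, not clinical values:

```python
# Sketch of tracking an affective trajectory: compare a fast and a slow
# exponential moving average of a 0-1 mood score. Low fast + normal slow
# reads as a transient mood; both low suggests an enduring condition.
def ewma(values, alpha):
    avg = values[0]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
    return avg

def classify(scores, cutoff=0.4):
    fast = ewma(scores, alpha=0.5)    # reacts to recent days
    slow = ewma(scores, alpha=0.05)   # reflects the months-long baseline
    if fast < cutoff and slow < cutoff:
        return "enduring_low"
    if fast < cutoff:
        return "transient_low"
    return "stable"

# A healthy two-month baseline with one bad week: transient, not enduring
scores = [0.8] * 60 + [0.1] * 5
```

A companion system built this way would respond gently to a `transient_low` but escalate (or suggest help) only on `enduring_low`, matching the paper's call to adapt responses to the trajectory rather than the moment.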

The research also highlights the role of designers as mediators between technological possibility and human need. As AI systems grow more autonomous, the designer’s role shifts from crafting fixed forms to orchestrating adaptive processes. Instead of defining a single user journey, they must anticipate multiple pathways, contingencies, and edge cases. This demands a new kind of design literacy—one that combines technical fluency with psychological insight and philosophical reflection.

Ran envisions a future where AI product design becomes a collaborative practice involving not only engineers and designers but also sociologists, ethicists, and end-users themselves. Through participatory design methods and open data ecosystems, stakeholders can co-create intelligent systems that reflect diverse values and lived realities. This democratization of design aligns with the broader vision of “everyone designs, design for everyone,” a principle echoed throughout the paper.

Moreover, the study underscores the importance of cultural specificity in AI design. Meaning is not universal; it is shaped by language, tradition, and social norms. An AI assistant trained primarily on Western datasets may misinterpret gestures, tones, or priorities in non-Western contexts. To avoid cultural bias, the framework advocates for localized training data, inclusive design teams, and culturally sensitive evaluation metrics.

The paper also explores the philosophical underpinnings of AI design, drawing from posthumanist thought and cybernetics. As machines begin to simulate aspects of human cognition, the boundary between human and machine blurs. Ran refrains from claiming that AI possesses consciousness, but she acknowledges that intelligent products can exhibit behaviors that feel intentional, responsive, and even empathetic. This raises profound questions about agency, identity, and the nature of interaction.

In response, the study proposes a relational ontology—the idea that meaning emerges not from isolated entities but from the dynamic interplay between user, product, and environment. An AI thermostat is not just a device regulating temperature; it is a participant in a domestic ecosystem involving comfort, energy efficiency, family routines, and seasonal rhythms. Designing for such complexity requires systems thinking, interdisciplinary collaboration, and a willingness to embrace ambiguity.

Ultimately, Ran’s work represents a maturation of AI product design—from a focus on functionality and efficiency to one centered on meaning, intentionality, and shared understanding. It challenges the assumption that smarter technology automatically leads to better experiences, arguing instead that intelligence must be coupled with semantic clarity and emotional resonance.

As AI continues to permeate every facet of life, the need for thoughtful, human-centered design will only grow. This research offers a rigorous, principled approach to building intelligent products that do not merely compute but comprehend; that do not just respond but relate. It is a call to move beyond automation toward augmentation, beyond optimization toward empathy.

The implications are vast. If widely adopted, this semantic-driven methodology could transform how we conceive of everything from consumer electronics to public infrastructure. It suggests that the future of AI is not determined solely by algorithms and data, but also by the stories we tell, the meanings we assign, and the relationships we cultivate with the machines around us.

Ran Bei’s contribution stands as a landmark in the evolution of design theory, bridging the gap between abstract semiotics and practical engineering. Her framework provides not only a set of tools but also a mindset—one that sees technology not as a replacement for human intelligence, but as a mirror and amplifier of it. In doing so, she reclaims design as a deeply human practice, even in the age of artificial minds.

Ran Bei, Shanghai Academy of Fine Arts, Shanghai University. “Research on Design Method of Artificial Intelligence Product Based on Product Semantics.” Design, 2021. DOI: 10.11924/j.issn.1003-0069.2021.06.032