AI in Elder Care Sparks Ethical Debate as Home-Based Solutions Rise
As global populations age at an unprecedented rate, countries around the world are grappling with the growing demand for elderly care. In China, where over 264 million people are aged 60 and above—accounting for 18.7 percent of the national population—home-based elder care has become a central pillar of social policy. With more than 90 percent of seniors preferring to age in place, traditional models of family-centered caregiving are being stretched thin by workforce shortages, rising health complexities, and geographic disparities in service access. In response, artificial intelligence (AI) is emerging as a transformative force in home care, promising enhanced monitoring, personalized health management, and improved quality of life. Yet, as AI systems move into private homes, they bring with them a host of ethical dilemmas that challenge the very values underpinning elder care.
Recent research published in Chinese Medical Ethics by Zhao Nan, Liu Shuangling, and Sun Xiangna from Heilongjiang University of Chinese Medicine highlights the double-edged nature of AI in home-based elder care. While the technology offers significant advantages in health monitoring, daily assistance, and rehabilitation support, it also raises critical concerns about autonomy, emotional authenticity, privacy, and equity. As governments and tech companies race to deploy smart sensors, wearable devices, and companion robots, the study calls for a more deliberate and ethically grounded approach to ensure that innovation serves rather than undermines human dignity.
The integration of AI into home care is not merely a technical upgrade but a fundamental reconfiguration of how society supports its aging members. At its core, AI-enabled home care relies on interconnected systems: sensors track movement and vital signs, wearable devices collect biometric data, voice assistants manage schedules, and robotic companions provide interaction. These components are linked via 5G networks and cloud platforms, where machine learning algorithms analyze patterns and trigger alerts or interventions. For example, a smart home system might detect a sudden fall, automatically notify emergency services, and guide a family caregiver through first aid steps—all without human oversight.
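The fall-detection scenario above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual system: the device name, thresholds, and alert recipients are hypothetical, and a deployed system would use a trained model rather than fixed cut-offs.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    peak_accel_g: float   # peak acceleration reported by the sensor, in g
    still_seconds: float  # seconds without movement after the peak

# Hypothetical thresholds chosen for illustration only.
FALL_ACCEL_G = 2.5
FALL_STILL_S = 10.0

def detect_fall(r: SensorReading) -> bool:
    """Flag a likely fall: a sharp impact followed by prolonged stillness."""
    return r.peak_accel_g >= FALL_ACCEL_G and r.still_seconds >= FALL_STILL_S

def route_alert(r: SensorReading) -> list:
    """When a fall is suspected, notify emergency services and a caregiver."""
    return ["emergency_services", "family_caregiver"] if detect_fall(r) else []

print(route_alert(SensorReading("bedroom-01", 3.1, 22.0)))
```

The point of the sketch is the pipeline shape: raw readings are reduced to a yes/no event, and the event, not the human, triggers the notification chain.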
In Shanghai, Nanjing, and Qingdao, pilot programs such as “home-based elderly care beds” have already demonstrated the potential of this model. By installing intelligent monitoring systems in private residences, these initiatives extend professional care into the home environment, particularly benefiting individuals with dementia, mobility impairments, or chronic conditions. Devices like the Weila 3.0 service robot—developed by Shanghai Freeway Intelligent Robotics—act as digital caregivers, offering reminders for medication, answering questions, detecting unusual behavior, and even providing companionship through conversational AI.
Similarly, Japan’s Panasonic has introduced the Resyone, a hybrid device that transforms from an electric wheelchair into a hospital-style bed, enabling seamless transitions for users with limited mobility. Another notable innovation is Paro, a therapeutic robot designed to resemble a baby harp seal. Equipped with tactile sensors, Paro responds to touch, sound, and light, mimicking the behavior of a living pet. Clinical studies have shown that interactions with Paro can reduce anxiety, improve mood, and decrease the need for psychotropic medications among dementia patients. These technologies represent a shift from reactive to proactive care, where risks are anticipated and managed before they escalate.
Beyond emotional support, AI is proving instrumental in medical management. Wearable glucose monitors, for instance, allow real-time tracking of blood sugar levels in diabetic patients. When integrated with AI-driven analytics, these devices can predict hypoglycemic episodes and recommend dietary adjustments or insulin dosage changes. Cloud-based health platforms aggregate data from multiple sources—sleep patterns, heart rate variability, physical activity—and generate personalized wellness plans. In some cases, remote clinicians receive automated alerts when anomalies are detected, enabling timely intervention without requiring in-person visits.
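The predictive step described above can be illustrated with a toy model. Real continuous glucose monitors use far more sophisticated analytics; this sketch simply extrapolates a linear trend from recent readings, with the interval and the 70 mg/dL threshold as assumed values.

```python
def predict_glucose(readings, steps_ahead=6):
    """Extrapolate glucose (mg/dL) linearly from evenly spaced readings.

    Readings are assumed to arrive at 5-minute intervals, so the default
    steps_ahead=6 looks roughly 30 minutes into the future.
    """
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    # Ordinary least-squares slope over the recent window.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den if den else 0.0
    return mean_y + slope * ((n - 1 + steps_ahead) - mean_x)

def hypoglycemia_warning(readings, threshold=70):
    """Warn if the projected level falls below a clinical threshold."""
    return predict_glucose(readings) < threshold

# A steadily falling series triggers a warning well before the level is reached.
print(hypoglycemia_warning([110, 100, 90, 80]))  # → True
```

The value of prediction here is lead time: the alert fires while the reading is still in a safe range, which is what enables intervention before an episode rather than after.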
Rehabilitation is another area where AI can outperform traditional methods. Toyota’s Welwalk WW-1000, a robotic leg brace used in gait training, adapts to a patient’s progress by adjusting resistance and providing real-time feedback. Based on motor learning theory, the device helps stroke survivors regain mobility faster than conventional physical therapy alone. Such advancements suggest that AI is not just assisting caregivers but, in some domains, surpassing human capabilities in precision, consistency, and scalability.
However, the promise of technological empowerment is shadowed by profound ethical tensions. The research team identifies four major areas of concern: autonomy, emotional authenticity, privacy, and distributive justice. Each reflects a deeper conflict between efficiency and humanity, between data-driven optimization and personal dignity.
The first dilemma centers on autonomy. While AI systems are designed to enhance safety and independence, their very design can inadvertently erode the decision-making power of older adults. For example, a robot programmed to administer medication at fixed intervals may override a senior’s choice to skip a dose due to nausea or loss of appetite. Similarly, AI-powered dietary assistants might prohibit the consumption of certain foods—like wasabi in sushi—based on pre-programmed health rules, disregarding personal preferences or cultural practices. Over time, such rigid enforcement can lead to feelings of infantilization, helplessness, and resentment.
Zhao Nan and colleagues argue that AI should function as a supportive tool rather than a controlling authority. They emphasize the importance of “human-centered design,” where algorithms are calibrated to respect user preferences and allow for opt-out mechanisms. Adaptive AI systems could learn individual habits and adjust their behavior accordingly, offering suggestions rather than commands. Moreover, ethical frameworks should be embedded into the development process, ensuring that AI respects the principle of informed consent and preserves the user’s right to self-determination.
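The distinction the authors draw between suggesting and commanding can be made concrete. Everything in this sketch is hypothetical, including the class name, the messages, and the idea of logging skipped doses for later human review instead of enforcing them; it is one possible reading of "human-centered design," not the paper's specification.

```python
from datetime import datetime
from typing import Optional

class MedicationAssistant:
    """Offers reminders but leaves the final decision with the user."""

    def __init__(self, reminders_enabled: bool = True):
        self.reminders_enabled = reminders_enabled  # user-controlled opt-out
        self.skipped = []  # skips are logged for review, never blocked

    def remind(self, dose: str) -> Optional[str]:
        if not self.reminders_enabled:
            return None  # the user has opted out entirely
        return f"It's time for {dose}. Take it now, or reply 'skip'."

    def record_skip(self, dose: str, reason: str = "") -> str:
        """Respect the refusal; surface it to a human caregiver later."""
        self.skipped.append((datetime.now(), dose, reason))
        return f"Okay, skipping {dose}. A caregiver can review this later."
```

The design choice is in what the code does not do: there is no branch that withholds the option to skip, and the opt-out flag silences the system outright rather than merely lowering its volume.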
The second ethical challenge lies in the realm of emotional connection. In Chinese culture, filial piety—xiao—is a foundational value, emphasizing intergenerational responsibility and familial love. The introduction of AI companions, no matter how advanced, risks weakening these bonds by shifting caregiving responsibilities from family members to machines. While robots like Paro may simulate affection, they lack genuine empathy, intentionality, and moral agency. Their responses are pre-scripted, their interactions repetitive, and their understanding of human emotion fundamentally limited.
Some researchers suggest that AI can evolve to become emotionally intelligent by analyzing speech patterns, facial expressions, and behavioral cues. However, the authors caution against overestimating this capability. Human emotions are complex, nonlinear, and often ambiguous; even people themselves struggle to articulate their own feelings. Reducing emotional experience to quantifiable data risks oversimplification and misinterpretation. More troubling is the possibility that reliance on artificial companions could lead to social withdrawal, reducing opportunities for authentic human interaction.
Rather than replacing human caregivers, AI should be positioned as a supplement—one that frees up time for family members to engage in meaningful activities rather than routine chores. For instance, if a robot handles medication management or housekeeping, adult children may have more energy to visit, converse, or share meals with their parents. The goal should not be to automate care but to rehumanize it by alleviating burdens that detract from emotional connection.
Privacy constitutes the third major concern. AI systems depend on vast amounts of personal data to function effectively. Sensors monitor sleeping patterns, microphones capture conversations, cameras record movements, and wearables track physiological states. While this data enables predictive analytics and early warnings, it also creates significant vulnerabilities. Unauthorized access, data breaches, or misuse by third parties could expose sensitive information about an individual’s health, habits, and relationships.
The researchers point to China’s Civil Code, which came into effect in 2021 and explicitly protects an individual’s right to privacy. However, enforcement remains inconsistent, especially in the rapidly evolving field of AI. Unlike traditional healthcare settings governed by strict confidentiality protocols, consumer-grade smart devices often lack robust encryption, transparent data policies, or user control over data sharing. Once collected, personal data may be used for purposes beyond caregiving—such as targeted advertising or insurance risk assessment—without explicit consent.
To address these risks, the authors recommend establishing clear regulatory standards for data collection, storage, and usage in AI-driven elder care. Developers should implement end-to-end encryption, anonymize data wherever possible, and provide users with granular control over what information is shared and with whom. Furthermore, older adults must be educated about digital literacy and privacy risks, empowering them to make informed decisions about the technologies they adopt.
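Two of these recommendations, pseudonymizing identifiers and giving users field-by-field control over sharing, can be sketched briefly. The consent table, field names, and recipients below are invented for illustration; in practice the table would be configured by the user, and a salted hash is pseudonymization rather than full anonymization, since the operator holding the salt could still re-link records.

```python
import hashlib

# Hypothetical per-field consent table: which recipients may see which fields.
CONSENT = {
    "heart_rate": {"family", "clinician"},
    "location": {"family"},
    "conversations": set(),  # user opted out of sharing audio-derived data
}

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def share(record: dict, recipient: str, salt: str = "device-local-salt") -> dict:
    """Release only the fields the user consented to share with this recipient."""
    allowed = {k: v for k, v in record.items()
               if recipient in CONSENT.get(k, set())}
    if allowed:  # attach a pseudonymous subject ID instead of the real one
        allowed["subject"] = pseudonymize(record.get("user_id", ""), salt)
    return allowed
```

Under this scheme a recipient absent from every consent set, such as an advertiser, receives an empty record by construction rather than by policy promise.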
The final and perhaps most systemic issue is equity. AI-powered elder care is not evenly distributed across regions or socioeconomic groups. In China, smart home technologies and AI services are concentrated in affluent urban centers like Beijing, Shanghai, and Shenzhen. Rural areas and smaller cities, particularly in central and western provinces, face significant barriers: limited broadband access, underfunded healthcare systems, and a shortage of trained personnel. As a result, many elderly individuals—especially those with lower incomes or limited digital literacy—are excluded from the benefits of AI.
This digital divide exacerbates existing inequalities and risks creating a two-tiered system of care: one for those who can afford smart homes and robotic assistants, and another for those reliant on overburdened public services or informal family support. The authors stress that equitable access must be a guiding principle in policy design. Governments should invest in infrastructure, subsidize AI devices for low-income seniors, and incentivize private sector participation in underserved regions.
Public-private partnerships could play a crucial role. High-tech companies with resources and expertise could collaborate with local governments and vocational schools to build smart care ecosystems and train a new generation of AI-savvy caregivers. Tax incentives and grants might encourage firms to expand into less profitable but socially vital markets. Additionally, open-source platforms could reduce development costs and promote interoperability between different systems, preventing vendor lock-in and fostering innovation.
Looking ahead, the successful integration of AI into home-based elder care will require more than technological prowess—it demands a holistic, ethically informed strategy. Policymakers must balance innovation with regulation, ensuring that AI enhances rather than replaces human connection. Developers must prioritize transparency, accountability, and inclusivity in their designs. Families must remain actively engaged, recognizing that no machine can substitute for love, presence, and shared history.
The study concludes with a call for multi-stakeholder collaboration. Only through coordinated efforts among government agencies, academic institutions, technology firms, healthcare providers, and civil society can AI be harnessed in ways that truly serve the well-being of older adults. The ultimate measure of success should not be how efficiently a robot delivers a pill, but how meaningfully a senior feels seen, heard, and valued in their final decades.
As societies age, the question is no longer whether AI will play a role in elder care—but how that role will be defined. Will technology become an instrument of control, surveillance, and cost-cutting? Or will it be guided by compassion, dignity, and justice? The answer will shape not only the experience of aging but the moral character of the societies that embrace these tools.
The transition to AI-assisted home care is inevitable, but its trajectory is not predetermined. By grounding innovation in ethical reflection and social responsibility, nations can ensure that the golden years are not just safer or more efficient, but richer in connection, purpose, and respect. The future of elder care must be intelligent—not just in code, but in conscience.
Source: Zhao Nan, Liu Shuangling, and Sun Xiangna (Heilongjiang University of Chinese Medicine), Chinese Medical Ethics, DOI: 10.12026/j.issn.1001-8565.2021.12.13