AI in Primary Care: A Stakeholder Blueprint for Ethical and Effective Deployment

The integration of artificial intelligence into the foundational layer of healthcare delivery is no longer a speculative future—it is an unfolding reality with profound implications. Across clinics and community health centers, AI-powered tools are being trialed to assist overburdened clinicians, standardize care, and bridge the chasm of medical resource inequality that separates urban hubs from rural outposts. Yet, beneath the gleaming promise of algorithmic efficiency lies a complex web of human interests, institutional mandates, and systemic challenges. A new analysis spearheaded by Chunji Lu, Minjiang Guo, Fangyuan Zhang, Jianli Zheng, and Yazi Li from the Centre for Health and Medical Research at the Institute of Medical Information, CAMS and PUMC, offers a meticulously detailed roadmap. Published in Acta Academiae Medicinae Sinicae, their work moves beyond technical feasibility to dissect the political economy of AI in primary care, identifying who stands to gain, who bears the risk, and what structural reforms are non-negotiable for success.

The study’s core insight is disarmingly simple yet critically overlooked: technology does not deploy itself. Its adoption, efficacy, and sustainability are entirely contingent on the alignment—or misalignment—of stakeholders. Using the Mitchell Scoring Method and Clarkson Classification, the researchers systematically evaluated dozens of entities, from government ministries to individual patients, to pinpoint the primary actors whose cooperation is essential. The verdict is clear: the triumvirate of Health Administrators, Local Clinics, and AI Vendors, joined by Finance and Market Supervision authorities, Patients, and Educational bodies, form the indispensable coalition. Each brings not just resources, but distinct, often competing, agendas to the table. Understanding these is the first step toward building not just a functional system, but a fair and resilient one.
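The Mitchell method scores each stakeholder on three attributes (power, legitimacy, urgency), while the Clarkson classification splits stakeholders into primary and secondary tiers. The sketch below illustrates that general logic in Python; the attribute scores, thresholds, and tier labels are hypothetical illustrations, not the study's actual data or scoring rules.

```python
# Illustrative sketch of Mitchell-style stakeholder scoring combined with a
# Clarkson-style primary/secondary split. All numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: float       # ability to influence deployment (0-1)
    legitimacy: float  # recognized stake in the outcome (0-1)
    urgency: float     # time-criticality of their claims (0-1)
    primary: bool      # Clarkson: essential to the system's survival

    @property
    def salience(self) -> float:
        """Overall salience: the more attributes held strongly, the higher."""
        return self.power + self.legitimacy + self.urgency

def classify(s: Stakeholder, threshold: float = 0.5) -> str:
    """Mitchell typology: count attributes the stakeholder clearly holds."""
    held = sum(a > threshold for a in (s.power, s.legitimacy, s.urgency))
    return {3: "definitive", 2: "expectant", 1: "latent"}.get(held, "non-salient")

stakeholders = [
    Stakeholder("Health Administrators", 0.9, 0.9, 0.8, True),
    Stakeholder("Local Clinics",         0.6, 0.9, 0.9, True),
    Stakeholder("AI Vendors",            0.7, 0.6, 0.7, True),
    Stakeholder("General Media",         0.4, 0.3, 0.2, False),
]

for s in sorted(stakeholders, key=lambda s: s.salience, reverse=True):
    tier = "primary" if s.primary else "secondary"
    print(f"{s.name}: {classify(s)} ({tier}, salience={s.salience:.1f})")
```

Under scores like these, the coalition the authors identify (administrators, clinics, vendors) would rank as definitive, primary stakeholders, while peripheral actors fall into lower-salience tiers.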

For health administrators, AI is a strategic lever. Their mandate is to improve population health, ensure equitable access, and demonstrate tangible progress to the public and political leadership. AI promises to elevate the quality of care in under-resourced settings, standardize the often-variable practice of family doctors, and shrink the glaring disparities between city and countryside. It’s a tool for achieving policy goals like “Healthy China 2030.” However, their enthusiasm is tempered by formidable roadblocks. At the planning level, initiatives are fragmented, lacking a unified national architecture. Pilot programs sprout like weeds, disconnected and uncoordinated, making it impossible to scale successful models. At the implementation level, the technology itself often falls short. Many AI diagnostic tools are designed for single diseases or specific imaging tasks, ill-suited for the broad-spectrum, holistic nature of primary care. A village doctor needs a generalist assistant, not a specialist for one organ system. Furthermore, the lifeblood of AI—data—is poisoned at the source. Clinical records are a chaotic mix of unstructured notes, inconsistent coding, and siloed systems. Crucially, the data is reactive, capturing only moments of illness, not the continuous stream of wellness data from wearables or routine check-ups that could enable true preventative care. Add to this the murky legal waters surrounding data ownership and patient privacy, and administrators find themselves navigating a minefield with inadequate maps.

The Finance Department, while not the primary driver, holds the purse strings and thus wields immense influence. Their interest is two-fold: to support the health sector’s modernization and to ensure that every dollar spent delivers measurable value and avoids waste. They are keenly aware that AI can be a force multiplier for public health investment. Yet, their caution is palpable. While some regions have allocated funds for AI pilots, the broader ecosystem of incentives and performance metrics is underdeveloped. Why should a cash-strapped rural clinic invest in expensive new hardware, or an AI vendor pour resources into a market with no clear path to profitability? The answer, from the finance perspective, is currently unclear. The initial capital outlay for infrastructure in remote areas with poor digital connectivity is substantial, creating a significant fiscal burden and raising questions about long-term return on investment. Without a robust business case and clear cost-benefit analyses, financial support remains hesitant and piecemeal.

Market Supervision and Drug Administration authorities face perhaps the most technically daunting challenge: regulating the unregulatable. Their role is to ensure the safety, efficacy, and ethical marketing of medical devices, including AI software. But traditional regulatory frameworks buckle under the weight of AI’s unique characteristics. How does one “approve” an algorithm that learns and evolves after deployment? Current standards for medical devices are static, designed for hardware with fixed functions, not dynamic software whose “black box” decision-making processes are often inscrutable even to their creators. This lack of transparency makes it nearly impossible to validate performance or assign liability in the event of an error. Compounding this is the intellectual property quagmire. Who owns the “invention” when an AI system, trained on millions of public and private medical records, generates a novel diagnostic insight? Existing intellectual property laws, written with human inventors and authors in mind, are woefully inadequate. The result is a Wild West market where subpar products can thrive, potentially driving out more rigorous, but costlier, competitors—a classic case of “bad money driving out good.”

The linchpin of the entire system, the Primary Healthcare Institutions (PHIs)—community health centers and village clinics—are where the rubber meets the road. For them, AI is not an abstract policy goal but a potential lifeline. Their demands are practical and urgent: to enhance the diagnostic capabilities of their often-overworked and under-trained staff, to reduce the risk of medical errors—errors that can have devastating consequences in isolated or resource-limited settings—and ultimately, to attract more patients and generate sufficient revenue to remain operational and sustainable. AI promises to be a tireless, knowledgeable assistant, compensating for gaps in continuous medical education. Yet, the current reality on the ground is frequently disillusioning. The financial burden of acquiring and maintaining the necessary hardware and software is a major deterrent for institutions operating on razor-thin margins. More insidiously, poorly designed AI tools can actually increase workload. An unstable system that crashes, a user interface that is overly complex for a doctor with limited tech literacy, or a knowledge base that misses common local diseases forces clinicians to double-check every AI suggestion, turning an assistant into a hindrance. Integration is another nightmare; if the AI cannot seamlessly talk to the clinic’s existing electronic health record or lab system, it operates in a vacuum, potentially missing critical patient history and leading to dangerous misdiagnoses. Finally, a lack of comprehensive training leaves many clinicians skeptical or unable to use the tools effectively, rendering expensive investments useless.

At the heart of the entire endeavor are the patients—the ultimate beneficiaries—whose trust and cooperation are paramount. Their needs are fundamentally human: to receive care as competent and compassionate as that available in top-tier urban hospitals, to spend less time waiting and more time healing, and to avoid unnecessary, costly tests and medications that add burden without benefit. AI, in theory, can deliver this by providing their local doctor with expert-level decision support. But theory collides with perception and practicality. Many patients view the use of AI as a sign of their doctor’s inadequacy, a crutch for someone who doesn’t know enough. Overcoming this stigma requires a massive public education campaign. Others, particularly the elderly, may struggle with the digital interfaces that often accompany AI-driven care, such as apps for follow-up or remote monitoring. The most pervasive fear, however, is privacy. The very data that makes AI powerful—detailed personal health records, lifestyle information, genetic predispositions—is also the data most vulnerable to breach and misuse. Patients are rightly wary of becoming data points in a corporate or governmental database. Technologically, their disappointment stems from unmet promises. AI systems often fail to establish seamless referral pathways to specialists or cannot solve the fundamental problem of drug shortages in rural pharmacies, leaving patients frustrated despite the high-tech facade.

For the AI Vendors—the companies like iFlytek, Ping An Technology, and Baidu developing these tools—the calculus is a blend of profit, innovation, and reputation. Their interests and objectives are multifaceted: to generate sustainable revenue and secure favorable tax and regulatory policies; to retain ownership of the valuable intellectual property they develop; to forge long-term, trust-based partnerships with healthcare providers; and to leverage the primary care sector as a strategic entry point for building a comprehensive, integrated healthcare ecosystem. Many also harbor a genuine desire to fulfill a social responsibility and enhance their brand. Yet, they operate in a landscape fraught with uncertainty. The core technical challenge is data scarcity. Most AI models rely on “supervised learning,” which requires vast, high-quality, labeled datasets to achieve accuracy. Such datasets are precisely what the fragmented, privacy-conscious healthcare system lacks, leading to algorithms that are under-trained and prone to error. The absence of clear, government-issued guidelines on what features a primary care AI should have leaves vendors guessing at market needs. On the business side, the path to profitability is murky. Most companies are still in the exploratory, loss-leading phase, unsure how to monetize their services sustainably. The lack of standardized evaluation metrics means the market is flooded with products of wildly varying quality, making it hard for the best to rise to the top. And the unresolved IP issues create a chilling effect on investment and innovation, as companies cannot be sure they will reap the rewards of their R&D.

The research team doesn’t just diagnose the problems; they prescribe a comprehensive treatment plan. Their countermeasure research is a call for systemic, multi-stakeholder reform. First and foremost, they advocate for a robust institutional framework, one that establishes clear governance, aligns incentives across sectors, and provides the legal, financial, and operational scaffolding necessary to support the responsible and sustainable deployment of AI in primary care. This means moving beyond ad-hoc pilots to a coordinated national strategy, led by health authorities but actively involving finance, market supervision, and education departments. Policy must be designed to align the incentives of all key players. This includes direct financial subsidies and tax breaks for AI vendors developing core technologies, and for clinics adopting proven systems. Crucially, it also means establishing ironclad data governance. The enactment of comprehensive data security and personal information protection laws is non-negotiable. These laws must clearly define who owns health data, who can access it, and under what conditions, establishing accountability and redress mechanisms for breaches. Equally important is resolving the IP dilemma through legislation that clarifies ownership in cases of AI-assisted or AI-generated medical insights.

Another pillar of their strategy is infrastructure. They propose the construction of a national medical data cloud. This isn’t just a technical project; it’s a political and logistical one. It requires standardizing data formats across thousands of disparate systems, from electronic health records to population databases, and crucially, incorporating data from wearables and health apps. To ensure quality, they suggest forming expert medical teams to manually annotate and validate the data, creating a gold-standard, multi-disease terminology library. The use of blockchain technology is proposed to ensure data integrity and secure, auditable sharing, fostering trust among all participants.
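The auditable-sharing idea above rests on a simple property of blockchain-style designs: each record's hash incorporates the hash of the record before it, so any retroactive edit breaks the chain. A minimal sketch, assuming nothing about the study's actual architecture (record fields and names here are invented for illustration):

```python
# Minimal tamper-evident audit log in the spirit of the blockchain-based
# integrity checks proposed for shared medical data. Hypothetical fields.
import hashlib
import json

def _hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditChain:
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((record, _hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev = "genesis"
        for record, stored in self.entries:
            if _hash(record, prev) != stored:
                return False
            prev = stored
        return True

chain = AuditChain()
chain.append({"actor": "clinic-042", "action": "read", "patient": "anon-17"})
chain.append({"actor": "vendor-ai", "action": "train-access", "patient": "anon-17"})
assert chain.verify()

# Tampering with an earlier entry is detected on verification.
chain.entries[0][0]["action"] = "write"
assert not chain.verify()
```

A real deployment would distribute such a log across institutions so no single party can rewrite it, which is what makes the sharing auditable rather than merely logged.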

The third major recommendation calls for a fundamental philosophical shift in product design: from supply-side to demand-side. Instead of building what’s technologically possible, vendors must build what clinicians and patients actually need. This means prioritizing stability, simplicity, and integration over flashy, single-disease applications. For a village clinic, the interface must be intuitive, perhaps offering voice input that can handle local dialects. A “drug transfer” function could allow a doctor to electronically order a medication not in stock for direct delivery to the patient. The design must be “value-sensitive,” embedding ethical considerations like privacy and patient autonomy into the core architecture, not as an afterthought. To drive adoption, the researchers suggest launching national “AI in Primary Care Demonstration Projects,” which would mandate the deep integration of AI tools with existing hospital information systems (HIS, PACS, LIS) and regional health platforms, creating a seamless flow of patient information. Alongside this, intensive, ongoing training for medical staff and public awareness campaigns for patients are essential to build competence and trust.
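The "drug transfer" function described above reduces to a simple routing decision: fill locally when stock exists, otherwise forward the order for direct delivery. A hypothetical sketch of that flow (all function names, fields, and stock data are illustrative, not from the paper):

```python
# Hypothetical "drug transfer" routing: fill from local stock when possible,
# otherwise transfer the order to a regional pharmacy for direct delivery.
from dataclasses import dataclass

@dataclass
class Order:
    patient_id: str
    drug: str
    fulfilled_by: str  # "local" or "regional-delivery"

def place_order(patient_id: str, drug: str, local_stock: dict) -> Order:
    if local_stock.get(drug, 0) > 0:
        local_stock[drug] -= 1
        return Order(patient_id, drug, "local")
    # Out of stock: transfer the order for direct-to-patient delivery.
    return Order(patient_id, drug, "regional-delivery")

stock = {"metformin": 3, "amlodipine": 0}
assert place_order("anon-17", "metformin", stock).fulfilled_by == "local"
assert place_order("anon-17", "amlodipine", stock).fulfilled_by == "regional-delivery"
```

The point of the sketch is the demand-side framing: the feature starts from a village clinic's actual gap (empty shelves) rather than from what the technology can showcase.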

Finally, they call for the establishment of rigorous, adaptive standards and testing protocols for AI medical devices. This involves strengthening pre-market approval and post-market surveillance, learning from frameworks like the FDA’s digital health pre-certification program. Clear market-entry standards and clinical application guidelines are needed to prevent defective or unvalidated products from reaching clinics. They also propose the creation of specialized, third-party testing institutions that can evaluate AI products for compliance with these evolving standards, ensuring a level playing field and fostering innovation through clarity, not chaos.

This analysis, emerging from one of China’s premier medical research institutions, is a sobering yet hopeful blueprint. It acknowledges that the path to AI-augmented primary care is not a straight line paved with technological triumphs, but a winding road requiring careful negotiation, robust institutions, and above all, a human-centered approach. The future of equitable, high-quality healthcare depends not on the sophistication of the algorithms, but on the wisdom with which we manage the intricate dance of human interests that surrounds them.

By Chunji Lu, Minjiang Guo, Fangyuan Zhang, Jianli Zheng, and Yazi Li, Centre for Health and Medical Research, Institute of Medical Information, CAMS and PUMC, Beijing 100020, China. Published in Acta Academiae Medicinae Sinicae. DOI: 10.3881/j.issn.1000-503X.13118.