Ethical Review Framework for Medical AI Research Gains Momentum in China
Hangzhou, China — As artificial intelligence reshapes the global healthcare landscape, Chinese researchers are sounding the alarm on a critical gap: the ethical governance of medical AI. In a comprehensive analysis published in Chinese Medical Ethics, a team of scholars from Zhejiang Hospital has outlined a robust framework for the ethical review of medical artificial intelligence (AI) research, addressing the urgent need for standardized, actionable guidelines in a rapidly evolving technological domain.
The study, led by Xie Xiaoping, He Xiaobo, Zhang Lingxi, Li Wei, and Gao Yajie, comes at a time when AI is increasingly embedded in clinical workflows, from diagnostic imaging and drug discovery to personalized treatment planning and hospital administration. While the potential benefits are vast, the authors argue that existing regulatory and ethical oversight mechanisms in China are ill-equipped to handle the unique challenges posed by AI-driven medical research.
“Medical AI is not just another tool in the clinic—it represents a paradigm shift in how medicine is practiced and researched,” said He Xiaobo, corresponding author and ethics officer at Zhejiang Hospital. “Our current ethical review systems were designed for traditional clinical trials, not for algorithms trained on massive datasets that evolve over time. Without a tailored framework, we risk compromising patient privacy, perpetuating algorithmic bias, or deploying systems whose decision-making processes remain opaque.”
The research underscores a growing consensus among bioethicists, regulators, and technologists that the ethical scrutiny of AI must go beyond the foundational principles of beneficence, non-maleficence, autonomy, and justice. While these remain essential, the nature of AI—particularly its reliance on data, algorithms, and continuous learning—demands a new layer of ethical consideration.
One of the central arguments in the paper is that medical AI research is inherently interdisciplinary, involving collaborations between clinicians, data scientists, software engineers, and corporate entities. This complexity blurs traditional lines of accountability. “In a conventional clinical study, the principal investigator is clearly responsible,” explained Xie Xiaoping, lead author and ethics committee member. “But in an AI project, is the responsibility with the hospital that provided the data? The tech company that developed the algorithm? The clinician who uses the tool? Or the regulatory body that approved it? These questions must be answered before any research proceeds.”
The authors identify four core ethical review pillars that should guide institutional review boards (IRBs) and ethics committees: risk-benefit assessment, informed consent protocols, data security and privacy protection, and social equity and justice.
Redefining Risk in the Age of Algorithms
Traditional risk-benefit analysis in medical research focuses on physical harm, psychological distress, or breaches of confidentiality. In the context of AI, the definition of risk expands significantly. The paper emphasizes that risks now include algorithmic errors that could lead to misdiagnosis, data breaches that expose sensitive health information, and long-term societal harms such as systemic bias in healthcare delivery.
The team stresses that the risk-benefit ratio must be evaluated across the entire lifecycle of an AI system—from data collection and algorithm training to deployment and post-market surveillance. For instance, an AI model trained predominantly on data from urban populations may perform poorly when applied to rural or minority communities, leading to disparities in care. Such algorithmic bias, the authors note, is not merely a technical flaw but an ethical failure.
“Benefit is not just about diagnostic accuracy or efficiency gains,” said Zhang Lingxi, a co-author specializing in health informatics. “We must ask: Who benefits? Is the technology accessible to underserved populations? Does it reduce or exacerbate existing health inequities? These are central to ethical evaluation.”
The researchers advocate for a proactive approach to risk mitigation. This includes rigorous validation of training datasets for representativeness, transparency in algorithmic design, and continuous monitoring of real-world performance. They also call for the establishment of clear liability frameworks to address harm caused by AI errors, a domain where current legal systems remain ambiguous.
Informed Consent in the Era of Big Data
One of the most pressing challenges in medical AI research is the informed consent process. Traditional consent models assume a specific research purpose and a finite dataset. However, AI systems often rely on vast repositories of retrospective health data—medical images, electronic health records, genomic information—that were collected for clinical care, not research.
Obtaining individual consent for every potential future use of such data is impractical, if not impossible. Yet, bypassing consent undermines patient autonomy. The Zhejiang team proposes a tiered approach to consent that balances practicality with ethical rigor.
They distinguish between four models: specific consent, broad consent, opt-out mechanisms, and waiver of consent.
Specific consent is appropriate when the research purpose is well-defined and the data usage is limited. Broad consent, the authors argue, should be used when data may be used for multiple future studies within a defined scope—such as “cardiovascular disease research.” Crucially, broad consent must include clear information about data storage duration, potential uses, and the rights of participants to withdraw.
Opt-out mechanisms may be acceptable in certain contexts, such as using anonymized data from routine clinical care. However, the authors emphasize that opt-out systems must be transparent: patients must be informed of the program, understand their right to decline, and have a feasible way to exercise that right.
Waiver of consent is the most controversial and should only be permitted under strict conditions: when the research has significant societal value, poses no more than minimal risk, and obtaining consent would render the study impractical. Even then, the authors insist on robust oversight and data protection measures.
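The four-tier model described above can be sketched as a simple decision routine. This is an illustrative sketch only: the field names and the order of checks are assumptions for exposition, not criteria taken from the paper, and a real ethics committee would weigh these factors with judgment rather than a lookup.

```python
from dataclasses import dataclass
from enum import Enum


class ConsentModel(Enum):
    SPECIFIC = "specific consent"
    BROAD = "broad consent"
    OPT_OUT = "opt-out"
    WAIVER = "waiver of consent"


@dataclass
class StudyProfile:
    purpose_well_defined: bool   # single, clearly stated research question
    within_defined_scope: bool   # e.g. "cardiovascular disease research"
    data_anonymized: bool        # routine clinical data with identifiers removed
    minimal_risk: bool           # no more than minimal risk to participants
    consent_impractical: bool    # contacting participants is infeasible
    societal_value: bool         # significant expected public benefit


def select_consent_model(study: StudyProfile) -> ConsentModel:
    """Map a study profile to the narrowest consent model that fits it."""
    if study.purpose_well_defined:
        return ConsentModel.SPECIFIC
    if study.within_defined_scope:
        return ConsentModel.BROAD
    if study.data_anonymized:
        return ConsentModel.OPT_OUT
    # Waiver only under the paper's strict triple condition.
    if study.societal_value and study.minimal_risk and study.consent_impractical:
        return ConsentModel.WAIVER
    raise ValueError("No consent model applies; full committee review required")
```

Note how the routine falls through from the most restrictive model to the least: waiver is reachable only when all three of the paper's conditions hold at once.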
“Consent is not a one-time signature on a form,” said Li Wei, another co-author. “It’s an ongoing process of engagement and transparency. In the age of AI, we need dynamic consent models that allow individuals to update their preferences as new uses for their data emerge.”
Safeguarding Data in a Networked World
Data security and privacy are paramount in medical AI, where datasets often contain highly sensitive information. The paper highlights that AI systems are vulnerable not only to external cyberattacks but also to internal misuse, such as unauthorized access by employees or partners.
The authors recommend a comprehensive data governance strategy that includes technical, organizational, and legal safeguards. Key measures include data minimization—collecting only what is necessary—strong encryption, role-based access controls, and audit trails that log all data interactions.
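Two of those safeguards, role-based access control and an audit trail, can be combined so that every access attempt is logged whether or not it is granted. The role names and permissions below are hypothetical placeholders; a real deployment would load them from institutional policy and write to an append-only store.

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "clinician": {"read_record"},
    "data_scientist": {"read_deidentified"},
    "privacy_officer": {"read_record", "read_audit_log"},
}

audit_log = []  # in practice, an append-only, tamper-evident store


def access_data(user: str, role: str, action: str, record_id: str) -> bool:
    """Grant or deny an action based on role, logging every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "record": record_id,
        "allowed": allowed,
    })
    return allowed
```

Logging denied attempts as well as granted ones is the point: the internal-misuse risk the authors flag shows up first as a pattern of unusual requests in exactly this kind of trail.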
They also stress the importance of data anonymization and de-identification. However, they caution that true anonymization is increasingly difficult in the era of AI, where sophisticated re-identification techniques can reconstruct identities from seemingly anonymous data. “We can’t rely solely on anonymization,” said Gao Yajie, a librarian and data ethics specialist on the team. “We need layered protections: technical safeguards, strict access policies, and legal accountability.”
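One common way to quantify the re-identification risk the authors describe is k-anonymity: a released dataset is k-anonymous if every combination of quasi-identifiers (age band, postcode prefix, and so on) is shared by at least k records. The sketch below is a generic check, not a method from the paper, and k-anonymity alone does not guarantee privacy—hence the team's call for layered protections.

```python
from collections import Counter


def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size sharing one quasi-identifier combination.

    records: list of dicts, e.g. {"age": "30-39", "zip": "310"}
    quasi_identifiers: the keys an attacker could link to outside data
    """
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())
```

A result of 1 means at least one patient is unique on those attributes and therefore potentially re-identifiable by linkage, even if names and IDs were stripped.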
The study calls for the designation of a clear data steward or privacy officer within research teams, responsible for ensuring compliance with data protection regulations. It also recommends regular security audits and staff training in data ethics and cybersecurity.
Ensuring Equity in AI-Driven Healthcare
Perhaps the most profound ethical challenge is ensuring that AI promotes, rather than undermines, social justice in healthcare. The authors warn that without careful design and oversight, AI systems can perpetuate and even amplify existing biases.
Algorithmic bias can arise from multiple sources: unrepresentative training data, flawed model design, or socioeconomic factors embedded in the data. For example, an AI system trained on data from well-resourced hospitals may fail to recognize conditions prevalent in low-income communities. Similarly, voice recognition systems may struggle with non-standard dialects, disadvantaging certain patient groups.
To combat this, the researchers advocate for inclusive data collection practices, diverse development teams, and bias testing throughout the AI lifecycle. They also call for the integration of fairness metrics into model evaluation—assessing not just accuracy, but also performance across different demographic groups.
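The idea of assessing performance across demographic groups can be made concrete with a minimal sketch: compute accuracy per group and report the largest gap. This is one simple fairness metric among many (the paper does not prescribe a specific one), and the group labels below are illustrative.

```python
from collections import defaultdict


def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}


def accuracy_gap(y_true, y_pred, groups):
    """Largest pairwise accuracy difference across groups."""
    acc = per_group_accuracy(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())
```

A model can score well on overall accuracy while the gap is large—precisely the urban/rural scenario the authors warn about—which is why group-level reporting belongs in the evaluation protocol, not as an afterthought.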
“Equity must be built into the system from the ground up,” said He Xiaobo. “This means involving ethicists, social scientists, and community representatives in the design process. It means conducting impact assessments before deployment. And it means being willing to halt a project if it risks causing harm.”
The paper also addresses the issue of benefit-sharing. As AI systems generate value from patient data, should patients have a right to share in the profits? While current laws do not recognize data ownership in this way, the authors suggest that mechanisms for benefit-sharing—such as reinvesting profits into public health or providing free access to AI tools for underserved populations—should be explored.
Global Context and National Strategy
The Zhejiang team’s work is part of a broader global conversation on AI ethics. They reference international frameworks such as the Asilomar AI Principles and the IEEE’s Ethically Aligned Design, which emphasize transparency, accountability, and human well-being.
In China, the government has recognized the strategic importance of AI, elevating it to a national priority through initiatives like the New Generation Artificial Intelligence Development Plan. However, as the authors note, policy has outpaced regulation. While there are guidelines for AI in healthcare, they remain largely aspirational, lacking enforceable standards.
The researchers welcome recent developments, such as the Governance Principles for New Generation Artificial Intelligence (Developing Responsible AI) issued by the Ministry of Science and Technology in 2019, which emphasize principles like fairness, privacy, and responsibility. But they argue that these must be operationalized into concrete review criteria for ethics committees.
They also point to progress in medical device regulation. The National Medical Products Administration (NMPA) has approved several AI-based software products as Class II or III medical devices, requiring clinical validation and technical review. However, the ethical review of the underlying research remains inconsistent.
A Call for Institutional Reform
The authors conclude with a call for institutional reform. They recommend that ethics committees receive specialized training in AI ethics, develop standardized review checklists, and establish multidisciplinary review panels that include data scientists and legal experts.
They also advocate for greater public engagement. “Ethics is not just the domain of committees and regulators,” said Xie Xiaoping. “Patients, families, and the public must be part of the conversation. We need forums for dialogue, education, and feedback.”
The paper suggests that journals and funding agencies play a role by requiring detailed ethical disclosures in AI research submissions. This would create incentives for researchers to address ethical issues proactively.
Looking ahead, the team is working on a pilot program to implement their framework within Zhejiang Hospital’s ethics review process. They hope it will serve as a model for other institutions across China.
“AI has the potential to revolutionize medicine,” said He Xiaobo. “But technology alone is not enough. We must ensure that it is developed and used in ways that are ethical, equitable, and trustworthy. That is the foundation of sustainable innovation.”
As medical AI continues to advance, the work of Xie, He, Zhang, Li, and Gao offers a timely and rigorous roadmap for ethical governance. Their framework not only addresses immediate concerns but also lays the groundwork for a more responsible and inclusive future in AI-driven healthcare.
Xie Xiaoping, He Xiaobo, Zhang Lingxi, Li Wei, and Gao Yajie (Zhejiang Hospital). Chinese Medical Ethics. DOI: 10.12026/j.issn.1001-8565.2021.07.12