Research on the Ethical Risk and Governance System of AI

AI Ethics and Governance in the Digital Age: A Global Call for Action

As artificial intelligence (AI) continues to transform industries, reshape economies, and redefine human interaction, its rapid advancement has sparked a growing global conversation about the ethical risks and governance frameworks necessary to ensure its responsible development. A comprehensive study published in Information and Communications Technology and Policy by Wang Yifei and Han Kaifeng of the Policy and Economics Research Institute at the China Academy of Information and Communication Technology (CAICT) offers a timely and insightful analysis of the challenges and opportunities facing AI governance in the digital economy era.

The paper, titled Research on the Ethical Risk and Governance System of Artificial Intelligence in the Digital Economy Era, underscores the dual nature of AI: a powerful driver of innovation and efficiency, yet simultaneously a source of profound societal, legal, and ethical dilemmas. With machine learning algorithms becoming more sophisticated, data accumulation reaching unprecedented scales, and computational power expanding exponentially, the world is witnessing a new wave of AI development. This wave, while promising immense benefits across healthcare, transportation, manufacturing, and public services, also introduces complex risks that traditional regulatory models are ill-equipped to handle.

Wang and Han begin by outlining the multifaceted challenges posed by AI, categorizing them into three primary domains: safety, social equity, and legal accountability. Safety risks, the first category, stem from the potential misuse of AI technologies to compromise individual privacy and digital integrity. One of the most alarming examples is the rise of deepfake technology, which enables the realistic manipulation of images and audiovisual content. Such tools, when deployed maliciously, can generate convincing fake news, impersonate public figures, or fabricate evidence, thereby undermining trust in digital media and threatening democratic processes. The authors cite instances where commercial entities have used facial recognition systems without consumer consent—such as real estate sales offices covertly identifying walk-in visitors to avoid paying agent commissions—highlighting how data-driven AI applications can infringe upon personal rights and erode public trust.

Beyond privacy concerns, AI systems often rely on vast datasets that reflect historical and societal biases. When these datasets are used to train algorithms, the resulting models may perpetuate or even amplify existing inequalities. For instance, hiring algorithms trained on biased historical employment data may systematically disadvantage certain demographic groups, leading to discriminatory outcomes in recruitment, lending, or law enforcement. The opacity of many AI models—commonly referred to as the “black box” problem—further complicates the issue, as it becomes difficult to trace how decisions are made or to correct erroneous or unfair judgments.
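To make this concrete, consider how such bias might be surfaced in practice. The sketch below applies a simple disparate-impact check to hypothetical hiring outcomes; the groups, numbers, and the four-fifths threshold are illustrative assumptions rather than material from the paper, which names the problem but not a specific audit method.

```python
# Minimal sketch: auditing hypothetical hiring decisions for disparate impact.
# The groups, numbers, and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not data or methods from the paper.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of applicants in a group who received a positive decision."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes: 1 = hired, 0 = rejected.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.7
group_b = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0]   # selection rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common four-fifths rule of thumb
    print("Potential adverse impact: review the model and training data.")
```

A check of this kind is only a first signal: a ratio near parity does not rule out subtler forms of unfairness that require richer metrics and domain review.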

The second major challenge lies in the social implications of AI-driven automation. As intelligent systems increasingly perform tasks once carried out by humans, there is a growing fear of widespread job displacement, particularly in sectors involving routine or repetitive work. Citing a McKinsey Global Institute report, Wang and Han note that as many as 800 million workers worldwide could be displaced by automation by 2030. While new roles will emerge, the transition may not be seamless, potentially exacerbating income inequality and creating a class of individuals marginalized by technological change. This shift could deepen the digital divide, where those with access to education, digital literacy, and technical skills benefit disproportionately, while others are left behind. The authors emphasize that AI’s impact is not merely economic but deeply social, influencing how wealth, opportunity, and power are distributed in society.

The third dimension of risk involves the legal and ethical ambiguities introduced by AI’s autonomous decision-making capabilities. Traditional legal frameworks are built on the premise of human agency and responsibility. However, when an AI system—such as a self-driving car or an autonomous medical diagnostic tool—makes a decision that results in harm, determining liability becomes highly complex. Is the manufacturer responsible? The software developer? The data provider? Or the end user? The lack of clear accountability mechanisms poses a significant challenge to existing legal systems and raises fundamental questions about justice, redress, and due process in an age of intelligent machines.

These challenges are compounded by the inherent characteristics of AI technology. Unlike conventional digital governance issues, where the boundaries between actors and actions are relatively clear, AI operates in a domain marked by algorithmic opacity, data dependency, and cross-sectoral diffusion. The “black box” nature of deep learning models means that even their creators may not fully understand how inputs are transformed into outputs. This lack of interpretability undermines transparency and makes it difficult to audit systems for fairness or safety. Moreover, AI’s ability to learn and adapt over time introduces unpredictability, as systems may evolve in ways not anticipated by their designers.
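One generic way to probe such a black box, offered here purely as an illustration (the paper does not prescribe a technique), is to train an interpretable surrogate model to mimic the opaque model's predictions and then inspect the surrogate instead:

```python
# Minimal sketch: approximating an opaque model with an interpretable surrogate
# (a "global surrogate"). All models and data here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# The "black box": accurate, but its internal logic is hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable decision rules
```

High fidelity means the surrogate's readable rules give a reasonable summary of the black box's global behavior, though individual decisions can still diverge from it.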

From an external governance perspective, traditional regulatory approaches often lag behind technological innovation. Policies based on static rules struggle to keep pace with the dynamic and iterative nature of AI development. Furthermore, many existing governance structures were not designed to address the interdisciplinary and transnational scope of AI. As AI applications span sectors—from finance to healthcare to defense—the need for coordinated, cross-domain regulation becomes evident. Yet, current institutional arrangements often operate in silos, lacking the agility and integration required for effective oversight.

In response to these challenges, governments, international organizations, and private institutions around the world have begun to develop AI governance frameworks. The United States, for example, has integrated ethical considerations into its national AI strategy, emphasizing the need to prepare the workforce for technological disruption through lifelong learning and skills training. The AI in Government Act, first introduced in 2018, and the establishment of the National Security Commission on Artificial Intelligence that same year reflect a strategic focus on both economic competitiveness and national security implications.

The European Union has taken a more comprehensive and values-driven approach. With the release of the Ethics Guidelines for Trustworthy AI in 2019, the EU established seven key requirements for AI systems: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability. These principles are not merely aspirational; they are being translated into concrete regulatory actions. The 2020 EU Artificial Intelligence White Paper proposed a risk-based regulatory framework that would impose stricter requirements on high-risk AI applications, such as those used in healthcare, transportation, and law enforcement. The EU’s approach reflects a commitment to embedding ethical values into the fabric of technological development, ensuring that AI serves the public good.
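As a toy illustration of how such a risk-based framework might operate, the sketch below follows the White Paper's broad idea of combining a sensitive sector with a consequential use to attach obligations to an application. The sector list and obligations are simplified assumptions, not the White Paper's actual criteria.

```python
# Minimal sketch of risk-based classification in the spirit of the 2020 EU
# White Paper. Sector lists and obligations are illustrative assumptions.

HIGH_RISK_SECTORS = {"healthcare", "transportation", "law_enforcement"}

def required_obligations(sector: str, affects_rights_or_safety: bool) -> list[str]:
    """Return the (illustrative) obligations attached to an AI application."""
    if sector in HIGH_RISK_SECTORS and affects_rights_or_safety:
        return [
            "pre-market conformity assessment",
            "training-data quality documentation",
            "human oversight mechanism",
            "record-keeping for audits",
        ]
    return ["voluntary code of conduct"]  # baseline for lower-risk uses

print(required_obligations("healthcare", affects_rights_or_safety=True))
print(required_obligations("retail", affects_rights_or_safety=False))
```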

Other nations have adopted specialized strategies tailored to their unique contexts. France, for instance, has engaged in extensive public deliberation on AI ethics, organizing more than 45 public debates involving thousands of citizens. This participatory approach led to the publication of a report titled How Can Humans Keep the Upper Hand? The Ethical Matters Raised by Algorithms and Artificial Intelligence, which introduced the principles of “loyalty” and “continuous vigilance.” The former demands that AI systems act in the best interest of users, while the latter calls for ongoing monitoring by all stakeholders to anticipate unintended consequences.

Germany and South Korea have focused on the ethical dimensions of autonomous vehicles. In 2017, Germany became the first country to issue Ethical Guidelines for Automated and Connected Driving, which established that in unavoidable accident scenarios, systems must prioritize human life over property and must not discriminate based on age, gender, or other personal characteristics. South Korea followed with its Safety Operation Guidelines for Self-Driving Vehicles, which includes provisions on cybersecurity, production standards, and ethical decision-making.

Meanwhile, international organizations are playing a crucial role in harmonizing global standards. The G20 has endorsed human-centered AI principles, while the OECD has developed guidelines for trustworthy AI that emphasize transparency, fairness, and accountability. Standardization bodies such as ISO/IEC and IEEE are working on technical specifications for AI systems, aiming to ensure interoperability, safety, and reliability. These efforts are critical in preventing a fragmented regulatory landscape that could hinder innovation and create compliance burdens for multinational companies.

In the private sector, leading technology firms—including Google, Microsoft, Baidu, Megvii, and Tencent—have established internal AI ethics boards and published corporate principles to guide development. These initiatives often include commitments to fairness, privacy, and safety, as well as investments in technical tools to mitigate risks. For example, Huawei has implemented differential privacy and data filtering techniques to protect user information during AI model training. Academic institutions are also contributing to the discourse: the Alan Turing Institute in the UK has published practical frameworks for responsible AI, while Tsinghua University established the Institute for AI International Governance to address global policy challenges.
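The paper notes Huawei's use of differential privacy without describing the implementation. As a generic illustration of the underlying idea, the following minimal sketch applies the standard Laplace mechanism to a single count query; the records and epsilon values are hypothetical.

```python
# Minimal sketch of the Laplace mechanism, a standard differential-privacy
# primitive, applied to one count query. Records and epsilons are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)

def private_count(values: list[int], epsilon: float) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon."""
    true_count = float(sum(values))
    sensitivity = 1.0  # adding/removing one record changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

opted_in = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical user records
for epsilon in (0.1, 1.0, 10.0):           # smaller epsilon = stronger privacy
    print(f"epsilon={epsilon:>4}: noisy count = {private_count(opted_in, epsilon):.2f}")
```

Smaller epsilon values add more noise and thus stronger privacy guarantees, at the cost of accuracy in the released statistic.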

Against this global backdrop, China has been steadily advancing its own AI governance agenda. The State Council’s New Generation Artificial Intelligence Development Plan outlines a “three-step” strategy for AI governance, aiming to establish a robust regulatory framework by 2030. In 2019, China released the New Generation Artificial Intelligence Governance Principles, which articulate eight core tenets, including harmony between humans and machines, fairness and justice, inclusiveness, and shared responsibility. These principles emphasize the need for AI to serve human well-being and social progress.

Domestically, China has taken concrete steps to operationalize these guidelines. The China Artificial Intelligence Industry Alliance issued an Industry Self-Discipline Convention to promote ethical conduct among developers and enterprises. Legislative efforts are also underway, with draft laws on data security and personal information protection being reviewed by the National People’s Congress. Regulatory focus has expanded to high-risk domains such as autonomous driving, smart healthcare, and biometric identification, where risk assessment and oversight mechanisms are being developed.

Despite these advancements, Wang and Han argue that AI governance remains a complex, evolving challenge that requires a systemic and adaptive approach. They propose a comprehensive governance framework built on three pillars: governance models, governance methods, and governance evaluation.

The first pillar emphasizes a multi-stakeholder, co-governance model. Rather than relying solely on top-down regulation, effective AI governance must involve governments, industry, academia, and the public in a collaborative ecosystem. Governments play a leadership role in setting strategic direction and enacting laws. Industry associations contribute by developing technical standards and best practices. Research institutions provide foundational knowledge and ethical guidance. Tech companies implement self-regulation and innovate in safety tools. Citizens, through public consultation and feedback mechanisms, ensure that governance remains accountable and socially grounded.

The second pillar calls for a diversified set of governance tools. Ethical guidelines serve as early warning systems, helping to shape norms before legal frameworks are established. Technical standards ensure interoperability and safety across systems. Technological solutions—such as explainable AI, bias detection algorithms, and privacy-preserving computation—offer practical means to embed ethical principles into code. Legal and regulatory measures provide enforceable rules, but must be designed with flexibility to accommodate rapid innovation. A balanced approach that combines soft and hard governance instruments is essential.

The third pillar focuses on continuous evaluation and feedback. Given the fast pace of AI development, governance mechanisms must be agile and responsive. Regular assessment of ethical principles, technical standards, and regulatory policies allows for timely adjustments based on real-world performance and societal feedback. Metrics such as transparency, fairness, robustness, and public trust can be used to evaluate the effectiveness of governance initiatives. This iterative process ensures that the system evolves in tandem with technological progress.
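Of the metrics listed, robustness is perhaps the most directly measurable. The sketch below shows one simple protocol, comparing a model's accuracy on clean versus perturbed inputs; the model, data, and noise level are illustrative assumptions, since the paper names the metrics but not a measurement procedure.

```python
# Minimal sketch: a robustness check measuring accuracy degradation under
# input perturbation. Model, data, and noise level are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(seed=0)
clean_acc = accuracy_score(y, model.predict(X))
noisy_acc = accuracy_score(y, model.predict(X + rng.normal(0, 0.5, X.shape)))

print(f"clean accuracy:     {clean_acc:.3f}")
print(f"perturbed accuracy: {noisy_acc:.3f}")
print(f"robustness gap:     {clean_acc - noisy_acc:.3f}")  # smaller is better
```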

Looking ahead, the authors recommend that China strengthen its policy support, deepen research on forward-looking ethical issues, and enhance international cooperation. They advocate for targeted regulations in high-risk AI applications, greater investment in explainable and trustworthy AI research, and active participation in global standard-setting bodies. By fostering inclusive dialogue and sharing best practices, China can contribute to the development of a globally coherent and ethically sound AI governance regime.

In conclusion, as AI becomes increasingly embedded in the fabric of daily life, the need for thoughtful, inclusive, and adaptive governance has never been more urgent. The study by Wang Yifei and Han Kaifeng provides a compelling roadmap for navigating the ethical complexities of the digital age. It reminds us that technology should serve humanity, not the other way around. Through collaborative effort, informed policymaking, and a commitment to shared values, societies can harness the transformative potential of AI while safeguarding the rights, dignity, and well-being of all.

Wang Yifei and Han Kaifeng (Policy and Economics Research Institute, China Academy of Information and Communication Technology). "Research on the Ethical Risk and Governance System of Artificial Intelligence in the Digital Economy Era." Information and Communications Technology and Policy, 2021, 47(2): 32–36. doi: 10.12267/j.issn.2096-5931.2021.02.005