China AI ethics draft translated by Georgetown’s CSET

The Center for Security and Emerging Technology (CSET), a policy research organisation within Georgetown University’s Walsh School of Foreign Service, has published an English translation of China’s draft trial measures on ethics reviews for AI technology.

The translated draft says the measures would apply to AI-related scientific and technological activities conducted within China that may pose ethical risks to human health, human dignity, the ecological environment, public order, or sustainable development. It covers universities, research institutions, medical and health institutions, enterprises, and other organisations involved in AI research and development.

Under the draft, organisations with the necessary conditions would be expected to establish AI technology ethics committees, while others could commission specialised ethics service centres to conduct reviews. Review applications would need to include details on the AI activity, algorithms, data sources, data cleaning methods, testing and evaluation, expected applications, user groups, risk assessments, and risk prevention plans.

The review process would focus on fairness and impartiality; controllability and trustworthiness; transparency and explainability; accountability and traceability; and whether the activity has scientific and social value. Committees or service centres would generally have 30 days to approve, reject, or request revisions to an application.

Higher-risk activities would require expert reconsideration. The draft list includes human-computer fusion systems that strongly affect behaviour, psychological or emotional states, or health; AI models and systems able to mobilise public opinion or channel social consciousness; and highly autonomous automated decision-making systems used in safety or personal health-risk scenarios.

Approved AI activities would also be subject to follow-up reviews, generally at intervals of no more than 12 months, while activities requiring expert reconsideration would be subject to follow-up reviews at least every 6 months. Emergency ethics reviews would normally have to be completed within 72 hours.

CSET notes that China released a final trial version of the regulation in April 2026, which the organisation is also in the process of translating. The newly published draft translation therefore provides insight into the regulatory structure that preceded the final version, including committee-based ethics review, external service centres, expert reconsideration, and oversight roles for the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and other departments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China unveils Hanyuan-2 dual-core quantum computer breakthrough

China’s CAS Cold Atom Technology has unveiled Hanyuan-2, a 200-qubit neutral atom quantum computer that Chinese state media described as the world’s first dual-core neutral atomic quantum computer.

Developed in Wuhan by a company affiliated with the Chinese Academy of Sciences, Hanyuan-2 is presented as a shift from single-core to dual-core quantum architecture. The system uses neutral-atom array technology and combines 100 rubidium-85 and 100 rubidium-87 atoms to form a 200-qubit system.

The dual-core architecture allows the two processing units to operate independently in parallel or to work together in a main-and-support configuration. Developers say the approach could improve computational efficiency, support error correction and help address challenges linked to stability, qubit interference and scalability.

Unlike many quantum systems that require highly specialised operating environments, Hanyuan-2 is described as using a compact integrated design with a simplified laser-cooling setup and power consumption below 7 kilowatts. The design is intended to reduce operating complexity and make quantum computing systems easier to deploy.

The announcement highlights China’s continued investment in quantum computing hardware, particularly neutral atom systems. However, the system’s practical performance remains difficult to assess publicly because detailed benchmarks such as gate fidelity, coherence time and error rates have not yet been released in peer-reviewed or standardised form.

Why does it matter?

Hanyuan-2 points to growing experimentation with quantum computing architectures designed to improve scalability, stability and efficiency. Dual-core designs could support more flexible processing and error-correction approaches, but their real significance will depend on independently verifiable performance metrics. For now, the announcement is best understood as a signal of China’s ambition in quantum hardware rather than proof of practical superiority over other systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China launches AI ethics review pilot programme

China has launched a national pilot programme for AI ethics review and services, as authorities move to strengthen oversight of growing risks linked to advanced AI systems.

The initiative, announced by China’s Ministry of Industry and Information Technology, aims to establish practical mechanisms for AI ethics governance as concerns over algorithmic discrimination, emotional dependence, and broader societal risks continue to grow. Authorities said the initiative will initially operate in provincial-level regions hosting national AI industrial innovation pilot zones. It will focus on refining provincial AI ethics review rules, supporting the creation of ethics committees, and developing specialised ethics review and service centres. Chinese regulators also plan to transform the ethics review process into technical standards while improving mechanisms for reporting AI-related ethical concerns.

The Ministry of Industry and Information Technology has also called for the creation of a national AI ethics risk monitoring service network, along with training materials, ethics education courses, and early-warning systems to support pilot cities.

By embedding ethics reviews into AI development and deployment processes, China appears to be building a more institutionalised framework for managing the societal and technological risks associated with increasingly powerful AI systems.

Why does it matter?

China’s latest move signals a shift from broad AI governance principles towards operational enforcement mechanisms embedded directly into regional innovation ecosystems. The programme could influence how other governments approach AI oversight, particularly as global concerns grow over algorithmic bias, psychological manipulation, and accountability in frontier AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China outlines AI and energy integration plan

The Chinese National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote mutual development between AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

SHEIN faces Irish inquiry over EU data transfers to China

Ireland’s Data Protection Commission has opened an inquiry into Infinite Styles Services Co. Ltd. (known as SHEIN Ireland), over transfers of personal data of EU and EEA users to China.

The inquiry will examine whether SHEIN Ireland has complied with its obligations under the General Data Protection Regulation in relation to those transfers. The DPC said it will assess compliance with GDPR principles on personal data processing, transparency obligations under Article 13, and Chapter V requirements governing transfers of personal data to third countries.

The regulator said its decision to begin the inquiry was issued to SHEIN Ireland at the end of April. The case comes as data transfers to China face growing regulatory scrutiny in Europe, including through recent DPC enforcement action and complaints filed with other European supervisory authorities.

Deputy Commissioner Graham Doyle said: ‘When an individual’s personal data is transferred to a country outside the EU, the GDPR requires that this personal data is afforded essentially the same protections as it would within the EU.’

He added: ‘Recent regulatory action by the DPC, together with complaints to other European supervisory authorities, has brought data transfers to China, in particular, into focus. The inquiry is an important strategic priority for the DPC and we intend to cooperate closely with our peer European Supervisory Authorities as part of the investigation.’

Under the GDPR, transfers of personal data outside the EU and EEA must meet specific safeguards so that the level of protection provided under EU law is not undermined. Where no European Commission adequacy decision exists for a third country, organisations must rely on alternative mechanisms, such as standard contractual clauses, and demonstrate that equivalent protections are in place.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and misuse by non-state actors using powerful open-source tools, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China pushes AI self-reliance while expanding global cooperation

Chinese Vice Premier Ding Xuexiang has reiterated China’s emphasis on AI self-reliance while also calling for deeper international cooperation, underscoring a dual approach to technology policy amid rising global competition. Speaking at the opening of the 9th Digital China Summit, he presented AI as an important part of China’s wider modernisation agenda.

Ding said China should strengthen self-reliance and independent innovation in AI, arguing that the sector must be able to withstand external pressure and attempts at suppression. He also emphasised application-driven development, calling for faster integration of AI into the real economy to support productivity and industrial transformation.

Alongside those domestic priorities, he called for a more collaborative innovation ecosystem, including closer coordination across the AI industry chain. Internationally, he advocated open and mutually beneficial cooperation, with particular emphasis on computing power, data, and talent.

Regulation also featured prominently in the speech. Ding said AI development must remain safe and controllable, with stronger oversight to ensure the technology serves human interests and remains under human control. Taken together, the message reflects China’s broader effort to balance technological sovereignty with continued international engagement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Chinese rules restrict digital promotion of financial products

China has introduced new online marketing rules for financial products, further tightening its long-standing restrictions on cryptocurrency-related activity. The new framework limits the promotion of financial products to licensed entities and treats digital currency trading and issuance as illegal financial activity.

Issued by the People’s Bank of China and seven other regulators, the Administrative Measures for Online Marketing of Financial Products will take effect on 30 September 2026. The rules extend responsibility to platforms, intermediaries, and content creators who promote or facilitate financial products online.

Any assistance in promoting or facilitating prohibited financial activity may now be treated as participation in illegal finance, expanding enforcement beyond direct trading bans. In practice, that broadens the focus from financial products themselves to the wider digital promotion layer, including online displays, traffic generation, and other forms of internet-based marketing support.

Authorities say the measures are intended to protect consumers by limiting misleading or aggressive online promotion, including livestream marketing and viral investment content. In that sense, the rules are not only about crypto, but about tighter control over how financial products are marketed in digital environments.

The policy also reinforces China’s existing position, dating back to 2021, when regulators declared all cryptocurrency transactions illegal, while pushing enforcement deeper into the digital advertising and distribution layers of financial markets.

Why does it matter?

Stronger oversight of online financial promotion shows that crypto-related advertising is increasingly being treated as a regulatory risk category, not just a marketing issue. The Chinese move also points to a broader trend in which regulators are extending scrutiny beyond financial products themselves to the digital channels, influencers, and platforms that help distribute them.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China sets standards for AI ethics review and algorithm accountability

China's introduction of new AI ethics guidelines signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.

Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.

A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.

By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.

Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.

The policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.

Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.

Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!