China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FBI reports billions lost to crypto and AI scams

The Federal Bureau of Investigation reports that cyber-enabled crimes cost Americans nearly $21 billion in 2025, according to its latest Internet Crime Report. The Internet Crime Complaint Center recorded more than 1 million complaints, marking a rise from the previous year.

Investment fraud, phishing, extortion, and tech support scams remained the most common threats, with older adults reporting disproportionately high losses. Individuals over 60 accounted for approximately $7.7 billion in losses, reflecting a sharp year-on-year increase.

Cryptocurrency-related fraud was the most financially damaging category, with losses exceeding $11 billion across more than 180,000 complaints. The report also highlighted emerging risks linked to AI, including deepfake identities, voice cloning, and fabricated media used to manipulate victims.

The FBI has expanded initiatives such as Operation Level Up to identify ongoing scams and reduce losses, while emphasising early reporting and awareness measures. Officials say scammers increasingly use psychological pressure and realistic digital impersonation to deceive victims.

Rising losses highlight how rapidly evolving digital fraud techniques are outpacing public awareness, with crypto and AI tools making scams more scalable and convincing.

Strengthening detection, reporting, and education will be critical to reducing financial harm and improving resilience against increasingly sophisticated online crime networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Geneva Cyber Week to bring diplomacy, cyber policy, and AI security debates together

The United Nations Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs will co-host Geneva Cyber Week from 4 to 8 May 2026. The event will bring policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.

Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change, with organisers framing the gathering as a space for more practical cooperation across diplomatic, technical, operational, and policy communities.

“Cybersecurity is no longer a niche technical issue; it is a strategic policy challenge with implications for international peace, economic stability and public trust. At a moment of growing fragmentation and accelerating technological change, Geneva Cyber Week brings together the communities that need to be in the room — diplomatic, technical, operational and policy — to move from shared concern to practical cooperation,” said Dr Giacomo Persi Paoli, Head of Security and Technology Programme at UNIDIR.

The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance. Scheduled sessions include UNIDIR’s Cyber Stability Conference, Peak Incident Response organised by the Swiss CSIRT Forum, Digital International Geneva, the World Economic Forum Annual Meeting on Cybersecurity, and a Council of Europe session titled ‘Artificial Intelligence, Cybercrime and Electronic Evidence: Risks, Opportunities, and Global Cooperation’.

The week will also include partner-led panels, workshops, simulations, exhibitions, and networking events to connect specialist communities that do not always work in the same room. That broader structure reflects an effort to treat cyber issues not only as a technical or security matter but also as a governance, trust-building, and international-coordination challenge.

“At a time when digital threats know no borders, fostering inclusive discussions is essential to building trust, advancing common norms, and promoting a secure and open cyberspace for all. International Geneva provides an unparalleled multilateral environment to address these cybersecurity challenges collectively. Geneva Cyber Week’s diverse programme embodies this collaborative spirit,” said Marina Wyss Ross, Deputy Head of International Security Division and Chief of Section for Arms Control, Disarmament and Cybersecurity at the Swiss FDFA.

Across the city, Geneva will also mark the week visually, including flags on the Mont Blanc Bridge and special illumination of the Jet d’Eau on Monday evening. But beyond the symbolism, the event’s significance lies in how it seeks to bring cyber diplomacy, incident response, governance debates, and emerging technology risks into the same international conversation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

French data protection authority sets out 2026 GDPR and AI guidance agenda

The French data protection authority, the Commission nationale de l’informatique et des libertés (CNIL), has outlined the main guidance, consultations, and resources it plans to publish in 2026 to support compliance with the General Data Protection Regulation and certain provisions of the AI Act.

According to the CNIL, the programme is intended to help public and private sector actors prepare for upcoming consultations and anticipate regulatory developments. It says the programme is indicative and may evolve in response to current events.

The CNIL says it will begin work on ‘multi-property’ consent, covering the conditions for obtaining a single consent across several sites or media, particularly where they belong to the same group. It also says it will finalise work on the use of AI in the workplace and in health, including bias risks and safeguards to protect the rights of employees and patients.

The authority also plans to work on transcription and automated analysis tools used in call centres and videoconferencing software, operational content for data protection officers, and clarification of how the GDPR applies to non-anonymous AI models.

In the health sector, it says it will update research reference methodologies, publish its position on how people should be informed when data are reused for research, and issue a consolidated document on the electronic patient record.

On security, the CNIL says it will continue publishing recommendations to improve personal data security, publish the final updated version of its recommendation on remote electronic voting systems, and open public consultations on recommendations covering the security of personal data exchanges, remote identity verification, and endpoint detection and response services. It also says it will publish a recommendation on web filtering gateways.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government reviews regulatory options for enterprise connected devices

Following its call for views on enterprise connected device security, the UK government has said it will update and streamline its proposed code of practice for such devices and assess further policy options, including regulation, certification, and other assurance mechanisms.

The response, published by the Department for Science, Innovation and Technology, says enterprise connected devices are often critical to business operations but can lack adequate security measures. It also states that the UK government’s call for views showed strong support for intervention to improve the cybersecurity of such devices, with 95% of respondents agreeing that the government should do more.

According to the response, 76% of respondents agreed or strongly agreed that the risks posed by enterprise connected devices are sufficiently distinct from those of other connected devices to warrant an independent code of practice.

The UK government also reports that 78% agreed or strongly agreed with creating new legislation imposing obligations on manufacturers, while 71% agreed or strongly agreed with creating a new global standard based on the code of practice.

The UK government says it will ask manufacturers to use the National Cyber Security Centre’s existing device security principles while this work continues. It also says it will finalise the security principles, make them modular within the broader set of secure-by-design codes of practice, and explore the feasibility of a certification scheme for manufacturers.

The response also states that the UK government will assess options for regulatory measures, following feedback that it needs to go beyond voluntary adoption and include some form of assurance or enforcement mechanism. It adds that the government will review whether the scope of this work should be expanded beyond enterprise connected devices as part of its broader analysis of technology security.

The document says the UK government will seek to align this work, where possible and necessary, with international developments, including European Union standards processes under the Cyber Resilience Act. It also notes repeated calls from respondents for implementation guides and clearer alignment with existing legislation and standards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Experts warn of potential quantum disruption to blockchain security

A survey by the Global Risk Institute has highlighted growing concern that quantum computing could undermine the cryptographic foundations of cryptocurrencies within the next decade.

Experts estimate a 28% to 49% probability that quantum machines capable of breaking current encryption standards could emerge within 10 years, with the probability rising further over a 15-year horizon.

Cryptocurrencies such as Bitcoin rely on public-key cryptography to secure transactions and verify ownership. Quantum algorithms such as Shor’s could derive private keys from publicly visible key material, exposing wallets and weakening blockchain security.
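As an illustration (not drawn from the report), the vulnerability comes down to a mathematical relationship: in discrete-log-based schemes, the public key is computed from the private key, and recovering the private key means solving the discrete logarithm problem. The toy sketch below uses deliberately tiny, made-up parameters so the relationship can be brute-forced classically; at real key sizes this search is infeasible for classical computers, which is exactly the barrier Shor’s algorithm on a large quantum computer would remove.

```python
# Toy sketch: the public key y = g^x mod p hides the private key x.
# With tiny parameters, x can be recovered by exhaustive search; a
# sufficiently large quantum computer running Shor's algorithm would
# solve the same problem efficiently even at real-world key sizes.
# All values below are illustrative, not a real cryptosystem.

def public_key(g: int, p: int, x: int) -> int:
    """Derive the public key from private key x."""
    return pow(g, x, p)

def brute_force_dlog(g: int, p: int, y: int) -> int:
    """Recover x such that g^x mod p == y by trying every exponent."""
    for x in range(p):
        if pow(g, x, p) == y:
            return x
    raise ValueError("no solution found")

p, g = 467, 2            # small prime modulus and base (toy values)
private = 153            # the secret
public = public_key(g, p, private)

recovered = brute_force_dlog(g, p, public)
print(recovered)         # 153 -- feasible only because p is tiny
```

The loop scales linearly with the key space, so doubling the key length squares the classical effort; Shor’s algorithm breaks that scaling, which is why static addresses that have revealed their public keys are considered the most exposed.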

The risk is seen as particularly relevant for long-term stored assets and static addresses. Industry researchers and technology firms are already exploring post-quantum cryptography to mitigate potential disruption.

Efforts led by standards bodies such as the National Institute of Standards and Technology focus on developing encryption methods resistant to both classical and quantum attacks, although full migration across decentralised systems remains complex.

The findings place quantum readiness alongside broader digital security priorities, as financial systems, communications networks, and public infrastructure share similar cryptographic dependencies.

The evolving timeline is prompting early-stage preparation across the cryptocurrency ecosystem, where system upgrades must balance security, decentralisation, and continuity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

European Business Council in Japan holds first cybersecurity conference in Tokyo

Tokyo hosted a cybersecurity conference organised by the European Business Council in Japan (EBC) Digital Committee on 7 April. The event took place at the EU Delegation in Tokyo.

The conference was the EBC Digital Committee’s first event. It brought together experts from the public and private sectors to exchange views on cybersecurity challenges and policy developments.

Speakers included Luis Miguel Vega Fidalgo from the European Commission, Satoshi D. from Japan’s Ministry of Economy, Trade and Industry, and Amelia Alder from Knorr-Bremse. A question-and-answer session followed their presentations.

Participants continued discussions during a networking reception after the session. The Digital Committee co-chairs, Wataru Suzuki and Felix von Helden, thanked the speakers and organisers, including Peter Fatelnig from the EU Delegation to Japan.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore to update cybersecurity standards and vendor obligations amid AI-enabled threats

Singapore’s Ministry of Digital Development and Information said the government will review and update cybersecurity standards and obligations as part of its response to evolving cyber threats, including AI-enabled attacks.

In a written parliamentary reply, the ministry said Singapore’s position as a major financial hub and digital economy makes it an attractive target for malicious actors. It added that the Cyber Security Agency of Singapore regularly updates the public on cybersecurity threats through SingCERT advisories and the Singapore Cyber Landscape publication.

The ministry said critical systems are already subject to higher cybersecurity standards and obligations under the Cybersecurity Act. It also said the government has invested in capability development, citing initiatives such as the Cybersecurity Development Programme and national exercises including Exercise Cyber Star.

As the threat evolves, so must the response, the ministry said. It stated that the Cyber Security Agency of Singapore will review and update cybersecurity standards and obligations to strengthen security controls, and that the government will help owners of critical systems better detect threats, including those from advanced threat actors and AI-enabled threats, through proprietary threat detection systems.

For government systems, the ministry said GovTech has internal guidelines to safeguard systems that hold sensitive data and provide important government services. It added that GovTech will introduce more stringent cybersecurity and data protection obligations for government vendors, including requiring vendors that manage critical systems and sensitive government data to meet Cyber Trust Mark requirements.

The reply also pointed to measures for businesses and consumers. It said the Cyber Security Agency of Singapore has rolled out initiatives, including its CISO-as-a-Service programme for small and medium enterprises, while mandatory cybersecurity requirements for gateway devices such as home routers have already been introduced.

The ministry added that standards for home routers will be raised further and that Singapore will explore introducing similar standards for IP cameras.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK data reveals alarming growth in online child abuse cases

A sharp increase in online child abuse cases has been reported by the Internet Watch Foundation (IWF) and NSPCC’s Childline, based on data from the Report Remove service.

Nearly 1,900 UK children reported sexual imagery concerns in 2025, a 66% rise, with more than 1,100 confirmed cases involving abuse material. Weekly reports show a consistent pattern of coercion, threats, and financial pressure targeting minors.

The scale of the increase reflects structural changes in how abuse operates online. Offenders use fake identities and contact many victims simultaneously, turning exploitation into a repeatable activity.

Financial incentives reinforce the pattern. Teenage boys aged 14 to 17 account for the majority of cases, indicating targeted and adaptive behaviour by perpetrators.

Weaknesses in digital environments further sustain such growth. Platforms prioritise speed and interaction instead of prevention, while anonymity and cross-border activity reduce enforcement effectiveness.

Psychological pressure remains central, with threats designed to isolate victims and limit reporting, meaning recorded cases likely underestimate the real scale.

The IWF’s findings highlight a policy gap between technological expansion and child safety protections in the UK.

While services like Report Remove improve response and mitigation, they do not address underlying risks. Without stronger platform accountability and preventive regulation, online child abuse is likely to continue expanding.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU digital identity strengthens after 20 years of .eu expansion

Two decades after the launch of the .eu domain, the EU has marked its role in establishing a unified digital identity across member states.

On 7 April 2006, the .eu top-level domain (TLD) was launched, offering businesses, citizens, and organisations a pan-EU online identity.

Over time, .eu has developed into one of the largest country-code domains globally, with millions of registrations and consistent growth.

Its technical stability and security record, including uninterrupted service since launch, have reinforced its reputation as a reliable digital infrastructure. Investments in fraud detection and data integrity have further strengthened trust in its ecosystem.

The domain has also evolved to reflect the EU’s linguistic diversity, with the introduction of internationalised domain names and additional scripts such as Cyrillic and Greek. These developments have expanded accessibility and reinforced inclusivity within the European digital space.

Looking ahead, .eu is positioned as a key instrument for advancing digital sovereignty and supporting the Single Market. Its role in global internet governance discussions is expected to grow, particularly as the EU institutions seek to shape a more open, secure, and rights-based digital environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!