Serpro joins Brazil-China AI cooperation protocol

Brazil’s Ministry of Science, Technology, and Innovation, Serpro, and the Chinese company iFlytek have signed a cooperation protocol on AI aimed at building national AI capabilities to support the functioning of the state.

According to Serpro, the protocol forms part of broader Brazil-China cooperation in science and technology. Acting Minister Luis Fernandes said the initiative aims to foster joint technology development and knowledge transfer with Brazil, with implications for digital sovereignty.

The protocol sets guidelines for cooperation in research, development, and capacity-building in AI, with a focus on large language models adapted to Brazilian Portuguese, translation and accessibility systems, cybersecurity applications, and AI infrastructure in Brazil. Serpro said the initiative also covers data centres, secure cloud, and interoperable data platforms.

Serpro will lead the technical execution of the initiative. The company said its role is to connect research, public policy, and delivery of public services, and added that it already has more than 300 AI-based solutions in its portfolio. The protocol also provides for training measures, including researcher exchanges, courses, technical visits, and scholarships.

The Serpro announcement states that initiatives under the protocol will depend on specific instruments to be concluded between the participants. It also presents the partnership as part of a broader effort to strengthen Brazil’s AI technical capacity through international cooperation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FBI reports billions lost to crypto and AI scams

The Federal Bureau of Investigation reports that cyber-enabled crimes cost Americans nearly $21 billion in 2025, according to its latest Internet Crime Report. The Internet Crime Complaint Center recorded more than 1 million complaints, marking a rise from the previous year.

Investment fraud, phishing, extortion, and tech support scams remained the most common threats, with older adults reporting disproportionately high losses. Individuals over 60 accounted for approximately $7.7 billion in losses, reflecting a sharp year-on-year increase.

Cryptocurrency-related fraud was the most financially damaging category, with losses exceeding $11 billion across more than 180,000 complaints. The report also highlighted emerging risks linked to AI, including deepfake identities, voice cloning, and fabricated media used to manipulate victims.

The FBI has expanded initiatives such as Operation Level Up to identify ongoing scams and reduce losses, while emphasising early reporting and awareness measures. Officials say scammers increasingly use psychological pressure and realistic digital impersonation to deceive victims.

Rising losses highlight how rapidly evolving digital fraud techniques are outpacing public awareness, with crypto and AI tools making scams more scalable and convincing.

Strengthening detection, reporting, and education will be critical to reducing financial harm and improving resilience against increasingly sophisticated online crime networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft outlines approach to scaling AI across organisational systems

Microsoft has described a shift from early AI adoption towards what it terms ‘frontier transformation’, in which AI is integrated into core organisational processes.

Such an approach reflects how AI is increasingly embedded within everyday workflows rather than used in isolated pilots.

According to Microsoft, scaling AI requires moving beyond experimentation and establishing structured operating models. It includes addressing practical challenges such as data integration, system reliability, and alignment with organisational objectives.

The framework also highlights the importance of governance and execution, with AI systems expected to operate under defined standards similar to other critical infrastructure. This involves coordination across platforms, internal processes, and external partners.

Why does it matter?

Frontier transformation illustrates a broader transition in how organisations approach AI deployment, focusing on long-term integration, operational consistency, and scalable implementation across different sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU universities could anchor AI strategy

Universities could play a central role in strengthening AI sovereignty across the European Union, speakers said at a Brussels forum organised by Udice. Higher education institutions are positioned as key contributors to research, talent development and technological capability.

Universities already underpin much of Europe’s AI ecosystem through fundamental research and industry collaboration. Their role extends to training skilled workers needed to sustain long-term innovation.

However, challenges remain, including fragmented funding, competition for global talent and limited scaling of research into commercial applications. These barriers may constrain the European Union’s ability to fully capitalise on its academic strengths.

Yet, stronger coordination, investment and policy support could enable universities to act as a backbone for AI development and strategic autonomy in the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Human work roles shift alongside AI

Reporting by The Korea Herald highlights that AI is increasingly reshaping workplace expectations, with employees adapting how they approach tasks and productivity. The shift reflects broader changes in how work is organised and delivered.

The article indicates that workers are using AI tools to improve efficiency while also reassessing workloads and job design. This is leading to a growing focus on balancing automation with human input.

At the same time, organisations are being pushed to rethink management structures, accountability and skills development. The integration of AI is influencing both individual roles and wider organisational strategies.

The Korea Herald suggests that long-term success will depend on how effectively businesses align AI adoption with workforce needs and sustainable work practices globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Armenia plans AI road scanning system

Armenpress reports that the Government of the Republic of Armenia plans to acquire an AI-powered road-scanning device to improve infrastructure maintenance. The system is intended to assess road conditions and guide repair decisions.

According to the Ministry of Territorial Administration and Infrastructure of the Republic of Armenia, the device will scan roads and use AI to determine the type and depth of repairs required. This includes identifying whether partial repairs or full reconstruction is needed.

Minister of Territorial Administration and Infrastructure of the Republic of Armenia, Davit Khudatyan, stated that the AI technology will provide a detailed analysis by passing over road surfaces. The system is expected to improve planning and maintenance efficiency.

The project is estimated to cost between 500 and 600 million drams and forms part of broader efforts to modernise infrastructure management in Armenia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Geneva Cyber Week to bring diplomacy, cyber policy, and AI security debates together

The United Nations Institute for Disarmament Research and the Swiss Federal Department of Foreign Affairs will co-host Geneva Cyber Week from 4 to 8 May 2026, bringing policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.

Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change, with organisers framing the gathering as a space for more practical cooperation across diplomatic, technical, operational, and policy communities.

“Cybersecurity is no longer a niche technical issue; it is a strategic policy challenge with implications for international peace, economic stability and public trust. At a moment of growing fragmentation and accelerating technological change, Geneva Cyber Week brings together the communities that need to be in the room — diplomatic, technical, operational and policy — to move from shared concern to practical cooperation,” said Dr Giacomo Persi Paoli, Head of Security and Technology Programme at UNIDIR.

The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance. Scheduled sessions include UNIDIR’s Cyber Stability Conference, Peak Incident Response organised by the Swiss CSIRT Forum, Digital International Geneva, the World Economic Forum Annual Meeting on Cybersecurity, and a Council of Europe session titled ‘Artificial Intelligence, Cybercrime and Electronic Evidence: Risks, Opportunities, and Global Cooperation’.

The week will also include partner-led panels, workshops, simulations, exhibitions, and networking events to connect specialist communities that do not always work in the same room. That broader structure reflects an effort to treat cyber issues not only as a technical or security matter but also as a governance, trust-building, and international-coordination challenge.

“At a time when digital threats know no borders, fostering inclusive discussions is essential to building trust, advancing common norms, and promoting a secure and open cyberspace for all. International Geneva provides an unparalleled multilateral environment to address these cybersecurity challenges collectively. Geneva Cyber Week’s diverse programme embodies this collaborative spirit,” said Marina Wyss Ross, Deputy Head of International Security Division and Chief of Section for Arms Control, Disarmament and Cybersecurity at the Swiss FDFA.

Across the city, Geneva will also mark the week visually, including flags on the Mont Blanc Bridge and special illumination of the Jet d’Eau on Monday evening. But beyond the symbolism, the event’s significance lies in how it seeks to bring cyber diplomacy, incident response, governance debates, and emerging technology risks into the same international conversation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Japan approves APPI amendment bill on personal data, AI training, and fines

Japan’s Cabinet has approved a bill to amend the Act on the Protection of Personal Information, or APPI, for submission to parliament.

The proposed amendments combine stricter enforcement with regulatory easing. They would introduce an administrative fine system, strengthen protections for children’s data and certain biometric data, and allow broader use of personal data for AI training. The bill would also ease some data-breach notification requirements.

Digital Minister of Japan, Hisashi Matsumoto, said enabling the use of sensitive personal data without consent is important for developing domestic AI models. He said the bill seeks to balance that objective with stronger protections for children’s data and facial-recognition data, as well as the introduction of administrative fines.

The fine mechanism would be introduced in a limited form. Provisions to impose fines for large-scale data breaches resulting from inadequate security measures were removed. Instead, the bill would target improper acquisition or use of personal data, unlawful provision of data to third parties, and misuse of sensitive data beyond stated statistical purposes, including transfers to third parties.

According to the proposal, fines would apply in large-scale cases involving more than 1,000 affected individuals, with amounts linked to profits derived from unlawful data handling. During drafting, the Personal Information Protection Commission also dropped plans to introduce consumer class actions for legal redress, while saying it would continue studying the issue.

The Personal Information Protection Commission is seeking passage during the current parliamentary session. The proposal follows a lengthy amendment process, during which earlier plans faced opposition from business and technology groups.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches child safety framework to address AI risks

OpenAI has introduced a new framework to address the risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.

The initiative reflects growing concern over how emerging technologies can both enable and prevent harm.

The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.

These measures aim to enhance early detection and prevent misuse at scale.

Developed in collaboration with organisations such as the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the framework promotes shared standards across industry and public authorities.

It emphasises coordinated responses and stronger accountability mechanisms.

The approach combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.

Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!