Global data governance efforts expand as UNESCO supports policy capacity for AI systems

UNESCO and the United Nations Development Programme (UNDP) have launched a joint initiative to support governments in developing rights-based data governance frameworks for AI. The programme reflects growing global efforts to align digital transformation with public interest objectives.

The ‘Data governance for inclusive digital and AI futures’ initiative provides policymakers with practical tools to design transparent and accountable data systems, with a focus on safeguarding rights and enabling inclusive AI deployment.

It responds to increasing demand for structured governance approaches as countries expand the use of data-driven technologies.

Participants from multiple regions applied governance frameworks to areas including healthcare, digital identity, and social protection. These projects demonstrate how data governance can improve public service delivery while strengthening accountability and citizen trust.

Hosted at ITU Academy and supported by the EU Global Gateway initiative, the programme also promotes cross-country collaboration and knowledge exchange, reinforcing international coordination in data governance.

The initiative highlights the importance of building institutional capacity to ensure that AI systems operate within clear legal and ethical frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes classrooms and universities across Vietnam education system

AI is becoming a central part of education in Vietnam, changing how schools are managed, how students learn, and how research is carried out. Officials say the shift is part of the country’s wider digital transformation in education.

Nguyễn Sơn Hải of Vietnam’s Ministry of Education and Training said earlier reforms focused on digitising activities, while AI is now reshaping teaching and administration more broadly. The ministry is also preparing legal and policy frameworks to support safe and controlled AI use in education.

Authorities have identified priorities, including AI skills for learners, shared digital platforms, and stronger infrastructure. An AI education programme for junior secondary pupils is being piloted and is expected to begin officially in the 2026–2027 academic year.

Universities are also adapting their strategies as AI changes higher education. Hanoi University of Science and Technology said it is redesigning training, assessment, and digital systems to reflect these changes.

At the same time, institutions, including Thai Nguyen University, are linking research more closely with business and local development needs. Officials say wider access to internet services and devices remains essential to ensure equal access to digital education.

EU AI Continent Action Plan shows progress in infrastructure, data and governance

The European Commission has reported significant progress under its AI Continent Action Plan, marking one year of implementation aimed at strengthening Europe’s position in AI. The strategy focuses on infrastructure, data, talent, adoption and trustworthy AI.

Investment in computational capacity has expanded, with AI factories deployed across European supercomputers and further large-scale facilities in development. These initiatives aim to increase access to advanced computing resources for researchers and emerging companies.

On data governance, the Commission introduced the Data Union Strategy and complementary regulatory measures to improve data sharing and provide legal certainty for businesses.

Efforts to support talent development and mobility, alongside new training initiatives in the EU, form another central component of the plan.

The programme also promotes AI adoption across public and industrial sectors through targeted funding and coordinated initiatives. The overall approach reflects a policy framework designed to balance innovation with regulatory oversight and alignment with European values.

Serpro joins Brazil-China AI cooperation protocol

Brazil’s Ministry of Science, Technology, and Innovation, Serpro, and the Chinese company iFlytek have signed a cooperation protocol on AI focused on building national capabilities for the functioning of the state.

According to Serpro, the protocol forms part of broader Brazil–China cooperation in science and technology. Acting Minister Luis Fernandes said the initiative aims to foster joint technology development and knowledge transfer with Brazil, with implications for digital sovereignty.

The protocol sets guidelines for cooperation in research, development, and capacity-building in AI, with a focus on large language models adapted to Brazilian Portuguese, translation and accessibility systems, cybersecurity applications, and AI infrastructure in Brazil. Serpro said the initiative also covers data centres, secure cloud, and interoperable data platforms.

Serpro will lead the technical execution of the initiative. The company said its role is to connect research, public policy, and delivery of public services, and added that it already has more than 300 AI-based solutions in its portfolio. The protocol also provides for training measures, including researcher exchanges, courses, technical visits, and scholarships.

The Serpro announcement states that initiatives under the protocol will depend on specific instruments to be concluded between the participants. It also presents the partnership as part of a broader effort to strengthen Brazil’s AI technical capacity through international cooperation.

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

FBI reports billions lost to crypto and AI scams

The Federal Bureau of Investigation reports that cyber-enabled crimes cost Americans nearly $21 billion in 2025, according to its latest Internet Crime Report. The Internet Crime Complaint Center recorded more than 1 million complaints, marking a rise from the previous year.

Investment fraud, phishing, extortion, and tech support scams remained the most common threats, with older adults reporting disproportionately high losses. Individuals over 60 accounted for approximately $7.7 billion in losses, reflecting a sharp year-on-year increase.

Cryptocurrency-related fraud was the most financially damaging category, with losses exceeding $11 billion across more than 180,000 complaints. The report also highlighted emerging risks linked to AI, including deepfake identities, voice cloning, and fabricated media used to manipulate victims.

The FBI has expanded initiatives such as Operation Level Up to identify ongoing scams and reduce losses, while emphasising early reporting and awareness measures. Officials say scammers increasingly use psychological pressure and realistic digital impersonation to deceive victims.

Rising losses highlight how rapidly evolving digital fraud techniques are outpacing public awareness, with crypto and AI tools making scams more scalable and convincing.

Strengthening detection, reporting, and education will be critical to reducing financial harm and improving resilience against increasingly sophisticated online crime networks.

Microsoft outlines approach to scaling AI across organisational systems

Microsoft has described a shift from early AI adoption towards what it terms ‘frontier transformation’, in which AI is integrated into core organisational processes.

Such an approach reflects how AI is increasingly embedded within everyday workflows rather than used in isolated pilots.

According to Microsoft, scaling AI requires moving beyond experimentation and establishing structured operating models. It includes addressing practical challenges such as data integration, system reliability, and alignment with organisational objectives.

The framework also highlights the importance of governance and execution, with AI systems expected to operate under defined standards similar to other critical infrastructure. That involves coordination across platforms, internal processes, and external partners.

Why does it matter?

Frontier transformation illustrates a broader transition in how organisations approach AI deployment, focusing on long-term integration, operational consistency, and scalable implementation across different sectors.

EU universities could anchor AI strategy

Universities could play a central role in strengthening AI sovereignty across the European Union, participants argued at a Brussels forum organised by Udice. Higher education institutions are positioned as key contributors to research, talent development and technological capability.

Universities already underpin much of Europe’s AI ecosystem through fundamental research and industry collaboration. Their role extends to training skilled workers needed to sustain long-term innovation.

However, challenges remain, including fragmented funding, competition for global talent and limited scaling of research into commercial applications. These barriers may constrain the European Union’s ability to fully capitalise on its academic strengths.

Yet, stronger coordination, investment and policy support could enable universities to act as a backbone for AI development and strategic autonomy in the European Union.

Human work roles shift alongside AI

Reporting by The Korea Herald highlights that AI is increasingly reshaping workplace expectations, with employees adapting how they approach tasks and productivity. The shift reflects broader changes in how work is organised and delivered.

The article indicates that workers are using AI tools to improve efficiency while also reassessing workloads and job design. This is leading to a growing focus on balancing automation with human input.

At the same time, organisations are being pushed to rethink management structures, accountability and skills development. The integration of AI is influencing both individual roles and wider organisational strategies.

The Korea Herald suggests that long-term success will depend on how effectively businesses align AI adoption with workforce needs and sustainable work practices globally.

Armenia plans AI road scanning system

Armenpress reports that the Government of the Republic of Armenia plans to acquire an AI-powered road-scanning device to improve infrastructure maintenance. The system is intended to assess road conditions and guide repair decisions.

According to the Ministry of Territorial Administration and Infrastructure of the Republic of Armenia, the device will scan roads and use AI to determine the type and depth of repairs required. This includes identifying whether partial repairs or full reconstruction are needed.

Minister of Territorial Administration and Infrastructure of the Republic of Armenia, Davit Khudatyan, stated that the AI technology will provide a detailed analysis by passing over road surfaces. The system is expected to improve planning and maintenance efficiency.

The project is estimated to cost between 500 and 600 million drams and forms part of broader efforts to modernise infrastructure management in Armenia.
