South Korea warns on AI fake news risks

Reporting by The Korea Herald states that South Korean Prime Minister Kim Min-seok has warned of the risks of AI-generated fake news ahead of an upcoming election. Authorities are urging greater vigilance as digital content becomes harder to verify.

According to the report, AI technologies are increasingly capable of producing realistic false information, including manipulated images and videos. This raises concerns about their potential impact on public opinion and trust.

The government has called for precautionary measures to limit the spread of misinformation and protect the integrity of democratic processes. This includes encouraging awareness and responsible use of AI tools.

The warning reflects broader concerns about the influence of AI-driven disinformation during election cycles in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Corporate AI governance gaps highlighted in UNESCO report

UNESCO and the Thomson Reuters Foundation have published ‘Responsible AI in practice: 2025 global insights from the AI Company Data Initiative’, presenting findings from what the report describes as the largest global dataset of corporate responsible AI disclosures.

The report analyses 2,972 companies across 11 sectors and multiple regions using publicly available disclosures and company survey responses collected through the AI Company Data Initiative.

The report says AI is being embedded across companies’ products, services, and internal operations faster than governance and disclosure are developing. It states that 43.7% of companies publicly communicate having an AI strategy or guidelines, but only 13% publicly claim adherence to a formal AI governance framework.

Among those that do cite a framework, 53% refer to the EU AI Act, while the report says 43.6% cite ‘other’ frameworks, which it presents as weakening comparability across the wider AI governance ecosystem.

The publication also says many companies describe AI governance in conceptual terms while providing less evidence on operational controls, accountability pathways, monitoring, and remediation. It states that 40% report board- or committee-level oversight on AI, and 12.4% report having a policy to ensure a human oversees AI systems.

At the same time, the publication says 72% of companies do not report conducting any AI-related impact assessment. Of those that do, 11% report environmental impact assessments and 7% report human rights impact assessments. These findings are presented visually in the key statistics on page 10 of the report.

Regarding labour impacts, the report says companies do not provide adequate protection for workers as AI reshapes jobs. It states that while 31% of companies claim to have AI training programmes, only 12% offer structured training with comprehensive coverage. It also argues that effective worker protection requires stronger evidence of reskilling, retraining, redeployment, transition support, and access to remedy where AI affects workers’ rights.

Why does it matter?

The report further states that ethical issues, including human rights and environmental impacts, are being sidelined in AI governance and risk management, while transparency regarding training data, third-party systems, and user rights remains uneven. It presents the AI Company Data Initiative as a tool to help companies assess their governance practices against UNESCO’s Recommendation on the Ethics of AI and to give investors more comparable information on how AI is governed in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK government applies AI to improve efficiency in transport policy consultations

The UK Department for Transport (DfT) has introduced generative AI tools to speed up the analysis of public consultations, significantly reducing time and operational costs. Managing 55 consultations yearly, the department often receives over 100,000 responses per consultation, previously requiring months of manual review.

A new Consultation Analysis Tool, built with Google Cloud and the Alan Turing Institute, processes large datasets within hours using advanced AI. The system identifies key themes with up to 90% accuracy, enabling faster policy responses while delivering estimated annual savings of £4 million.

Beyond consultation analysis, the department has expanded its use of AI across infrastructure planning and public communication. Cloud-based tools support sustainable transport decisions and help draft public inquiry responses by retrieving policy data and generating structured replies.

Human oversight remains central to the framework. AI-generated outputs are reviewed for accuracy, fairness, and bias, ensuring that final decisions stay with policy experts while maintaining transparency and public trust in government processes.

At a wider level, this reflects how AI can strengthen evidence-based policymaking, improve administrative efficiency, and free up expert capacity for higher-value decision-making, provided that transparency, accountability, and human oversight remain embedded in the process.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Global data governance efforts expand as UNESCO supports policy capacity for AI systems

UNESCO and the United Nations Development Programme (UNDP) have launched a joint initiative to support governments in developing rights-based data governance frameworks for AI. The programme reflects growing global efforts to align digital transformation with public interest objectives.

The ‘Data governance for inclusive digital and AI futures’ initiative provides policymakers with practical tools to design transparent and accountable data systems, with a focus on safeguarding rights and enabling inclusive AI deployment.

It responds to increasing demand for structured governance approaches as countries expand the use of data-driven technologies.

Participants from multiple regions applied governance frameworks to areas including healthcare, digital identity, and social protection. These projects demonstrate how data governance can improve public service delivery while strengthening accountability and citizen trust.

Hosted at ITU Academy and supported by the EU Global Gateway initiative, the programme also promotes cross-country collaboration and knowledge exchange, reinforcing international coordination in data governance.

The initiative highlights the importance of building institutional capacity to ensure that AI systems operate within clear legal and ethical frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes classrooms and universities across Vietnam education system

AI is becoming a central part of education in Vietnam, changing how schools are managed, how students learn, and how research is carried out. Officials say the shift is part of the country’s wider digital transformation in education.

Nguyễn Sơn Hải of Vietnam’s Ministry of Education and Training said earlier reforms focused on digitising activities, while AI is now reshaping teaching and administration more broadly. The ministry is also preparing legal and policy frameworks to support safe and controlled AI use in education.

Authorities have identified priorities, including AI skills for learners, shared digital platforms, and stronger infrastructure. An AI education programme for junior secondary pupils is being piloted and is expected to begin officially in the 2026–2027 academic year.

Universities are also adapting their strategies as AI changes higher education. Hanoi University of Science and Technology said it is redesigning training, assessment, and digital systems to reflect these changes.

At the same time, institutions, including Thai Nguyen University, are linking research more closely with business and local development needs. Officials say wider access to internet services and devices remains essential to ensure equal access to digital education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Continent Action Plan shows progress in infrastructure, data and governance

The European Commission has reported significant progress under its AI Continent Action Plan, marking one year of implementation aimed at strengthening Europe’s position in AI. The strategy focuses on infrastructure, data, talent, adoption and trustworthy AI.

Investment in computational capacity has expanded, with AI factories deployed across European supercomputers and further large-scale facilities in development. These initiatives aim to increase access to advanced computing resources for researchers and emerging companies.

On data governance, the Commission introduced the Data Union Strategy and complementary regulatory measures to improve data sharing and provide legal certainty for businesses.

Efforts to support talent development and mobility, alongside new training initiatives in the EU, form another central component of the plan.

The programme also promotes AI adoption across public and industrial sectors through targeted funding and coordinated initiatives. The overall approach reflects a policy framework designed to balance innovation with regulatory oversight and alignment with European values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Serpro joins Brazil-China AI cooperation protocol

Brazil’s Ministry of Science, Technology, and Innovation, Serpro, and the Chinese company iFlytek have signed a cooperation protocol on AI focused on building national capabilities to support the functioning of the state.

According to Serpro, the protocol forms part of broader Brazil-China cooperation in science and technology. Acting Minister Luis Fernandes said the initiative aims to foster joint technology development and knowledge transfer with Brazil, with implications for digital sovereignty.

The protocol sets guidelines for cooperation in research, development, and capacity-building in AI, with a focus on large language models adapted to Brazilian Portuguese, translation and accessibility systems, cybersecurity applications, and AI infrastructure in Brazil. Serpro said the initiative also covers data centres, secure cloud, and interoperable data platforms.

Serpro will lead the technical execution of the initiative. The company said its role is to connect research, public policy, and delivery of public services, and added that it already has more than 300 AI-based solutions in its portfolio. The protocol also provides for training measures, including researcher exchanges, courses, technical visits, and scholarships.

The Serpro announcement states that initiatives under the protocol will depend on specific instruments to be concluded between the participants. It also presents the partnership as part of a broader effort to strengthen Brazil’s AI technical capacity through international cooperation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FBI reports billions lost to crypto and AI scams

The Federal Bureau of Investigation reports that cyber-enabled crimes cost Americans nearly $21 billion in 2025, according to its latest Internet Crime Report. The Internet Crime Complaint Center recorded more than 1 million complaints, marking a rise from the previous year.

Investment fraud, phishing, extortion, and tech support scams remained the most common threats, with older adults reporting disproportionately high losses. Individuals over 60 accounted for approximately $7.7 billion in losses, reflecting a sharp year-on-year increase.

Cryptocurrency-related fraud was the most financially damaging category, with losses exceeding $11 billion across more than 180,000 complaints. The report also highlighted emerging risks linked to AI, including deepfake identities, voice cloning, and fabricated media used to manipulate victims.

The FBI has expanded initiatives such as Operation Level Up to identify ongoing scams and reduce losses, while emphasising early reporting and awareness measures. Officials say scammers increasingly use psychological pressure and realistic digital impersonation to deceive victims.

Rising losses highlight how rapidly evolving digital fraud techniques are outpacing public awareness, with crypto and AI tools making scams more scalable and convincing.

Strengthening detection, reporting, and education will be critical to reducing financial harm and improving resilience against increasingly sophisticated online crime networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft outlines approach to scaling AI across organisational systems

Microsoft has described a shift from early AI adoption towards what it terms ‘frontier transformation’, in which AI is integrated into core organisational processes.

Such an approach reflects how AI is increasingly embedded within everyday workflows rather than used in isolated pilots.

According to Microsoft, scaling AI requires moving beyond experimentation and establishing structured operating models. It includes addressing practical challenges such as data integration, system reliability, and alignment with organisational objectives.

The framework also highlights the importance of governance and execution, with AI systems expected to operate under defined standards similar to other critical infrastructure. This involves coordination across platforms, internal processes, and external partners.

Why does it matter?

Frontier transformation illustrates a broader transition in how organisations approach AI deployment, focusing on long-term integration, operational consistency, and scalable implementation across different sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!