Council of the EU pushes for human-centred AI in education systems

The Council of the European Union has approved conclusions calling for an ethical, safe and human-centred approach to AI in education, stressing that teachers should remain at the heart of the learning process as AI tools become more widely used across schools and universities.

The Council said the conclusions focus on strengthening digital skills and AI literacy, guaranteeing inclusion and fairness, empowering teachers, and supporting the well-being of both teachers and learners. It also noted that the relationship between AI and teaching is being addressed for the first time in EU education policy.

The EU ministers highlighted both the opportunities and risks associated with AI-driven education systems. The Council said AI could improve accessibility, support disadvantaged learners, enable more individualised teaching and assessment methods, and reduce administrative workloads for educators.

At the same time, the conclusions raise concerns about misinformation, algorithmic bias, over-reliance on technology, reduced teacher autonomy, data protection risks and the widening of digital inequalities across Europe. The Council also warned that AI could affect learners’ concentration and skill acquisition, while raising broader societal and environmental concerns.

The conclusions call on national governments to strengthen teachers’ AI and digital skills through training, while encouraging the development and use of education-specific AI tools that provide clear pedagogical value and align with data protection, accountability and risk-awareness requirements.

The Council also said teachers should have opportunities to contribute to the design and evaluation of AI tools used in education, reflecting a digital humanism approach focused on human agency and democratic values.

Member states are urged to ensure AI deployment does not undermine teachers’ autonomy or sustainable working conditions, and that digital tools remain accessible and suitable for all learners. The European Commission was encouraged to support international cooperation, research, ethical guidance, peer-to-peer exchanges and capacity-building as AI adoption accelerates across European education systems.

Why does it matter?

AI is moving into classrooms not only as a learning tool, but as part of how teaching, assessment, administration and student support are organised. The Council’s conclusions underline that education policy will need to address more than technical adoption, including teacher autonomy, digital inequality, learner well-being, data protection and the risk of over-reliance on automated systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO explores how AI and design can reshape culture and creativity

UNESCO’s Regional Office for East Asia has launched a global call for good practice cases on how AI and design are being used to support culture, creativity, education, sustainability and social inclusion.

The call invites submissions from organisations, institutions, practitioners, educators and innovators using AI together with design approaches to create positive outcomes in cultural and creative sectors. UNESCO says the initiative is looking for practical examples that support culture, creativity, livelihoods, learning, sustainability and social inclusion.

The call focuses on four thematic areas: cultural heritage protection, documentation and interpretation; cultural tourism and visitor experience design; fashion and creative industry innovation; and design education and capacity development.

Selected projects may receive UNESCO recognition, be included in a publication or catalogue, participate in exhibitions or showcases, receive invitations to talks or events, and gain visibility through UNESCO communication channels.

The initiative reflects growing international interest in how AI can support creative and cultural sectors beyond industrial productivity. UNESCO’s framing places design principles such as inclusion, accessibility, cultural relevance and people-centred use at the centre of responsible AI deployment in cultural and educational contexts.

Submissions are open until 15 June 2026, with selected cases scheduled to be announced on 15 July 2026. Applications may be submitted in English or Chinese and are expected to demonstrate practical examples of AI supporting learning, livelihoods, creativity or sustainable development through design-oriented approaches.

Why does it matter?

The call points to a wider effort to shape AI use in culture and creativity around public value rather than automation alone. By focusing on heritage, tourism, fashion and design education, UNESCO is encouraging examples where AI supports local knowledge, creative livelihoods, cultural access and inclusive innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Using Garry Kasparov’s ‘advanced chess’ model after his 1997 defeat to IBM’s Deep Blue as an example, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China outlines AI and energy integration plan

The Chinese National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote mutual development between AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and misuse by non-state actors using powerful open-source tools, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OECD finds audit institutions are building AI capacity but struggling to scale

Public audit institutions are expanding their use of AI, but most remain at an early stage of adoption, with a significant gap between pilot projects and full operational deployment, according to a new OECD paper.

Drawing on consultations with 15 institutions across 14 countries and the European Union, the paper says AI is being explored to strengthen oversight and improve audit processes in areas such as anomaly detection, document processing, knowledge management and predictive risk assessment.

The OECD says institutional commitment is already visible across several indicators. Among the institutions consulted, 67% reported having a formal AI strategy, 80% had internal AI guidelines or policies, 87% offered AI-related staff training, and 87% had at least one AI tool in production.

However, the paper stresses that maturity levels vary widely and that many tools remain limited in scale or are still being tested. It identifies a gap between experimentation and scalable operational deployment, despite the growing integration of AI into broader digital transformation efforts.

The paper highlights several emerging audit use cases, including machine-learning systems for anomaly detection in procurement and financial records, predictive models to identify entities at higher risk of distress or non-compliance, intelligent document processing for extracting data from unstructured files, and generative AI tools for drafting, summarising and translating documents.

It also points to more specialised applications, such as semantic search, knowledge management, and visual or spatial analysis using satellite imagery, drones or other sensor-based systems.
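To make the first of these use cases concrete, here is a minimal, purely illustrative sketch of statistical anomaly detection on procurement payment amounts. The function name, threshold and sample data are all hypothetical; the audit systems the paper describes use far richer machine-learning models and many more features than a single amount column.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of payments whose z-score exceeds the threshold.

    A toy stand-in for the machine-learning anomaly detectors the OECD
    paper describes; real audit tooling would combine many features
    (supplier, timing, contract type) rather than one amount column.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if stdev > 0 and abs(a - mean) / stdev > threshold]

# Hypothetical example: five routine payments and one far outside the range.
payments = [1200, 1150, 1300, 1250, 1180, 25000]
print(flag_anomalies(payments, threshold=2.0))  # flags the 25000 payment
```

Even this simple z-score rule illustrates why the paper stresses human oversight: a flagged record is only a lead for an auditor to examine, not evidence of wrongdoing.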

Despite growing experimentation, the OECD says the main barriers to wider use remain structural. Fragmented data systems, weak interoperability, limited internal technical expertise and uneven digital infrastructure continue to slow progress.

The paper argues that robust data governance, secure and interoperable systems, and stronger in-house development capacity will be critical if public audit bodies are to scale AI responsibly while maintaining transparency, accountability and public trust.

It also stresses that AI is being positioned as a support tool rather than a substitute for auditors. Across the cases reviewed, human oversight remains central, both because of current limitations in explainability and reliability and because audit institutions are treating AI adoption cautiously in high-stakes oversight settings.

The OECD presents the current period as a transitional phase in which public audit institutions are building the foundations needed for broader and more trustworthy use of AI in oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ICESCO and Morocco sign agreement on AI and digital capacity building

The Islamic World Educational, Scientific and Cultural Organisation (ICESCO) and Morocco’s Ministry of Digital Transition and Administrative Reform have signed a memorandum of understanding on cooperation in digital transformation, AI and strategic foresight.

The agreement was signed in Rabat on the sidelines of the African Open Government Conference by ICESCO Director-General Dr Salim M. AlMalik and Dr Amal El Fallah, Minister Delegate to the Head of Government in charge of Digital Transition and Administrative Reform of Morocco.

The memorandum provides for workshops, training programmes and joint seminars aimed at building capacity among public and private sector professionals in digital transformation, AI, strategic foresight and digital diplomacy. It also covers the exchange of expertise and open data, the preparation of reference materials, and research related to future skills and professions in ICESCO member states.

The agreement further includes cooperation with universities and research centres to support a knowledge ecosystem aligned with the requirements of the digital economy. It also refers to innovation laboratories and digital tools for the digitisation, indexing, research and analysis of cultural and scientific heritage materials.

Why does it matter?

The agreement places AI within a broader capacity-building agenda that includes public-sector skills, digital diplomacy, open data, foresight and heritage digitisation. Its policy relevance lies in how international organisations and national governments are using AI cooperation not only for technology adoption but also for institutional readiness and future-skills development across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO report warns over global quantum research inequality

According to UNESCO, unequal access to quantum research infrastructure risks widening global scientific and technological divides, with nearly one in three researchers worldwide still lacking access to quantum research facilities despite rapid growth in investment and interest in the field.

The findings come from The Quantum Moment: A Global Report, Outcomes of the International Year of Quantum Science and Technology, which analysed more than 1,300 quantum science events across 83 countries and included a global survey of 590 experts in 81 countries.

The report highlights major regional disparities, with Europe and North America hosting seven times as many quantum-related events per country as Africa.

More than 150 countries still lack a national quantum strategy, even though global public and private investment in quantum science and technology reached $55.7 billion by mid-2025, according to UNESCO.

The organisation also points to a persistent gender gap, noting that while women account for a much larger share of early-career participants, they make up only around 16% of senior researchers and 12% of leadership roles in quantum fields.

UNESCO says quantum technologies could transform areas including healthcare, computing, cybersecurity, and climate modelling. To address infrastructure inequality, it has launched the Global Quantum Initiative and expanded programmes that give researchers from developing economies remote access to advanced quantum computing systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Uganda to host Digital Government Africa 2026 summit

Uganda has announced that it will host the 2026 Digital Government Africa conference, presenting the event as a platform for continental dialogue on digital transformation, public service modernisation, and government innovation.

The announcement was made at a press conference in Kampala by the Ministry of ICT and National Guidance, the National Information Technology Authority of Uganda, and representatives of African Brains Global.

According to the organisers, the summit will bring together ministers, regulators, cybersecurity experts, cloud and data centre providers, digital finance institutions, investors, innovators, and development partners from across Africa and beyond. The event is scheduled to take place in Kampala from 6 to 8 October 2026.

Uganda’s Minister of ICT and National Guidance, Chris Baryomunsi, said the conference reflects growing confidence in the country’s digital transformation efforts and offers an opportunity to showcase how ICT is shaping service delivery and national development. The government linked the summit to Uganda’s wider Digital Transformation Roadmap, which focuses on digital infrastructure, e-government services, cybersecurity resilience, digital skills, and innovation.

Officials also pointed to Uganda’s expanding digital infrastructure. According to the ministry, the National Backbone Infrastructure now exceeds 5,000 kilometres of fibre-optic cable, connecting government institutions, districts, and urban centres, while more than 1,500 government sites use high-speed internet to support systems such as financial management, e-procurement, and online tax services.

The government also cited broader indicators of digital growth, including more than 44.3 million active mobile connections, expanding internet access through 4G and emerging 5G trials, and an ICT sector contributing more than 9% to GDP. Officials said hosting the summit should strengthen engagement between policymakers and innovators and raise Uganda’s profile as an ICT investment destination.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia expands collaboration efforts in key science and technology areas

The Australian Government Department of Industry, Science and Resources has announced $6.2 million in funding for nine international projects under round two of the Global Science and Technology Diplomacy Fund (GSTDF).

The programme supports collaboration, innovation and commercialisation in priority technology areas. The selected projects focus on AI, advanced manufacturing, quantum technologies and hydrogen, with several initiatives applying AI to areas such as robotics, satellite networks and ocean forecasting.

According to the department, Australian researchers will work with international partners across Asia-Pacific, with projects spanning fields from healthcare to environmental monitoring and space technologies.

The funding reflects a broader push to deepen international cooperation and advance strategic technologies, with collaborations involving partners in Singapore, Vietnam, Japan, Malaysia, New Zealand and South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!