Australian Senate opens inquiry into AI data centres

The Australian Greens announced that the Senate has established a parliamentary inquiry into AI data centres, according to its official statement. The move follows growing concern over the rapid expansion of energy-intensive AI infrastructure and limited federal oversight.

The inquiry will examine environmental, economic and social impacts, including energy and water use, effects on communities, and the regulatory framework governing AI. It aims to better understand how these facilities influence resources and infrastructure.

Greens Senator Sarah Hanson-Young said communities have raised concerns about pressure on energy supply, water availability and environmental protection. She also called for greater transparency and parliamentary scrutiny of agreements involving global technology companies.

The party warned against repeating past regulatory failures and stressed the need for accountability as AI infrastructure expands. The inquiry is expected to gather input from affected communities and stakeholders across Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised by the Organization for Security and Co-operation in Europe in Geneva.

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China AI ethics draft translated by Georgetown’s CSET

The Center for Security and Emerging Technology (CSET), a policy research organisation within Georgetown University’s Walsh School of Foreign Service, has published an English translation of China’s draft trial measures on ethics reviews for AI technology.

The translated draft says the measures would apply to AI-related scientific and technological activities conducted within China that may pose ethical risks to human health, human dignity, the ecological environment, public order, or sustainable development. It covers universities, research institutions, medical and health institutions, enterprises, and other organisations involved in AI research and development.

Under the draft, organisations with the necessary conditions would be expected to establish AI technology ethics committees, while others could commission specialised ethics service centres to conduct reviews. Review applications would need to include details on the AI activity, algorithms, data sources, data cleaning methods, testing and evaluation, expected applications, user groups, risk assessments, and risk prevention plans.

The review process would focus on fairness and impartiality; controllability and trustworthiness; transparency and explainability; accountability and traceability; and whether the activity has scientific and social value. Committees or service centres would generally have 30 days to approve, reject, or request revisions to an application.

Higher-risk activities would require expert reconsideration. The draft list includes human-computer fusion systems that strongly affect behaviour, psychological or emotional states, or health; AI models and systems able to mobilise public opinion or channel social consciousness; and highly autonomous automated decision-making systems used in safety or personal health-risk scenarios.

Approved AI activities would also be subject to follow-up reviews, generally at intervals of no more than 12 months, while activities requiring expert reconsideration would be subject to follow-up reviews at least every 6 months. Emergency ethics reviews would normally have to be completed within 72 hours.

CSET notes that China released a final trial version of the regulation in April 2026, which it is now translating. The newly published draft translation therefore provides insight into the regulatory structure that preceded the final version, including committee-based ethics review, external service centres, expert reconsideration, and oversight roles for the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and other departments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Council of the EU pushes for human-centred AI in education systems

The Council of the European Union has approved conclusions calling for an ethical, safe and human-centred approach to AI in education, stressing that teachers should remain at the heart of the learning process as AI tools become more widely used across schools and universities.

The Council said the conclusions focus on strengthening digital skills and AI literacy, guaranteeing inclusion and fairness, empowering teachers, and supporting the well-being of both teachers and learners. It also noted that the relationship between AI and teaching is being addressed for the first time in EU education policy.

The EU ministers highlighted both the opportunities and risks associated with AI-driven education systems. The Council said AI could improve accessibility, support disadvantaged learners, enable more individualised teaching and assessment methods, and reduce administrative workloads for educators.

At the same time, the conclusions raise concerns about misinformation, algorithmic bias, over-reliance on technology, reduced teacher autonomy, data protection risks and the widening of digital inequalities across Europe. The Council also warned that AI could affect learners’ concentration and skill acquisition, while raising broader societal and environmental concerns.

The conclusions call on national governments to strengthen teachers’ AI and digital skills through training, while encouraging the development and use of education-specific AI tools that provide clear pedagogical value and align with data protection, accountability and risk-awareness requirements.

The Council also said teachers should have opportunities to contribute to the design and evaluation of AI tools used in education, reflecting a digital humanism approach focused on human agency and democratic values.

Member states are urged to ensure AI deployment does not undermine teachers’ autonomy or sustainable working conditions, and that digital tools remain accessible and suitable for all learners. The European Commission was encouraged to support international cooperation, research, ethical guidance, peer-to-peer exchanges and capacity-building as AI adoption accelerates across European education systems.

Why does it matter?

AI is moving into classrooms not only as a learning tool, but as part of how teaching, assessment, administration and student support are organised. The Council’s conclusions underline that education policy will need to address more than technical adoption, including teacher autonomy, digital inequality, learner well-being, data protection and the risk of over-reliance on automated systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO explores how AI and design can reshape culture and creativity

UNESCO’s Regional Office for East Asia has launched a global call for good practice cases on how AI and design are being used to support culture, creativity, education, sustainability and social inclusion.

The call invites submissions from organisations, institutions, practitioners, educators and innovators using AI together with design approaches to create positive outcomes in cultural and creative sectors. UNESCO says the initiative is looking for practical examples that support culture, creativity, livelihoods, learning, sustainability and social inclusion.

The call focuses on four thematic areas: cultural heritage protection, documentation and interpretation; cultural tourism and visitor experience design; fashion and creative industry innovation; and design education and capacity development.

Selected projects may receive UNESCO recognition, be included in a publication or catalogue, participate in exhibitions or showcases, receive invitations to talks or events, and gain visibility through UNESCO communication channels.

The initiative reflects growing international interest in how AI can support creative and cultural sectors beyond industrial productivity. UNESCO’s framing places design principles such as inclusion, accessibility, cultural relevance and people-centred use at the centre of responsible AI deployment in cultural and educational contexts.

Submissions are open until 15 June 2026, with selected cases scheduled to be announced on 15 July 2026. Applications may be submitted in English or Chinese and are expected to demonstrate practical examples of AI supporting learning, livelihoods, creativity or sustainable development through design-oriented approaches.

Why does it matter?

The call points to a wider effort to shape AI use in culture and creativity around public value rather than automation alone. By focusing on heritage, tourism, fashion and design education, UNESCO is encouraging examples where AI supports local knowledge, creative livelihoods, cultural access and inclusive innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Citing Garry Kasparov’s ‘advanced chess’ model, developed after his 1997 defeat to IBM’s Deep Blue, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China outlines AI and energy integration plan

The Chinese National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote mutual development between AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and misuse by non-state actors using powerful open-source tools, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OECD finds audit institutions are building AI capacity but struggling to scale

Public audit institutions are expanding their use of AI, but most remain at an early stage of adoption, with a significant gap between pilot projects and full operational deployment, according to a new OECD paper.

Drawing on consultations with 15 institutions across 14 countries and the European Union, the paper says AI is being explored to strengthen oversight and improve audit processes in areas such as anomaly detection, document processing, knowledge management and predictive risk assessment.

The OECD says institutional commitment is already visible across several indicators. Among the institutions consulted, 67% reported having a formal AI strategy, 80% had internal AI guidelines or policies, 87% offered AI-related staff training, and 87% had at least one AI tool in production.

However, the paper stresses that maturity levels vary widely and that many tools remain limited in scale or are still being tested. It identifies a gap between experimentation and scalable operational deployment, despite the growing integration of AI into broader digital transformation efforts.

The paper highlights several emerging audit use cases, including machine-learning systems for anomaly detection in procurement and financial records, predictive models to identify entities at higher risk of distress or non-compliance, intelligent document processing for extracting data from unstructured files, and generative AI tools for drafting, summarising and translating documents.

It also points to more specialised applications, such as semantic search, knowledge management, and visual or spatial analysis using satellite imagery, drones or other sensor-based systems.

Despite growing experimentation, the OECD says the main barriers to wider use remain structural. Fragmented data systems, weak interoperability, limited internal technical expertise and uneven digital infrastructure continue to slow progress.

The paper argues that robust data governance, secure and interoperable systems, and stronger in-house development capacity will be critical if public audit bodies are to scale AI responsibly while maintaining transparency, accountability and public trust.

It also stresses that AI is being positioned as a support tool rather than a substitute for auditors. Across the cases reviewed, human oversight remains central, both because of current limitations in explainability and reliability and because audit institutions are treating AI adoption cautiously in high-stakes oversight settings.

The OECD presents the current period as a transitional phase in which public audit institutions are building the foundations needed for broader and more trustworthy use of AI in oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ICESCO and Morocco sign agreement on AI and digital capacity building

The Islamic World Educational, Scientific and Cultural Organisation (ICESCO) and Morocco’s Ministry of Digital Transition and Administrative Reform have signed a memorandum of understanding on cooperation in digital transformation, AI and strategic foresight.

The agreement was signed in Rabat on the sidelines of the African Open Government Conference by ICESCO Director-General Dr Salim M. AlMalik and Dr Amal El Fallah, Minister Delegate to the Head of Government in charge of Digital Transition and Administrative Reform of Morocco.

The memorandum provides for workshops, training programmes and joint seminars aimed at building capacity among public and private sector professionals in digital transformation, AI, strategic foresight and digital diplomacy. It also covers the exchange of expertise and open data, the preparation of reference materials, and research related to future skills and professions in ICESCO member states.

The agreement further includes cooperation with universities and research centres to support a knowledge ecosystem aligned with the requirements of the digital economy. It also refers to innovation laboratories and digital tools for the digitisation, indexing, research and analysis of cultural and scientific heritage materials.

Why does it matter?

The agreement places AI within a broader capacity-building agenda that includes public-sector skills, digital diplomacy, open data, foresight and heritage digitisation. Its policy relevance lies in how international organisations and national governments are using AI cooperation not only for technology adoption, but also for institutional readiness and future skills development across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!