UN calls for AI-driven transformation of future cities

UN organisations and urban experts have called on governments, city leaders, and the private sector to accelerate the use of AI and digital technologies to shape the future of urban life. The appeal was made during the 3rd UN Virtual Worlds Day held in Geneva.

With 70 percent of the global population expected to live in urban areas by 2050, discussions focused on the emergence of an ‘AI-enabled citiverse’ combining AI, digital twins and spatial intelligence to improve planning, infrastructure management and quality of life in cities.

Participants outlined five strategic priorities, including strengthening inclusive AI systems, improving data-driven decision-making, and ensuring responsible economic and social development. Emphasis was also placed on global cooperation and the need for common standards to guide digital urban transformation.

The conference also highlighted key risks, including governance gaps, trust and safety concerns, and widening digital divides. A joint briefing warned that the benefits of AI-driven urban systems must be distributed fairly, including to developing economies and underserved communities.

Why does it matter? 

The integration of AI into urban systems signals a structural shift in how cities are designed, managed and experienced. As urbanisation accelerates globally, AI-enabled infrastructure could significantly improve efficiency, resilience and sustainability, but also risks deepening inequality if governance and access remain uneven across regions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Worldwide AI adoption surges, new report shows

Ireland remains one of the world’s leading markets for AI adoption, with 48.4% of its working-age population using AI tools, according to Microsoft’s Global AI Diffusion Report for the first quarter of 2026.

Microsoft said Ireland recorded a quarterly increase of 3.8 percentage points, placing it fourth globally and close to surpassing the 50% milestone. If current trends continue, Ireland could overtake Norway, which currently ranks third for AI adoption.

Globally, AI usage increased from 16.3% to 17.8% of the working-age population during the first quarter of 2026. Adoption remains uneven, with 26 economies now exceeding 30% usage, while the United Arab Emirates leads globally at 70.1%.

Regional trends show strong momentum in Asia, driven in part by improved AI capabilities for Asian languages. Microsoft said South Korea, Thailand and Japan recorded some of the greatest movement during the quarter.

At the same time, the gap between the Global North and Global South widened, with AI usage reaching 27.5% in developed regions compared with 15.4% elsewhere. Microsoft said it measures AI diffusion as the share of people aged 15 to 64 who used a generative AI product during the reported period.
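
As a rough illustration of the metric described above, the sketch below computes a diffusion share from hypothetical headcounts; the numbers are invented purely to mirror the percentages quoted in the report and are not Microsoft's underlying data.

```python
# Illustrative only: Microsoft's diffusion metric is the share of people aged
# 15-64 who used a generative AI product in the period. The headcounts below
# are hypothetical, chosen to reproduce the percentages quoted in the report.

def diffusion_share(gen_ai_users: int, working_age_population: int) -> float:
    """Share of the working-age (15-64) population that used generative AI."""
    return gen_ai_users / working_age_population

global_share = diffusion_share(178, 1_000)   # ~17.8% globally in Q1 2026
north_share = diffusion_share(275, 1_000)    # ~27.5% in developed regions
south_share = diffusion_share(154, 1_000)    # ~15.4% elsewhere

gap_pp = (north_share - south_share) * 100   # gap in percentage points
print(f"Global: {global_share:.1%} | North-South gap: {gap_pp:.1f} pp")
```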

The report also examined how AI-assisted coding is affecting software development. Microsoft said global git pushes increased 78% year on year, while US software developer employment reached about 2.2 million in 2025 and was about 4% higher in March 2026 than in March 2025. The report cautions that it is still too early to determine the full labour-market impact of AI-assisted coding.

Why does it matter?

The report shows how quickly generative AI is becoming part of everyday work and digital activity, but also how uneven that adoption remains across countries and regions. If high-adoption economies continue to move faster, AI could widen existing digital and economic divides, especially where infrastructure, language support, skills and access remain weaker.

The findings also show why governments and businesses are under pressure to adapt workforce training, regulation and digital infrastructure as AI use spreads. Rising adoption may support productivity gains, but it also raises questions about who benefits, which regions fall behind and how labour markets adjust as AI tools become more embedded in software development and services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Australian Senate opens inquiry into AI data centres

The Australian Greens announced in an official statement that the Senate has established a parliamentary inquiry into AI data centres. The move follows growing concern over the rapid expansion of energy-intensive AI infrastructure and limited federal oversight.

The inquiry will examine environmental, economic and social impacts, including energy and water use, effects on communities, and the regulatory framework governing AI. It aims to better understand how these facilities influence resources and infrastructure.

Greens Senator Sarah Hanson-Young said communities have raised concerns about pressure on energy supply, water availability and environmental protection. She also called for greater transparency and parliamentary scrutiny of agreements involving global technology companies.

The party warned against repeating past regulatory failures and stressed the need for accountability as AI infrastructure expands. The inquiry is expected to gather input from affected communities and stakeholders across Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised by the Organization for Security and Co-operation in Europe in Geneva.

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China AI ethics draft translated by Georgetown’s CSET

The Center for Security and Emerging Technology (CSET), a policy research organisation within Georgetown University’s Walsh School of Foreign Service, has published an English translation of China’s draft trial measures on ethics reviews for AI technology.

The translated draft says the measures would apply to AI-related scientific and technological activities conducted within China that may pose ethical risks to human health, human dignity, the ecological environment, public order, or sustainable development. It covers universities, research institutions, medical and health institutions, enterprises, and other organisations involved in AI research and development.

Under the draft, organisations with the necessary conditions would be expected to establish AI technology ethics committees, while others could commission specialised ethics service centres to conduct reviews. Review applications would need to include details on the AI activity, algorithms, data sources, data cleaning methods, testing and evaluation, expected applications, user groups, risk assessments, and risk prevention plans.

The review process would focus on fairness and impartiality; controllability and trustworthiness; transparency and explainability; accountability and traceability; and whether the activity has scientific and social value. Committees or service centres would generally have 30 days to approve, reject, or request revisions to an application.

Higher-risk activities would require expert reconsideration. The draft list includes human-computer fusion systems that strongly affect behaviour, psychological or emotional states, or health; AI models and systems able to mobilise public opinion or channel social consciousness; and highly autonomous automated decision-making systems used in safety or personal health-risk scenarios.

Approved AI activities would also be subject to follow-up reviews, generally at intervals of no more than 12 months, while activities requiring expert reconsideration would be subject to follow-up reviews at least every 6 months. Emergency ethics reviews would normally have to be completed within 72 hours.
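
Purely as an illustrative sketch, and not an official schema from the draft, the review timelines described above could be captured in a simple configuration object like the one below; the field names and structure are assumptions added for readability.

```python
# Hypothetical encoding of the review timelines described in the draft measures.
# Field names and structure are illustrative assumptions, not taken from the text.
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicsReviewTimelines:
    standard_review_days: int        # committee window to approve, reject or request revisions
    follow_up_interval_months: int   # maximum interval for routine follow-up reviews
    high_risk_follow_up_months: int  # follow-ups for activities needing expert reconsideration
    emergency_review_hours: int      # window for emergency ethics reviews

DRAFT_TIMELINES = EthicsReviewTimelines(
    standard_review_days=30,
    follow_up_interval_months=12,
    high_risk_follow_up_months=6,
    emergency_review_hours=72,
)
```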

CSET notes that China released a final trial version of the regulation in April 2026, which the centre is now translating. The newly published draft translation therefore provides insight into the regulatory structure that preceded the final version, including committee-based ethics review, external service centres, expert reconsideration, and oversight roles for the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and other departments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Council of the EU pushes for human-centred AI in education systems

The Council of the European Union has approved conclusions calling for an ethical, safe and human-centred approach to AI in education, stressing that teachers should remain at the heart of the learning process as AI tools become more widely used across schools and universities.

The Council said the conclusions focus on strengthening digital skills and AI literacy, guaranteeing inclusion and fairness, empowering teachers, and supporting the well-being of both teachers and learners. It also noted that the relationship between AI and teaching is being addressed in EU education policy for the first time.

The EU ministers highlighted both the opportunities and risks associated with AI-driven education systems. The Council said AI could improve accessibility, support disadvantaged learners, enable more individualised teaching and assessment methods, and reduce administrative workloads for educators.

At the same time, the conclusions raise concerns about misinformation, algorithmic bias, over-reliance on technology, reduced teacher autonomy, data protection risks and the widening of digital inequalities across Europe. The Council also warned that AI could affect learners’ concentration and skill acquisition, while raising broader societal and environmental concerns.

The conclusions call on national governments to strengthen teachers’ AI and digital skills through training, while encouraging the development and use of education-specific AI tools that provide clear pedagogical value and align with data protection, accountability and risk-awareness requirements.

The Council also said teachers should have opportunities to contribute to the design and evaluation of AI tools used in education, reflecting a digital humanism approach focused on human agency and democratic values.

Member states are urged to ensure AI deployment does not undermine teachers’ autonomy or sustainable working conditions, and that digital tools remain accessible and suitable for all learners. The European Commission was encouraged to support international cooperation, research, ethical guidance, peer-to-peer exchanges and capacity-building as AI adoption accelerates across European education systems.

Why does it matter?

AI is moving into classrooms not only as a learning tool, but as part of how teaching, assessment, administration and student support are organised. The Council’s conclusions underline that education policy will need to address more than technical adoption, including teacher autonomy, digital inequality, learner well-being, data protection and the risk of over-reliance on automated systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO explores how AI and design can reshape culture and creativity

UNESCO’s Regional Office for East Asia has launched a global call for good practice cases on how AI and design are being used to support culture, creativity, education, sustainability and social inclusion.

The call invites submissions from organisations, institutions, practitioners, educators and innovators using AI together with design approaches to create positive outcomes in cultural and creative sectors. UNESCO says the initiative is looking for practical examples that support culture, creativity, livelihoods, learning, sustainability and social inclusion.

The call focuses on four thematic areas: cultural heritage protection, documentation and interpretation; cultural tourism and visitor experience design; fashion and creative industry innovation; and design education and capacity development.

Selected projects may receive UNESCO recognition, be included in a publication or catalogue, participate in exhibitions or showcases, receive invitations to talks or events, and gain visibility through UNESCO communication channels.

The initiative reflects growing international interest in how AI can support creative and cultural sectors beyond industrial productivity. UNESCO’s framing places design principles such as inclusion, accessibility, cultural relevance and people-centred use at the centre of responsible AI deployment in cultural and educational contexts.

Submissions are open until 15 June 2026, with selected cases scheduled to be announced on 15 July 2026. Applications may be submitted in English or Chinese and are expected to demonstrate practical examples of AI supporting learning, livelihoods, creativity or sustainable development through design-oriented approaches.

Why does it matter?

The call points to a wider effort to shape AI use in culture and creativity around public value rather than automation alone. By focusing on heritage, tourism, fashion and design education, UNESCO is encouraging examples where AI supports local knowledge, creative livelihoods, cultural access and inclusive innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Citing the ‘advanced chess’ model that Garry Kasparov introduced after his 1997 defeat to IBM’s Deep Blue, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China outlines AI and energy integration plan

The Chinese National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote mutual development between AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US and China reportedly weigh AI risk talks ahead of leaders’ summit

The United States and China are considering launching official discussions on AI risk management, The Wall Street Journal reported, citing people familiar with the matter.

According to the report, the White House and the Chinese government are also considering whether to place AI on the agenda for a planned summit in Beijing between US President Donald Trump and Chinese President Xi Jinping. If agreed, the talks would mark the first AI-specific engagement between the two governments under the current US administration.

The possible dialogue could focus on risks linked to advanced AI systems, including unexpected model behaviour, autonomous military applications and misuse by non-state actors using powerful open-source tools, people familiar with the discussions told the newspaper. The report said Washington is waiting for Beijing to designate a counterpart for the talks.

The WSJ reported that US Treasury Secretary Scott Bessent is leading the US side, while Chinese Vice Finance Minister Liao Min has been involved in discussions on setting up such a channel. The newspaper added that the two presidents would ultimately decide whether AI appears on the formal summit agenda.

Liu Pengyu, spokesperson for the Chinese Embassy in Washington, was cited as saying that China is ready to engage in communication on AI risk mitigation. Analysts have raised the possibility that any future dialogue could support crisis-management tools, including an AI hotline between senior leaders.

The report places the latest deliberations in the context of earlier US-China engagement on AI. In 2023, then US President Joe Biden and Xi launched a formal AI dialogue, and both sides later said humans, not AI, would retain authority over nuclear-launch decisions. The WSJ said the earlier process produced limited results, but AI has remained a high-level focus in bilateral relations.

Non-governmental discussions have also reportedly continued in parallel, including exchanges involving former Microsoft research executive Craig Mundie and Chinese counterparts from Tsinghua University and major AI companies. Participants cited by the newspaper said those exchanges have focused on frontier-model safety, technical guardrails and broader questions of strategic stability.

Why does it matter?

A formal AI risk channel between Washington and Beijing would signal that both governments see advanced AI as a strategic stability issue, not only an economic or technological race. Even brief talks could matter if they create channels for crisis communication about military AI, frontier-model failures, or misuse by non-state actors. However, because the discussions are still only reported as under consideration, the significance lies in the possibility of a risk-management mechanism, not in any confirmed diplomatic breakthrough.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!