WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Using Garry Kasparov’s ‘advanced chess’ model after his 1997 defeat to IBM’s Deep Blue as an example, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Young users’ reliance on ChatGPT raises questions over AI advice and autonomy

Sam Altman has described a generational divide in how people use ChatGPT, saying younger users are integrating the tool more deeply into learning, planning and everyday decision-making.

Speaking at Sequoia Capital’s AI Ascent 2025, the OpenAI CEO said older users tend to treat ChatGPT more like a search tool, while people in their 20s and 30s often use it as a personal advisor. College students, he said, are going further by treating ChatGPT almost like an operating system, connecting it to files, tasks and complex workflows.

The remarks point to a shift in how AI tools are being embedded into daily routines, particularly among students and younger adults. Business Insider reported that a February 2025 OpenAI report found US college students were among the platform’s most frequent users, while a Pew Research Center survey found that 26% of US teens aged 13 to 17 used ChatGPT for schoolwork in 2024, double the share recorded in 2023.

Altman’s comments also raise questions about dependence, accuracy and boundaries as AI systems move closer to advisory roles. While users may benefit from private spaces to test ideas, organise tasks and prepare decisions, concerns remain over over-reliance, data privacy and the shifting role of human relationships in decision-making.

Why does it matter?

The trend suggests that AI is becoming more than an information tool for younger users. As ChatGPT and similar systems become part of studying, planning and personal decision-making, they influence not only how information is consumed, but also how habits, confidence and judgement develop.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission moves to standardise AI transparency obligations

The European Commission has published draft guidelines outlining how transparency obligations under Article 50 of the AI Act should be applied across certain AI systems. The guidance is intended to help competent authorities, providers and deployers ensure compliance in a consistent, effective and uniform manner.

Prepared in parallel with a separate Code of Practice on the marking and labelling of AI-generated content, the draft guidelines clarify the scope of legal obligations and address areas not covered by the code. The focus is on helping users identify when they are interacting with AI systems or encountering AI-generated content.
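By way of illustration only, a machine-readable disclosure might resemble the minimal sketch below, which wraps generated text in a JSON provenance label. Article 50 does not prescribe this format, and the model name and field names here are hypothetical; the Code of Practice is expected to define the actual marking and labelling techniques, which in practice lean on more robust schemes such as C2PA manifests or watermarking.

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_id: str) -> str:
    """Wrap generated text with a machine-readable AI-generation label."""
    label = {
        "ai_generated": True,          # discloses that the content is AI-made
        "generator": model_id,         # identifies the generating system
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # A simple side-car JSON structure; real marking schemes are
    # tamper-resistant, which a plain JSON wrapper is not.
    return json.dumps({"label": label, "content": text})

print(label_ai_output("Sample output...", "example-model-v1"))
```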

A targeted consultation is open until 3 June, allowing stakeholders to provide feedback on the draft framework. The consultation will inform the final version of the guidelines, which are intended to support more consistent implementation and enforcement of Article 50 obligations across the EU.

The initiative reflects a broader regulatory push in the European Union to strengthen oversight of AI transparency, particularly as generative AI tools become more widely used in content creation, communication and digital services.

Why does it matter?

Transparency obligations are central to the AI Act’s approach to trust in digital environments. Clear disclosure and labelling requirements can help users understand when they are interacting with AI systems or encountering AI-generated material, reducing risks linked to manipulation, misinformation and misplaced reliance on machine-generated outputs.

Consistent guidance also matters for legal certainty. Providers and deployers need clearer expectations on how Article 50 applies in practice, while regulators need a common basis for enforcement across member states.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN Virtual Worlds Day to examine AI-driven cities

The International Telecommunication Union will host the 3rd UN Virtual Worlds Day at its headquarters in Geneva, bringing together UN entities, governments, city leaders, industry representatives and innovators to discuss the future of AI-driven cities and communities.

The UN Virtual Worlds Day is being organised with a wide group of UN and international partners, including ITCILO, FAO, UNDP, UNECA, UNECE, UNECLAC, UN-Habitat, UNICEF, the UN Innovation Network, UNU-EGOV, the World Bank, WIPO, WMO, and the Global Cities Hub.

The programme will include high-level dialogue and an Ambassador Roundtable focused on how artificial intelligence, immersive virtual environments, spatial intelligence, and other frontier technologies are shaping urban governance and public service delivery.

Discussions will also examine emerging concepts such as the AI-enabled citiverse, where digital and immersive technologies may be used to support planning, service design, and engagement in cities and communities.

The event will link these developments to the implementation of the Global Digital Compact, with a focus on trusted, inclusive, and people-centred outcomes for urban and community governance worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China launches AI ethics review pilot programme

China has launched a national pilot programme for AI ethics review and services, as authorities move to strengthen oversight of growing risks linked to advanced AI systems.

The initiative, announced by China’s Ministry of Industry and Information Technology, aims to establish practical mechanisms for AI ethics governance as concerns over algorithmic discrimination, emotional dependence, and broader societal risks continue to grow.

Authorities said the initiative will initially operate in provincial-level regions hosting national AI industrial innovation pilot zones. It will focus on refining provincial AI ethics review rules, supporting the creation of ethics committees, and developing specialised ethics review and service centres. Chinese regulators also plan to transform the ethics review process into technical standards while improving mechanisms for reporting AI-related ethical concerns.

The Ministry of Industry and Information Technology has also called for the creation of a national AI ethics risk monitoring service network, along with training materials, ethics education courses, and early-warning systems to support pilot cities.

By embedding ethics reviews into AI development and deployment processes, China appears to be building a more institutionalised framework for managing the societal and technological risks associated with increasingly powerful AI systems.

Why does it matter?

China’s latest move signals a shift from broad AI governance principles towards operational enforcement mechanisms embedded directly into regional innovation ecosystems. The programme could influence how other governments approach AI oversight, particularly as global concerns grow over algorithmic bias, psychological manipulation, and accountability in frontier AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI productivity claims need stronger scrutiny, Ada Lovelace Institute finds

The Ada Lovelace Institute has warned that AI productivity claims in the UK public sector need stronger scrutiny, as headline estimates are already shaping spending, workforce planning and public service reform.

In a policy briefing on AI and public services, the institute says UK government communications, industry reports and third-party analyses frequently present AI as a tool for cutting costs, saving time and boosting growth. It argues that stronger evidence is needed to assess whether those claims translate into public value.

The briefing notes that the UK’s 2025 Spending Review committed to ‘a step change in investment in digital and AI across public services’, informed by estimates of potential savings and productivity benefits that run as high as £45 billion per year.

Many current estimates rely on limited or uncertain evidence, the institute argues. Studies often measure first-order effects, such as time savings or cost reductions, while paying less attention to outcomes that matter for public services, including service quality, equity, citizen experience, institutional capacity and worker well-being.

The briefing also warns that productivity claims often fail to fully account for implementation costs, trade-offs, transition periods and the opportunity cost of prioritising AI investment over other public spending.

Several methodological concerns are identified in AI productivity research, including reliance on task automation models, self-reported surveys and limited triangulation across methods. The institute also highlights the growing use of large language models to assess which tasks they can perform, warning that this creates a circular dynamic in which AI systems are used to judge their own capabilities.

Headline figures can obscure mixed evidence, with productivity estimates varying widely and positive findings often receiving more attention than contradictory or null results. Industry involvement can also shape what gets researched and how results are framed, particularly when AI companies fund studies, provide tools or publish their own reports.

To improve the evidence base, the Ada Lovelace Institute calls for productivity research to reflect uncertainty, report ranges rather than single headline numbers and measure outcomes that matter for public services. It recommends more independent research, transparent methodologies, longer-term studies and measurement built into AI deployments from the start, including tracking service quality, error rates, staff well-being and citizen satisfaction.
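To make that reporting recommendation concrete, here is a small, purely illustrative sketch of reporting a bootstrap range instead of a single headline figure. The data and method are ours, not the institute’s: the per-task savings are invented numbers, and a percentile bootstrap is just one simple way to express uncertainty.

```python
import random

# Hypothetical per-task time savings (minutes) from a pilot study --
# illustrative numbers only, not from the Ada Lovelace briefing.
savings = [12, 3, 0, 25, 7, -2, 15, 4, 9, 1, 30, 5]

def bootstrap_interval(data, n_resamples=10_000, alpha=0.10):
    """Return a (1 - alpha) bootstrap percentile interval for the mean."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

point = sum(savings) / len(savings)
low, high = bootstrap_interval(savings)
print(f"Headline estimate: {point:.1f} min/task")
print(f"90% interval: {low:.1f} to {high:.1f} min/task")
```

Even on this toy data, the interval is far wider than the single mean suggests, which is the institute’s point: a headline number hides how uncertain the underlying estimate is.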

Why does it matter?

Public-sector AI is increasingly being justified through promises of efficiency, savings and productivity growth. If those claims are based on weak or narrow evidence, governments risk making major investment and workforce decisions before understanding the real costs, trade-offs and effects on service quality.

The briefing is important because it shifts the question from whether AI can save time in isolated tasks to whether AI improves public services in practice. That includes outcomes such as fairness, reliability, staff well-being, citizen experience and institutional capacity, which are harder to measure than headline savings but central to public value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC guidance sets out platform duties under Take It Down Act

The US Federal Trade Commission has issued guidance for online platforms on compliance with Section 3 of the Take It Down Act, which takes effect on 19 May 2026 and requires covered platforms to remove non-consensual intimate photos or videos within 48 hours of receiving a valid request.

The FTC says the law applies to a broad range of online platforms, including websites, apps, social media, messaging, image and video sharing, and gaming services. Platforms may fall under the law if they primarily provide a forum for user-generated content or regularly publish, curate, host, or furnish intimate content shared without consent.

Covered platforms must provide clear and conspicuous plain-language information about how people can submit removal requests for intimate photos or videos shared without consent. The FTC says platforms should make the process easy to use, including for people who do not have an account on the service.

The law also covers ‘digital forgeries’, including intimate images that were digitally created or altered using software, apps, or AI. Platforms that receive a valid request must remove the reported content and make reasonable efforts to locate and remove known identical copies within 48 hours.

The FTC also encourages platforms to help prevent removed images from spreading further, including through hashing technology and, where appropriate, by sharing hashes with services such as the National Center for Missing and Exploited Children’s Take It Down service or StopNCII.org.
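As a rough sketch of the exact-copy matching that hashing enables, the code below keeps a blocklist of SHA-256 digests of removed files and flags byte-identical re-uploads. This is an assumption-laden simplification: services such as StopNCII.org rely on perceptual hashing to also catch resized or re-encoded copies, and the blocklist integration shown here is hypothetical, not an actual API.

```python
import hashlib
from pathlib import Path

# Hashes of content already removed under a valid request -- in practice
# these could be shared with services like StopNCII.org or NCMEC's
# Take It Down (hypothetical integration, not a real API).
blocklist: set[str] = set()

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register_removed(path: Path) -> None:
    """Record a removed file's hash so identical re-uploads can be caught."""
    blocklist.add(sha256_of(path))

def is_known_copy(path: Path) -> bool:
    """True if an upload is byte-identical to previously removed content."""
    return sha256_of(path) in blocklist
```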

Violations of the Take It Down Act will be enforced by the FTC and treated as violations of an FTC rule. The agency says platforms that breach the law may face civil penalties of $53,088 per violation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia launches national AI platform ‘AI.gov.au’

Australia’s Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, highlighting the need for clear starting points, practical advice and support for AI organisational change. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China outlines AI and energy integration plan

China’s National Energy Administration, alongside the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote the mutually reinforcing development of AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!