WEF article says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Using Garry Kasparov’s ‘advanced chess’ model, developed after his 1997 defeat to IBM’s Deep Blue, as an example, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI productivity claims need stronger scrutiny, according to the Ada Lovelace Institute’s findings

The Ada Lovelace Institute has warned that AI productivity claims in the UK public sector need stronger scrutiny, as headline estimates are already shaping spending, workforce planning and public service reform.

In a policy briefing on AI and public services, the institute says UK government communications, industry reports and third-party analyses frequently present AI as a tool for cutting costs, saving time and boosting growth. It argues that stronger evidence is needed to assess whether those claims translate into public value.

The briefing notes that the UK’s 2025 Spending Review committed to ‘a step change in investment in digital and AI across public services’, informed by estimates of potential savings and productivity benefits that run as high as £45 billion per year.

Many current estimates rely on limited or uncertain evidence, the institute argues. Studies often measure first-order effects, such as time savings or cost reductions, while paying less attention to outcomes that matter for public services, including service quality, equity, citizen experience, institutional capacity and worker well-being.

The briefing also warns that productivity claims often fail to fully account for implementation costs, trade-offs, transition periods and the opportunity cost of prioritising AI investment over other public spending.

Several methodological concerns are identified in AI productivity research, including reliance on task automation models, self-reported surveys and limited triangulation across methods. The institute also highlights the growing use of large language models to assess which tasks they can perform, warning that this creates a circular dynamic in which AI systems are used to judge their own capabilities.

Headline figures can obscure mixed evidence, with productivity estimates varying widely and positive findings often receiving more attention than contradictory or null results. Industry involvement can also shape what gets researched and how results are framed, particularly when AI companies fund studies, provide tools or publish their own reports.

To improve the evidence base, the Ada Lovelace Institute calls for productivity research to reflect uncertainty, report ranges rather than single headline numbers and measure outcomes that matter for public services. It recommends more independent research, transparent methodologies, longer-term studies and measurement built into AI deployments from the start, including tracking service quality, error rates, staff well-being and citizen satisfaction.

Why does it matter?

Public-sector AI is increasingly being justified through promises of efficiency, savings and productivity growth. If those claims are based on weak or narrow evidence, governments risk making major investment and workforce decisions before understanding the real costs, trade-offs and effects on service quality.

The briefing is important because it shifts the question from whether AI can save time in isolated tasks to whether AI improves public services in practice. That includes outcomes such as fairness, reliability, staff well-being, citizen experience and institutional capacity, which are harder to measure than headline savings but central to public value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS appears to want Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over which types of personal information are used in an age assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, with the guidance applying to online services in Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chainalysis points to rising adoption of blockchain forecasting

Crypto prediction markets are expanding rapidly as blockchain technology reshapes how users speculate on and hedge against real-world events, according to blockchain analytics firm Chainalysis.

Platforms that allow traders to take positions on elections, interest rates, sports and geopolitical developments have attracted both retail users and institutional firms, pushing the sector towards a more mature financial structure. Chainalysis says activity has grown sharply since late 2024, with inflows reflecting both retail participation and deposits from market makers.

The firm says major financial and crypto companies are increasingly building infrastructure around event-based contracts, pointing to platforms such as Robinhood, Coinbase and Crypto.com, which are exploring or launching prediction market offerings.

Chainalysis argues that blockchain transparency could help prediction markets address compliance and market-integrity risks by recording transactions on public ledgers. The firm says that visibility can support investigations into money laundering, sanctions exposure, wash trading, insider trading and market manipulation.

Regulatory uncertainty nevertheless remains a major obstacle. In the United States, regulators and state authorities continue to debate whether some prediction markets should be treated as financial derivatives or gambling products. Chainalysis also notes that several jurisdictions in Europe, Asia-Pacific and Latin America have restricted or blocked major prediction market platforms.

The firm argues that stronger blockchain-based monitoring tools could help regulators and compliance teams support responsible innovation while reducing financial crime and market abuse risks.

Why does it matter?

The growth of crypto prediction markets points to a wider convergence between digital finance, public forecasting and event-based speculation. Institutional interest suggests the sector is moving beyond retail betting, but unresolved questions over gambling law, derivatives regulation, market manipulation and the use of non-public information will shape whether these platforms become a recognised part of financial markets or remain legally fragmented.

Chainalysis also raises a broader governance question: whether public-ledger transparency can make crypto-native markets easier to monitor than traditional betting or derivatives systems, or whether global accessibility and fragmented oversight will create new risks for regulators.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces a trusted contact safety feature in ChatGPT

OpenAI has started rolling out Trusted Contact, an optional safety feature in ChatGPT designed to help connect adult users with real-world support during moments of serious emotional distress.

The feature allows users to nominate one trusted adult, such as a friend, family member or caregiver, who may receive a notification if OpenAI’s automated systems and trained reviewers detect that the user may have discussed self-harm in a way that indicates a serious safety concern.

OpenAI said the feature is intended to add another layer of support alongside existing safeguards in ChatGPT, including prompts that encourage users to contact crisis hotlines, emergency services, mental health professionals, or trusted people when appropriate. The company stressed that Trusted Contact does not replace professional care or crisis services.

Users can add a trusted contact through ChatGPT settings. The contact receives an invitation explaining the role and must accept it within one week before the feature becomes active. Users can later edit or remove their trusted contact, while the trusted contact can also remove themselves.

If ChatGPT detects a possible serious self-harm concern, the user is informed that their trusted contact may be notified and is encouraged to reach out directly. A small team of specially trained reviewers then assesses the situation before any notification is sent.

OpenAI said notifications are intentionally limited and do not include chat details or transcripts. Instead, they share the general reason that self-harm came up in a potentially concerning way and encourage the trusted contact to check in. The company said every notification undergoes human review and that it aims to complete safety reviews in under one hour.

The feature was developed with guidance from clinicians, researchers and organisations specialising in mental health and suicide prevention, including the American Psychological Association. OpenAI said Trusted Contact forms part of broader efforts to improve how AI systems respond to people experiencing distress and connect them with real-world care, relationships and resources.

Why does it matter?

Trusted Contact points to a broader shift in AI safety away from content moderation alone toward real-world support mechanisms for users in moments of vulnerability. As conversational AI systems become part of everyday personal reflection and emotional support, companies face growing pressure to define when and how they should intervene, how much privacy to preserve, and what role human review should play in high-risk situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Central Bank moves forward with digital euro technical work

The European Central Bank is advancing technical work on the digital euro, a proposed electronic form of central bank money designed to complement cash in an increasingly digital payments landscape.

The project reflects Europe’s response to the rapid shift towards digital payments, where cards, apps and mobile wallets are increasingly used for everyday transactions. The ECB says a digital euro would provide a European payment option that could be used across the euro area, both online and offline.

Users would be able to store digital euro holdings in an account set up with a bank or public intermediary and use them for in-store, online and person-to-person payments. The ECB says the system would aim to combine the convenience of digital payments with features associated with cash, including offline functionality.

Policy objectives include strengthening Europe’s strategic autonomy in payments, supporting monetary sovereignty and ensuring access to public money in digital form. The ECB has also presented privacy as a central design feature, saying offline digital euro payments would offer cash-like privacy, with transaction details known only to the payer and the recipient.

The project remains conditional on the EU legislative process. The ECB aims to be technically ready for a potential first issuance of the digital euro in 2029, assuming the necessary EU legislation is adopted in 2026.

Supporters view the digital euro as a way to preserve the role of central bank money in digital payments and reduce reliance on non-European payment providers. Debate continues over how to balance innovation, privacy, financial inclusion, bank intermediation and public trust.

Why does it matter?

The digital euro would shape how public money functions in a digital economy increasingly dominated by private payment platforms and international card schemes. Its significance lies not only in creating a new payment tool, but in preserving access to central bank money, supporting European payment sovereignty and setting privacy expectations for public digital infrastructure.

Its success will depend on whether the final design can offer clear benefits over existing payment options while maintaining trust, usability and strong safeguards. The project also raises broader questions about how central banks remain relevant in everyday payments without crowding out private-sector innovation or weakening the role of commercial banks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court backs Solvinity DigiD contract despite US data access fears

The District Court of The Hague has rejected an attempt by three Dutch citizens to block the government from renewing its contract with Solvinity, the company responsible for hosting and technically managing systems linked to DigiD.

The plaintiffs argued that Solvinity’s planned acquisition by US-based IT provider Kyndryl could place sensitive data from more than 16 million DigiD users under US jurisdiction, potentially exposing it to US authorities and creating risks to critical public services such as healthcare, pensions, taxes, and unemployment systems.

Despite these concerns, the court ruled in favour of the Dutch State, allowing the agreement to proceed. Judges did not accept arguments that the deal would immediately threaten data security or justify halting the contract.

The decision leaves further scrutiny to the Investment Assessment Office, which is reviewing national security risks linked to the acquisition. The case highlights ongoing tensions around digital sovereignty and data protection in the Netherlands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WTO members form duty-free pact after e-commerce moratorium lapses

The United States and 18 other World Trade Organization members have moved to create a separate pact pledging not to impose customs duties on electronic transmissions, after members failed to renew the wider WTO e-commerce moratorium.

According to the document, the group includes the United States, Japan, South Korea, Singapore, Australia, Norway, and Argentina. The 19 members said they would not impose duties on electronic transmissions for an unspecified period and expressed disappointment that the multilateral moratorium had lapsed.

Members of the group said they remained committed to providing businesses and consumers with a measure of predictability and certainty in the absence of the WTO-wide moratorium. They also invited other WTO members to join the arrangement.

First agreed in 1998 and renewed repeatedly since then, the moratorium prevented WTO members from imposing customs duties on cross-border electronic transmissions, including streaming, downloads and software transfers.

At MC13 in March 2024, WTO members adopted the most recent ministerial decision on the issue, extending the practice of not imposing customs duties on electronic transmissions until the 14th Ministerial Conference or 31 March 2026, whichever came earlier.

Its lapse followed failed efforts to extend the arrangement, with Brazil maintaining its opposition to a four-year renewal.

US Ambassador to the WTO Joseph Barloon told delegates that Washington was launching the plurilateral agreement to give businesses and consumers greater certainty and predictability. He said the move did not close the door to multilateral engagement, but that the United States would not wait for all WTO members to agree before responding to stakeholder needs.

Business groups warned that the failure to preserve a WTO-wide moratorium would raise concerns about global digital trade. Sabina Ciofu of techUK said the 19-member pact offered a way forward but that the absence of a multilateral agreement was worrying. At the same time, International Chamber of Commerce Secretary General John Denton described the pact as a temporary fix rather than a substitute for a WTO-wide deal.

Why does it matter?

The lapse of the WTO e-commerce moratorium weakens one of the longest-standing global understandings underpinning digital trade. A 19-member pact may preserve duty-free treatment among participating economies, but it also points to a more fragmented environment in which rules for electronic transmissions could increasingly depend on partial arrangements rather than WTO-wide consensus.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!