A new Anthropic report finds AI has not yet caused significant job losses, introducing ‘observed exposure’ to measure actual workplace AI use.
Researchers combined language model capabilities with workplace data to identify occupations at risk of disruption. The study's central finding is that although AI can perform many tasks, actual adoption remains much lower across most industries.
Even in highly digital professions, actual AI use covers only a fraction of the work that could in principle be automated. Computer and mathematics occupations, for instance, rank among the most AI-exposed groups, yet AI is currently used for only about 33% of the tasks it is capable of assisting with in these fields.
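The gap between capability and adoption can be made concrete with a toy calculation. Assuming task-level data of the kind the report describes (the task lists and usage flags below are invented placeholders, not the report's data), an "observed exposure" style metric might be computed as the share of an occupation's AI-capable tasks that show actual AI use:

```python
# Toy illustration of an "observed exposure" style metric.
# Task lists and usage flags are invented placeholders, not the report's data.

def observed_exposure(tasks):
    """Share of AI-capable tasks in an occupation that show actual AI use."""
    capable = [t for t in tasks if t["ai_capable"]]
    if not capable:
        return 0.0  # purely physical jobs score zero, as in the report
    used = [t for t in capable if t["ai_used"]]
    return len(used) / len(capable)

software_dev_tasks = [
    {"name": "write code",          "ai_capable": True,  "ai_used": True},
    {"name": "debug",               "ai_capable": True,  "ai_used": True},
    {"name": "code review",         "ai_capable": True,  "ai_used": False},
    {"name": "stakeholder meeting", "ai_capable": False, "ai_used": False},
]

print(observed_exposure(software_dev_tasks))  # 2 of 3 capable tasks ≈ 0.67
```

The point of the metric is visible even in this sketch: exposure is bounded by what AI could do, but driven by what workers actually use it for.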
A second key finding is that many roles across the broader economy experience little or no impact from AI. About 30% of workers are in jobs such as cooking, bartending, mechanics, and lifeguarding, where physical tasks dominate and measured AI exposure is almost zero.
The report also finds no clear evidence that AI adoption has increased unemployment or caused a spike in job losses since generative AI tools began spreading widely in 2022. Rather than triggering sudden job losses, researchers suggest labour-market effects emerge gradually, through slower hiring, shifting skill requirements, and changes in job composition.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Washington is considering rules that would require US government approval for overseas purchases of AI chips, tightening control over the global semiconductor supply chain. Draft proposals would make foreign buyers seek Department of Commerce authorisation before acquiring AI chips from US suppliers.
Furthermore, scrutiny will vary by order size, giving US authorities more oversight of international demand for advanced processors. The proposed rules could significantly expand oversight of leading semiconductor manufacturers such as NVIDIA and AMD, whose AI chips underpin many advanced AI systems.
The new approach to regulating AI chip exports marks a shift toward a more interventionist strategy. The Biden administration had finalised an AI diffusion regulation to control the global spread of AI technology, but the current administration scrapped it before it could take effect. The newly proposed rules thus open a new chapter in US AI export policy.
A US Department of Commerce spokesperson said the agency remains committed to ‘promoting secure exports of the American tech stack,’ but rejected claims that the government is reviving the earlier diffusion framework, calling it ‘burdensome, overreaching, and disastrous.’
Meanwhile, critics warn that tighter controls could have unintended effects. Restrictions on AI chip exports may drive international buyers to non-US suppliers, potentially weakening US leadership in advanced semiconductor technology as global AI hardware competition intensifies.
Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.
The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.
The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.
According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.
Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.
The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.
The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.
Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.
Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.
The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.
Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.
WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.
Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.
The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.
Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.
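In practice, distributing a chatbot through a messaging platform's business API means receiving user messages via a webhook and sending replies back through the API. The sketch below shows only the general shape of such a handler; the payload fields and structure are simplified assumptions for illustration, not WhatsApp's actual schema:

```python
# Simplified sketch of a third-party chatbot webhook handler.
# The payload structure is an illustrative assumption,
# not the real WhatsApp Business API schema.

def handle_webhook(payload):
    """Extract an incoming message and build a reply request."""
    msg = payload["message"]
    reply_text = f"You said: {msg['text']}"  # a real bot would call an LLM here
    return {
        "to": msg["sender"],   # route the reply back to the originating user
        "type": "text",
        "text": reply_text,
    }

incoming = {"message": {"sender": "+3215551234", "text": "hello"}}
print(handle_webhook(incoming))
```

Under the arrangement described above, each such reply would be sent through Meta's infrastructure, which is where the access fees would apply.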
The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.
Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate in New Zealand intensified after a cyberattack exposed medical records from the Manage My Health patient portal.
The breach in New Zealand affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.
Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD10,000.
Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.
The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.
The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.
Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.
Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.
The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.
The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.
Among the tools under development is an EU age-verification application currently tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.
The panel is expected to deliver policy recommendations to the Commission by summer 2026.
AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.
According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.
A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.
Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.
More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.
Meanwhile, systems and dependency management introduce AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.
The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.
Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.
Telecom networks have traditionally been run by scripts, manual rules, and layered software tools. A new collaboration between Google Cloud and Nokia suggests a shift: software agents that respond to goals rather than detailed instructions.
The companies are integrating agent-based AI into Nokia’s Network as Code platform, which exposes telecom capabilities through application programming interfaces (APIs). The system allows developers to build applications that interact directly with network features such as connectivity quality, device location checks, or network slicing.
The Google-Nokia partnership introduces an AI layer that enables software agents to determine which network functions to use to achieve a goal. This makes development more efficient: the AI agent can interpret instructions, automatically select the appropriate network capabilities, and reduce the need for developers to manually call APIs one step at a time.
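The pattern is essentially goal-to-API-call translation. The sketch below is a minimal illustration of that idea only; the capability registry and function names are assumptions for the example, not Nokia's actual Network as Code interface:

```python
# Minimal sketch of goal-driven API selection, as an agent layer might do it.
# The capability registry and function names are illustrative assumptions,
# not the real Network as Code API.

CAPABILITIES = {
    "low_latency":   "apply_network_slice",       # e.g. carve out a 5G slice
    "verify_device": "check_device_location",     # e.g. confirm device position
    "stable_video":  "boost_connectivity_quality",
}

def plan_calls(goal_keywords):
    """Translate goal keywords into an ordered list of API calls to make."""
    return [CAPABILITIES[k] for k in goal_keywords if k in CAPABILITIES]

# A developer states a goal; the agent picks the network functions.
calls = plan_calls(["low_latency", "verify_device"])
print(calls)  # ['apply_network_slice', 'check_device_location']
```

A production agent would use a language model rather than a lookup table to interpret free-form goals, but the output is the same: a plan of concrete API calls executed on the developer's behalf.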
Such automation is increasingly being explored as telecom infrastructure grows more complex with 5G, edge computing, and billions of connected devices. New features such as network slicing provide flexibility for industrial applications, private enterprise networks, and specialised connectivity, but also add operational complexity for operators.
Industry groups, including the GSMA and the 3rd Generation Partnership Project, are developing frameworks to support network APIs and automation. While agent-based AI could help networks operate more like programmable platforms, telecom operators must still address questions around reliability, security, and interoperability before large-scale deployment becomes feasible.
London authorities are drafting new data centre policies amid concerns about their environmental impact and rising energy use. City Hall aims to balance the sector’s economic advantages with pressures on electricity, water, and emissions.
The Greater London Authority (GLA) estimates that 10 large data centres generate around 2.7 million tonnes of carbon emissions due to their high electricity consumption. Of the roughly 100 data centres planned across the UK, about 60 will be in London.
Megan Life, assistant director for environment and energy at the GLA, told the London Assembly Environment Committee the new strategy aims to ‘keep hold of the kind of economic growth benefits that data centres offer’ while addressing some ‘quite challenging’ impacts linked to their energy use.
Deputy mayor for environment Mete Coban said the expansion of data centres brings both ‘big benefits’ and ‘massive challenges’ for the capital, particularly in terms of energy and water consumption. ‘It’s not just a London problem, it’s going to be a global problem,’ he said, adding: ‘It’s about making sure that our environment doesn’t suffer in the hands of a few global corporations who will take and not give back, so we want to make sure we equitably do this.’
Policymakers are assessing how data centre growth may affect climate goals and urban infrastructure. London Mayor Sadiq Khan has commissioned a study to forecast future expansion. At the same time, UK lawmakers have launched an inquiry into the environmental impact of the sector as demand for cloud computing and AI infrastructure grows.
TikTok will not adopt end-to-end encryption for direct messages. The company explained that using this technology could hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which the company believes could make users less safe online.
Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Meta's Messenger and Instagram, as well as X, have adopted the technology, saying protecting private communication is central to user privacy.
The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, earlier this year, US lawmakers ordered the separation of TikTok’s US operations from its global business.
The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok emphasised that this decision was made to enhance user protection, with a particular focus on the safety of younger users, and that it sees monitoring capabilities as crucial for addressing harmful behaviour.
Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.