A new Anthropic report finds AI has not yet caused significant job losses, introducing ‘observed exposure’ to measure actual workplace AI use.
Researchers combined measures of language model capability with workplace data to identify occupations at risk of disruption. A central finding is that although AI can perform many tasks, actual adoption remains far lower across most industries.
Even in highly digital professions, only a fraction of the work AI could automate is actually being done with it. Computer and mathematical occupations, for instance, rank among the most AI-exposed groups, yet AI currently covers only about 33% of their tasks.
Another key finding is that, across the broader economy, many roles experience little or no impact from AI. About 30% of workers are in jobs such as cooking, bartending, mechanical repair, and lifeguarding, where physical tasks dominate and measured AI exposure is almost zero.
The report also finds no clear evidence that AI adoption has increased unemployment or caused a spike in job losses since generative AI tools began spreading widely in 2022. Researchers suggest that rather than triggering sudden cuts, labour-market effects emerge gradually, through slower hiring, shifting skill requirements, and changes in job composition.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has launched its new AI Centre in Berlin, creating a hub for researchers, developers, and leaders from Google DeepMind, Google Research, and Google Cloud. The centre aims to foster collaboration, debate, and innovation in AI.
The opening event highlighted the company’s work in advancing science and healthcare through AI-enabled agents and platforms. Google announced long-term research partnerships with the Technical University of Munich and Helmholtz Munich, backed by the Google.org AI for Science fund.
Built on Google’s existing research and engineering foundations in Germany and globally, the Berlin centre emphasises AI innovations with societal benefits. It will connect experts from science, business, academia, and politics to drive forward responsible AI development.
The centre will also serve as a platform for public engagement, hosting workshops, lectures, and events to raise awareness about AI applications, ethical considerations, and future opportunities across industries and communities.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Roblox Corporation has unveiled an AI-powered real-time chat rephrasing feature designed to maintain civility while keeping in-game conversations fluid. Previously, messages containing profanity were masked with hash marks, disrupting the flow of conversation.
The new system automatically rephrases inappropriate language into more respectful alternatives while preserving the original meaning. Users in the chat are notified when their messages are rephrased, ensuring transparency.
The feature supports in-game chat between age-verified users and works across all languages covered by Roblox’s automatic translation. The company consulted its Teen Council when designing the system, to ensure it reflects how teens naturally communicate.
Earlier experiments with real-time warnings and notifications reduced filtered messages and abuse reports by 5–6%, indicating the approach’s effectiveness.
Roblox is also enhancing its text filters to detect complex attempts to bypass Community Standards, such as leet-speak or symbols. Testing shows a 20-fold reduction in missed cases involving the sharing of personal information, such as social handles or phone numbers.
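Roblox has not published its filter internals, but catching leet-speak evasion is a well-known technique: normalise look-alike characters and collapse separators before re-running the checks that plain text would trigger. A toy Python sketch of the idea follows; the character map, blocklist, and patterns are hypothetical, not Roblox’s actual rules.

```python
import re

# Hypothetical look-alike map; a production filter would be far larger.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e",
                          "4": "a", "5": "s", "@": "a", "$": "s"})

BLOCKED_WORDS = {"badword", "swearword"}   # hypothetical blocklist

def is_suspicious(message: str) -> bool:
    # Collapse separators so "b.a.d w o r d"-style evasion fails too.
    collapsed = re.sub(r"[\s._\-*]+", "", message.lower())
    # Check for phone-like digit runs BEFORE leet normalisation,
    # which would rewrite the digits into letters.
    if re.search(r"\d{7,}", collapsed):
        return True
    # Undo leet substitutions, then match the blocklist.
    normalised = collapsed.translate(LEET_MAP)
    return any(word in normalised for word in BLOCKED_WORDS)

print(is_suspicious("b4dw0rd"))                 # True: leet-speak caught
print(is_suspicious("call 555 123 4567"))       # True: digit run caught
print(is_suspicious("nice move, well played"))  # False
```

The design point the sketch illustrates is that normalisation happens before matching, so a bypass attempt is compared against the blocklist in the same canonical form as ordinary text.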
These upgrades represent a significant step toward safer, more natural in-game chat.
The company plans to continue refining these tools, aiming to minimise disruptions further while promoting civil communication. Users can expect iterative improvements and additional controls in the future to enhance chat safety and overall user experience.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US tech company Oracle has introduced a new AI platform to predict safety risks across construction projects.
The system, called Advisor for Safety, aims to shift industry practice from reactive incident response to predictive risk prevention.
The AI model was trained using safety information equivalent to more than 10,000 project-years across multiple project types and locations.
By analysing historical patterns, the platform generates weekly forecasts that identify projects statistically most likely to experience safety incidents.
The solution also integrates structured safety observation tools through systems such as Oracle Aconex and Oracle Primavera Unifier, allowing field teams to collect consistent data on mobile devices or web platforms.
These inputs improve predictive accuracy while enabling organisations to track potential hazards earlier in the project lifecycle.
According to Oracle, the system combines data streams ranging from incident reports and payroll records to project schedules and operational metrics.
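Oracle has not disclosed how the underlying model works. Purely as a conceptual illustration, a weekly project-risk forecast built over such data streams could be sketched as below; the feature set and the logistic-regression choice are assumptions for the sketch, not Oracle’s design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical weekly features per project, loosely mirroring the data
# streams named above: incident reports, payroll, schedules.
# Columns: [near_misses, overtime_hours, schedule_slip_days, headcount]
X_train = np.array([
    [0,  40, 0,  25],
    [5, 210, 4,  80],
    [1,  60, 1,  30],
    [7, 300, 9, 120],
])
y_train = np.array([0, 1, 0, 1])   # 1 = an incident occurred that week

model = LogisticRegression().fit(X_train, y_train)

# Weekly forecast: rank live projects by predicted incident risk.
projects = {"Bridge A": [2, 90, 1, 40], "Tower B": [6, 250, 7, 100]}
for name, feats in projects.items():
    risk = model.predict_proba([feats])[0, 1]
    print(f"{name}: {risk:.0%} predicted incident risk")
```

A real system would train on far richer historical data; the point is simply that heterogeneous project signals can be turned into a ranked weekly risk list.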
Early adopters reportedly reduced workplace incidents by up to 50 percent and workers’ compensation costs by as much as 75 percent during the first year of use.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Washington is considering rules that would require US government approval for overseas purchases of AI chips, tightening control over the global semiconductor supply chain. Draft proposals would make foreign buyers seek Department of Commerce authorisation before acquiring AI chips from US suppliers.
Furthermore, scrutiny would vary by order size, giving US authorities more oversight of international demand for advanced processors. The proposed rules could significantly expand oversight of leading chipmakers such as NVIDIA and AMD, whose processors underpin many advanced AI systems.
The approach marks a shift toward a more interventionist strategy for regulating AI chip exports. The Biden administration had finalised an AI diffusion regulation to control the global spread of AI technology, but the current administration scrapped it before the rule could take effect. The new proposals therefore open a fresh chapter in US AI export policy.
A US Department of Commerce spokesperson said the agency remains committed to ‘promoting secure exports of the American tech stack,’ but rejected claims that the government is reviving the earlier diffusion framework, calling it ‘burdensome, overreaching, and disastrous.’
Meanwhile, critics warn that tighter controls could have unintended effects. Restrictions on AI chip exports may drive international buyers to non-US suppliers, potentially weakening US leadership in advanced semiconductor technology as global AI hardware competition intensifies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new study introduces observed exposure, a measure that combines theoretical AI capability with real-world use to estimate which jobs are most susceptible to automation. Tasks that LLMs can perform and that are actively automated at work receive higher exposure scores.
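The study does not spell out a precise formula, so the sketch below is purely illustrative: it combines a task-level capability signal with an observed-usage signal into a single score. The task list, numbers, and product-rule weighting are all hypothetical.

```python
# Illustrative sketch of an 'observed exposure' score: NOT the study's
# actual formula. Capability and usage values below are hypothetical.

# For each task: (share of the task an LLM can perform, share of
# real-world instances where AI is actually used).
tasks = {
    "write code":            (0.9, 0.4),
    "answer customer email": (0.8, 0.3),
    "repair engine":         (0.05, 0.0),
}

def observed_exposure(capability: float, usage: float) -> float:
    """Score is high only when a task is both automatable AND
    actually automated in practice (simple product rule)."""
    return capability * usage

# Occupation-level exposure: average over the occupation's tasks.
occupation = ["write code", "answer customer email"]
score = sum(observed_exposure(*tasks[t]) for t in occupation) / len(occupation)
print(f"observed exposure: {score:.2f}")  # 0.30 for this toy example
```

The key property, whatever the study’s exact weighting, is that capability alone is not enough: a highly automatable task with near-zero workplace use still yields a low score.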
Computer programmers, customer service representatives, and financial analysts rank among the most exposed occupations.
The analysis finds that AI is far from reaching its full potential, with many tasks still beyond current capabilities. Occupations with higher observed exposure tend to grow more slowly, and workers in these roles are more likely to be older, female, highly educated, and earn higher wages.
Early evidence suggests that hiring of younger workers aged 22–25 may be slowing in highly exposed occupations. While these effects are small, they may indicate initial labour-market adjustments as AI tools become more integrated into workplace tasks.
Researchers emphasise that observed exposure provides a framework for tracking AI’s economic impact over time, helping policymakers and businesses identify potential vulnerabilities.
The study underscores the gap between AI’s theoretical capabilities and actual usage, and positions the task- and job-level framework as a way to monitor adoption patterns across the workforce over time.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.
The legal complaint follows investigative reporting indicating that workers at a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.
The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.
According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.
Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.
The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.
The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.
Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Experts gathered in London to examine how the concept of privacy has evolved over the centuries. Discussions highlighted that privacy was only widely recognised as a legal and social norm after the Second World War.
Speakers noted that earlier societies often viewed privacy with suspicion or did not recognise it at all; historical examples included practices from Roman society and the French monarchy.
Modern legal protections expanded rapidly in recent decades, with privacy laws now covering about 80 percent of the global population. Scholars said the concept remains relatively new despite its central role in modern democracies.
The debate also explored whether privacy will remain a stable social value as technology evolves, with analysts noting that emerging technologies such as AI are reshaping debates over personal data and surveillance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.
The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.
Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.
WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.
Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.
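For context, third-party chatbots reach WhatsApp users through Meta’s Business Platform Cloud API. A minimal Python sketch of a bot sending a text reply follows; the access token, phone-number ID, and API version are placeholders, and field details may vary by version.

```python
import requests

# Placeholders: a real integration needs a registered business account,
# an access token, and a phone-number ID from Meta's developer console.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID"
API_URL = f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages"

def send_text(recipient: str, body: str) -> dict:
    """Send a plain text message via the WhatsApp Business Cloud API."""
    payload = {
        "messaging_product": "whatsapp",
        "to": recipient,          # recipient's number in international format
        "type": "text",
        "text": {"body": body},
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: a chatbot replying to a user message.
# send_text("491512345678", "Hello! How can I help you today?")
```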
The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.
Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.
The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.
The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.
Privacy specialists argue that current enforcement powers are too weak to deter serious failures: the Privacy Act allows only limited financial penalties, with fines generally capped at NZD 10,000.
Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!