China expands oversight of youth online safety

China has introduced new measures to regulate online information that could harm the physical and mental health of minors. Authorities said the rules will take effect on 1 March and aim to improve protection for young internet users.

Regulators identified four categories of online information that may harm minors and also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework requires internet platforms and content creators to prevent and respond to harmful material. Regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children. Officials also stressed that platforms managing digital content used by minors will bear greater responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.
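X has not published its enforcement code, but the stated rules reduce to a simple decision flow: an undisclosed AI-generated conflict video costs a creator 90 days of revenue sharing, and a repeat offence removes them permanently. A minimal sketch of that logic, where `Creator` and the boolean detection inputs are illustrative placeholders rather than X’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class Creator:
    """Illustrative stand-in for a monetised X account."""
    handle: str
    violations: int = 0
    monetised: bool = True

def enforce(creator: Creator, ai_generated: bool,
            conflict_video: bool, disclosed: bool) -> str:
    """Apply the stated policy: undisclosed AI-generated conflict
    videos cost 90 days of revenue sharing; repeats are permanent."""
    if disclosed or not (ai_generated and conflict_video):
        return "no action"  # disclosed or out-of-scope content is unaffected
    creator.violations += 1
    if creator.violations > 1:
        creator.monetised = False  # repeat offence: permanent removal
        return "permanently removed from revenue sharing"
    return "suspended from revenue sharing for 90 days"  # first offence
```

In practice neither input is a clean boolean: AI detection and disclosure checks are probabilistic signals, which is why X pairs automated detection with Community Notes rather than relying on either alone.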

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. The new rule does not cover AI-generated political misinformation or deceptive influencer promotions outside armed-conflict content.

Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces voice mode for Claude Code

Anthropic has introduced a voice mode capability for Claude Code, its AI coding assistant for developers. The feature enables users to interact with the system through spoken commands, marking a step toward more conversational and hands-free coding workflows.

Voice interaction allows developers to execute programming tasks using natural language. By activating voice mode, users can verbally request actions, reflecting a broader shift toward intuitive human-AI collaboration in software development.
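Anthropic has not described how voice mode is implemented, but the natural pattern is a speech-to-text front end feeding an ordinary text prompt to the model. A minimal sketch of that flow using Anthropic’s Python SDK for the text step; the `transcribe` function, the wiring, and the model choice are assumptions for illustration, not confirmed details:

```python
import anthropic  # pip install anthropic

def transcribe(audio_path: str) -> str:
    """Hypothetical speech-to-text step; Anthropic has not said which
    transcription engine, if any, voice mode uses under the hood."""
    raise NotImplementedError("plug in any speech-to-text service here")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def voice_command(audio_path: str) -> str:
    """Turn a spoken request into an assistant response: transcribe
    the audio, then forward the text as a normal prompt."""
    spoken_request = transcribe(audio_path)  # e.g. "write a test for parse_config"
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # any current Claude model would do
        max_tokens=1024,
        messages=[{"role": "user", "content": spoken_request}],
    )
    return response.content[0].text
```

The sketch shows only the minimal transcribe-then-prompt path; streaming audio or direct tool execution would require more plumbing than this illustration attempts.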

The rollout is currently limited, with voice mode available to a small percentage of users before wider deployment. Technical details remain unclear, including potential usage limits and whether external voice AI providers contributed to the feature’s development.

The update builds on Anthropic’s earlier integration of voice interaction in its Claude chatbot. This expansion suggests a wider strategy to embed voice interfaces across AI tools and enhance multimodal interaction experiences.

Competition in AI coding assistants continues to intensify, with multiple technology companies developing similar tools. Within this environment, Claude Code has gained strong adoption and a growing market presence among developers.

User growth and revenue indicators point to the momentum of Anthropic’s AI ecosystem. The company also gained heightened public visibility following its decision to restrict certain military uses of its AI systems, contributing to a surge in app popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when users ask only for factual information and the model is given no instruction to persuade.

Researchers tested 1,912 participants, comparing the opinions of those who read AI-generated summaries of historical events with those who read Wikipedia entries, and found measurable differences.

The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models, which subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models favour Bitcoin over fiat in landmark study

A new study from the Bitcoin Policy Institute, testing 36 AI models across more than 9,000 responses, found that AI agents overwhelmingly prefer Bitcoin over other forms of money.

Bitcoin was the most frequently selected monetary instrument overall, chosen in 48.3% of all responses, whilst almost 91% of responses favoured some form of digital currency over traditional fiat, with no model ranking fiat as its top overall preference.

The preference for Bitcoin was especially pronounced in long-term savings scenarios, where 79.1% of AI responses chose it as the best way to preserve purchasing power over multi-year horizons. For payments and cross-border transfers, however, stablecoins edged ahead, selected in 53.2% of responses compared to Bitcoin’s 36%.

The Bitcoin Policy Institute acknowledged that the study’s methodology had limitations, noting that scenario framing may have influenced results and that the models’ preferences reflect patterns in training data rather than real-world adoption.

Anthropic models showed the strongest Bitcoin preference at 68%, compared to 43% for Google, 39% for xAI, and 26% for OpenAI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba’s Qwen AI faces disruption after key technical leader steps down

Junyang Lin, a central technical leader of Alibaba’s Qwen AI project, has stepped down just one day after the company unveiled its Qwen 3.5 small models. Lin, who joined Alibaba in 2019 and moved to the Qwen team in 2023, did not provide details about his decision.

His departure comes at a sensitive moment, as Qwen has emerged as one of China’s most prominent open-weight AI initiatives. The project is a core element of Alibaba’s strategy to compete with leading US developers such as OpenAI, Google, and Anthropic amid intensifying global AI competition.

Alibaba’s newly launched Qwen 3.5 Small Model series comprises four multimodal models with 0.8B to 9B parameters. The systems are designed for on-device deployment and lightweight AI agents, reflecting a focus on efficient and adaptable AI applications.

The release attracted attention from figures including Elon Musk, who commented on the models’ performance. Within Alibaba and across the AI ecosystem, including partners linked to Hugging Face, Lin’s exit was described as a significant loss, particularly given his role in advancing open-source development and strengthening global developer engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OneTrust’s new CEO outlines AI governance ambitions

OneTrust has entered a new leadership phase after appointing John Heyman as chief executive, replacing founder Kabir Barday. Barday will remain on the board in an advisory role as the US-based compliance technology firm continues its push into AI governance.

Heyman said organisations in the US and globally are rapidly integrating AI into daily operations. Companies deploying large numbers of AI agents increasingly need tools to manage risk, data use and regulatory compliance.

OneTrust believes demand for governance technology will grow as AI systems multiply inside businesses worldwide. Heyman described a future in which automated monitoring tools oversee AI agents operating within company systems.

The company aims to build systems that track how AI agents collect and share data while maintaining enterprise control, as growing AI adoption continues to drive demand for responsible governance platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chrome moves to rapid releases as Google responds to AI disruption

Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.

From September 2026, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates, introduced in 2023, remain unchanged.

The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.

Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.

Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.

Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.

Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.

The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe turns to satellite networks as Deutsche Telekom expands Starlink collaboration

Deutsche Telekom is turning to satellite connectivity to address Europe’s persistent mobile coverage gaps, rather than relying solely on terrestrial networks.

The company announced a partnership with Starlink during the Mobile World Congress in Barcelona, arguing that non-terrestrial networks can help reach remote forests, mountains and islands that remain underserved despite broad coverage elsewhere.

The collaboration aims to support direct-to-device satellite links by 2028, enabling future smartphones to connect to Starlink’s MSS spectrum without additional hardware.

Telecommunications leaders describe the plan as a step toward an ‘everywhere network’, extending reliable service to areas long constrained by topographical and conservation barriers. The partnership follows earlier joint work with SpaceX to eliminate dead zones.

Deutsche Telekom is also increasing its use of agentic AI, integrating autonomous network-enhancing systems intended to improve translation, search and service features across devices.

Executives say these capabilities work even on older phones, reducing dependence on apps and creating a more inclusive digital environment.

Although committed to European digital sovereignty, the company insists that global collaboration remains necessary for long-term competitiveness.

Leadership argues that precise regulation and controlled data environments aligned with European standards can balance international cooperation with privacy protection. They remain confident that European technology firms and start-ups will continue driving meaningful innovation across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ClawJacked flaw let attackers hijack AI agents through the browser

A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.

The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.

The attack exploited OpenClaw’s local WebSocket gateway, which assumed that traffic from localhost could be trusted. A malicious website could open a WebSocket connection to the gateway and brute-force the password at hundreds of guesses per second, since no rate limiting was applied to local connections, then silently register as a trusted device without any user prompt.

Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.
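OpenClaw’s patch is not quoted in public write-ups, but the two failures described above map onto standard WebSocket hardening steps: refuse connections that carry a browser Origin header, and throttle failed authentication attempts. A minimal sketch of that defence pattern, assuming Python’s `websockets` library; it illustrates the class of fix, not OpenClaw’s actual gateway code:

```python
import asyncio
import hmac
import time

from websockets.asyncio.server import serve  # pip install websockets

EXPECTED_PASSWORD = b"change-me"  # placeholder; load from a real secret store
MAX_FAILURES = 5                  # failed logins allowed per window
WINDOW_SECONDS = 300              # lockout window per client
failed_attempts: dict[str, list[float]] = {}

def too_many_failures(client_ip: str) -> bool:
    """The rate limit ClawJacked showed was missing: without it, a web
    page can guess hundreds of passwords per second against localhost."""
    now = time.monotonic()
    recent = [t for t in failed_attempts.get(client_ip, []) if now - t < WINDOW_SECONDS]
    failed_attempts[client_ip] = recent
    return len(recent) >= MAX_FAILURES

async def handler(websocket):
    # JavaScript on a web page always sends an Origin header, so a
    # gateway meant only for local tools can refuse any connection that
    # carries one. Trusting all localhost traffic skips exactly this check.
    if websocket.request.headers.get("Origin") is not None:
        await websocket.close(code=1008, reason="browser origins not allowed")
        return

    client_ip = websocket.remote_address[0]
    if too_many_failures(client_ip):
        await websocket.close(code=1013, reason="too many failed attempts")
        return

    message = await websocket.recv()
    supplied = message.encode() if isinstance(message, str) else message
    if not hmac.compare_digest(supplied, EXPECTED_PASSWORD):  # constant-time compare
        failed_attempts.setdefault(client_ip, []).append(time.monotonic())
        await websocket.close(code=1008, reason="authentication failed")
        return

    await websocket.send("authenticated")  # real gateway logic would follow here

async def main():
    # Bind to loopback only; the flaw shows that even loopback-only
    # services still need origin checks and rate limits.
    async with serve(handler, "127.0.0.1", 8765) as server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```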

Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!