Sovereign AI becomes a strategic question for governments

Governments across the world are increasingly treating AI as a strategic capability that shapes economic development, public services and national security. Momentum behind the idea of ‘sovereign AI’ is growing as countries reassess who controls the chips, cloud infrastructure, data and models powering modern technology.

Complete control over the entire AI stack remains unrealistic for most economies because of the enormous financial and technological costs involved. Global infrastructure continues to rely heavily on US technology firms, which still operate a large share of data centres and AI systems worldwide.

Policy makers are therefore exploring different approaches to sovereignty across the AI ecosystem rather than pursuing total independence. Strategies range from building domestic computing capacity to adapting global AI models for national languages, regulations and public services.

Several countries already illustrate different approaches. The EU is investing billions in AI infrastructure, Canada protects sensitive computing resources while using global models, and India prioritises applications that serve its multilingual population through public digital systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI adoption and jobs debated at India summit

Governments, companies and international organisations gathered in India in February for the AI Impact Summit to discuss the future of AI governance and adoption. Participants focused on economic impacts, labour market changes and sector-specific uses of AI.

Delegates also highlighted growing interest in international cooperation on AI governance. Ninety-one countries endorsed a declaration supporting shared tools, global collaboration and people-centred development of AI.

Language diversity became a central topic of discussion. India's government announced eight foundation AI models designed to support generative AI across the country's 22 recognised languages.

The debate also reflected the growing influence of the Global South in AI policy discussions. Policymakers and experts emphasised the infrastructure gaps, language diversity and local economic realities shaping AI adoption.

ECB reports minor impact of AI on employment

AI has so far had only a small effect on employment across Europe, according to economists at the European Central Bank. A comparison of 5,000 firms, both AI users and non-users, showed no significant difference in job creation or reduction.

Firms that use AI intensively were in fact four percent more likely than average to hire new staff.

Economists noted that AI investment has not replaced existing jobs. In some cases, firms are hiring additional employees to develop and implement AI systems or to scale up operations more efficiently.

Only a minority of firms, around 15 percent, reported reducing labour costs as a motivation for AI adoption.

Despite limited impacts so far, the ECB cautioned that AI could have more significant effects as technology matures. Firms that specifically invest in AI to cut jobs may indeed reduce employment, and the long-term consequences for production processes and labour markets remain uncertain.

The findings come amid rising concern over AI-driven job losses, with companies such as Amazon and Allianz citing AI as a reason for recent cuts. Markets reacted negatively last week after a viral post predicted widespread layoffs, though current evidence shows only minor effects.

Growing risks from AI meeting transcription tools

Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.

Privacy specialists note that organisations previously focused on rules controlling what employees upload into AI systems. Governance efforts are now shifting towards monitoring what AI tools record during daily work.

AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters. Companies face legal exposure when third-party platforms store recordings without strict controls.

Governance teams are being urged to introduce clear rules on meeting recordings and transcript retention. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter oversight of data storage.

Qualcomm pushes Europe to take the lead in the 6G revolution

Europe is being urged to take a leading role in developing sixth-generation wireless technology as global competition intensifies over the future of connectivity and AI.

Speaking at the Mobile World Congress in Barcelona, Wassim Chourbaji of Qualcomm argued that 6G will represent a technological revolution rather than a gradual improvement over existing networks.

The company expects early pre-commercial deployments to begin around 2028, with broader commercialisation targeted for 2029.

Next-generation wireless networks are expected to support physical AI systems capable of interacting with the real world, including robotics, smart glasses, connected vehicles, and advanced sensing technologies.

High-capacity uploads and faster processing between devices and data centres will allow AI systems to analyse video streams and real-time data more efficiently.

Qualcomm has also launched a coalition aimed at accelerating 6G development with partners including Nokia, Ericsson, Amazon, Google and Microsoft.

Advocates argue that combining European industrial strengths with advanced wireless and AI technologies could allow the continent to secure a leading position in the next phase of global digital infrastructure.

China expands oversight of youth online safety

China has introduced new measures to regulate online information that could affect the physical and mental health of minors. Authorities said the rules will take effect on 1 March and aim to improve protection for young internet users.

The regulators identified four categories of online information that may harm minors. The authorities have also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework requires internet platforms and content creators to prevent and respond to harmful material. Regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children. Officials also stressed greater responsibility for platforms that manage digital content used by minors.

X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.

Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.

Anthropic introduces voice mode for Claude Code

Anthropic has introduced a voice mode capability for Claude Code, its AI coding assistant for developers. The feature enables users to interact with the system through spoken commands, marking a step toward more conversational and hands-free coding workflows.

Voice interaction allows developers to execute programming tasks using natural language. By activating voice mode, users can verbally request actions, reflecting a broader shift toward intuitive human-AI collaboration in software development.

The rollout is currently limited, with voice mode available to a small percentage of users before wider deployment. Technical details remain unclear, including potential usage limits and whether external voice AI providers contributed to the feature’s development.

The update builds on Anthropic’s earlier integration of voice interaction in its Claude chatbot. This expansion suggests a wider strategy to embed voice interfaces across AI tools and enhance multimodal interaction experiences.

Competition in AI coding assistants continues to intensify, with multiple technology companies developing similar tools. Within this environment, Claude Code has gained strong adoption and a growing market presence among developers.

User growth and revenue indicators highlight the growing momentum of Anthropic’s AI ecosystem. The company also experienced heightened public visibility following its decision to restrict certain military uses of its AI systems, contributing to a surge in app popularity.

How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when asked for factual information and with no intent to persuade.

Researchers tested 1,912 participants, comparing responses to AI-generated summaries of historical events with responses to Wikipedia entries, and found measurable differences in opinion.

The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models that subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.

AI models favour Bitcoin over fiat in landmark study

A new study from the Bitcoin Policy Institute, testing 36 AI models across more than 9,000 responses, found that AI agents overwhelmingly prefer Bitcoin over other forms of money.

Bitcoin was the most frequently selected monetary instrument overall, chosen in 48.3% of all responses, whilst almost 91% of responses favoured some form of digital currency over traditional fiat, with no model ranking fiat as its top overall preference.

The preference for Bitcoin was especially pronounced in long-term savings scenarios, where 79.1% of AI responses chose it as the best way to preserve purchasing power over multi-year horizons. For payments and cross-border transfers, however, stablecoins edged ahead, selected in 53.2% of responses compared to Bitcoin’s 36%.

The Bitcoin Policy Institute acknowledged that the study’s methodology had limitations, noting that scenario framing may have influenced results and that the models’ preferences reflect patterns in training data rather than real-world adoption.

Anthropic models showed the strongest Bitcoin preference at 68%, compared to 43% for Google, 39% for xAI, and 26% for OpenAI.
