Google launches small AI model for mobiles and IoT

Google has released Gemma 3 270M, an open AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.

Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.

The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.

Its design includes quantisation-aware training, allowing it to run in low-precision formats such as INT4, which reduces memory use and improves speed on mobile processors without demanding extensive computational power.
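
The memory saving behind low-precision formats can be illustrated with a toy sketch. This is generic symmetric INT4 quantisation, not Gemma's actual scheme; the function names and example weights are invented for illustration:

```python
import numpy as np

def quantize_int4(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantisation into the INT4 range [-8, 7]."""
    scale = float(np.abs(weights).max()) / 7.0  # map the largest magnitude onto 7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the quantised values."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.53, 0.31, 0.02], dtype=np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)
# Each 4-bit value needs an eighth of the memory of a float32 weight,
# at the cost of a small rounding error (at most scale / 2 per weight).
```

Quantisation-aware training goes a step further: the rounding is simulated during training so the model learns weights that remain accurate after this kind of compression.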

Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.

Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.

Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cohere secures $500m funding to expand secure enterprise AI

Cohere has secured $500 million in new funding, lifting its valuation to $6.8 billion and reinforcing its position as a secure, enterprise-grade AI specialist.

The Toronto-based firm, which develops large language models tailored for business use, attracted backing from AMD, Nvidia, Salesforce, and other investors.

Its multilingual model, Aya 23, supports 23 languages and is designed to help companies adopt AI without the risks linked to open-source tools, reflecting growing demand for privacy-conscious, compliant solutions.

The round marks renewed support from chipmakers AMD and Nvidia, who had previously invested in the company.

Salesforce Ventures’ involvement hints at potential integration with enterprise software platforms, while other backers include Radical Ventures, Inovia Capital, PSP Investments, and the Healthcare of Ontario Pension Plan.

The company has also strengthened its leadership, appointing former Meta AI research head Joelle Pineau as Chief AI Scientist, Instagram co-founder Mike Krieger as Chief Product Officer, and ex-Uber executive Saroop Bharwani as Chief Technology Officer for Applied R&D.

Cohere intends to use the funding to advance agentic AI, systems capable of performing tasks autonomously, while focusing on security and ethical development.

With over $1.5 billion raised since its 2019 founding, the company targets adoption in regulated sectors such as healthcare and finance.

The investment comes amid a broader surge in AI spending, with industry leaders betting that secure, customisable AI will become essential for enterprise operations.

State-controlled messaging alters crypto usage in Russia

The Russian government is limiting secure calls on WhatsApp and Telegram, citing terrorism and fraud. The measures appear designed to push users toward state-controlled platforms such as MAX, raising privacy concerns.

With over 100 million users relying on encrypted messaging, these restrictions threaten the anonymity essential for cryptocurrency transactions. Government-monitored channels may let authorities track crypto transactions, deterring users and businesses from adopting digital currencies.

State-backed messaging platforms also open the door to regulatory oversight, complicating private crypto exchanges and noncustodial wallets.

In response, fintech startups and SMEs may turn to decentralised applications and privacy-focused tools, including zero-knowledge proofs, to maintain secure communication and financial operations.

The clampdown could boost crypto payroll adoption in Russia, reducing costs and shielding firms from economic instability. Using decentralised finance tools in alternative channels allows companies to protect privacy and support cross-border payments and remote work.

Researchers explore brain signals to restore speech for disabled patients

Researchers have developed a brain-computer interface (BCI) that can decode ‘inner speech’ in patients with severe paralysis, potentially enabling faster and more comfortable communication.

The system, tested by a team led by Stanford University’s Frank Willett, records brain activity from the motor cortex using microelectrode arrays smaller than a baby aspirin, translating neural patterns into words via machine learning.

Unlike earlier BCIs that rely on attempted speech, which can be slow or tiring, the new approach focuses on silent imagined speech. Tests with four participants showed that inner speech produces clear, consistent brain signals, though at a smaller scale than attempted speech.

While accuracy is lower, the findings suggest that future systems could restore rapid communication through thought alone.

Privacy concerns are being addressed through methods that prevent unintended decoding. Current BCIs can be trained to ignore inner speech, and a ‘password’ approach for next-generation devices ensures decoding begins only when a specific imagined phrase is used.

Such safeguards are designed to avoid accidental capture of thoughts the user never intended to express.
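
The gating idea can be sketched in a few lines. The unlock phrase, the word stream, and the function names here are hypothetical illustrations, not the researchers' actual system:

```python
from collections import deque
from typing import Iterable, Iterator

UNLOCK_PHRASE = ("open", "sesame")  # hypothetical placeholder passphrase

def gated_decode(decoded_words: Iterable[str]) -> Iterator[str]:
    """Pass decoded words through only after the unlock phrase is imagined."""
    window = deque(maxlen=len(UNLOCK_PHRASE))
    unlocked = False
    for word in decoded_words:
        if unlocked:
            yield word  # decoding is active: emit the word
            continue
        window.append(word)
        if tuple(window) == UNLOCK_PHRASE:
            unlocked = True  # everything imagined before this point stays private

stream = ["hello", "open", "sesame", "call", "my", "nurse"]
output = list(gated_decode(stream))  # only "call my nurse" gets through
```

Until the phrase appears, nothing is emitted at all, mirroring the safeguard that decoding begins only on an explicit imagined cue.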

The technology remains in early development and is subject to strict regulation.

Researchers are now exploring improved, wireless hardware and additional brain regions linked to language and hearing, aiming to enhance decoding accuracy and make the systems more practical in everyday life.

Employee data compromised in cyberattack on Canada’s parliament

Canada’s House of Commons is investigating a data breach after a cyberattack reportedly exploited a Microsoft vulnerability, granting unauthorised access to a database for managing parliamentary computers and mobile devices. Staff were notified of the breach this past Monday via internal communications.

The compromised information includes employees’ names, job titles, office locations, email addresses, and device-related details. Authorities have warned individuals to be alert for potential impersonation or phishing attempts using the stolen data.

Canada’s Communications Security Establishment (CSE) has confirmed it is supporting the investigation. No attribution has been made yet, as identifying specific threat actors remains challenging.

While the exact Microsoft vulnerability has not been publicly confirmed, cybersecurity experts point to a critical SharePoint zero-day (CVE-2025-53770), which has seen wide exploitation. The attack underscores the pressing need for robust cyber defence across critical government infrastructure.

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.

These changes will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

M&S grapples with lingering IT fallout from cyberattack

Marks & Spencer is still grappling with the after-effects of the cyberattack experienced during the Easter bank holiday weekend in April.

While customer-facing services, including click and collect, have been restored, internal systems used by buying and merchandising teams remain affected, hampering smooth operations.

The attack, which disabled contactless payments and forced the temporary shutdown of online orders, has had severe financial consequences. M&S estimates a hit to group operating profits of approximately £300 million, though mitigation is expected through insurance and cost controls.

While the rest of its e-commerce operations have largely resumed, lingering technical problems within internal systems continue to disrupt critical back-office functions.

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, co-founder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting churn at the company amid intense competition with OpenAI, Google, and Anthropic, all of which are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy sets clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, the team added a TurboVote banner after detecting that Claude could provide outdated voting information, pointing users to an accurate, non-partisan source instead.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
