Cohere secures $500m funding to expand secure enterprise AI

Cohere has secured $500 million in new funding, lifting its valuation to $6.8 billion and reinforcing its position as a secure, enterprise-grade AI specialist.

The Toronto-based firm, which develops large language models tailored for business use, attracted backing from AMD, Nvidia, Salesforce, and other investors.

Its flagship multilingual model, Aya 23, supports 23 languages and is designed to help companies adopt AI without the risks linked to open-source tools, reflecting growing demand for privacy-conscious, compliant solutions.

The round marks renewed support from chipmakers AMD and Nvidia, both of which had previously invested in the company.

Salesforce Ventures’ involvement hints at potential integration with enterprise software platforms, while other backers include Radical Ventures, Inovia Capital, PSP Investments, and the Healthcare of Ontario Pension Plan.

The company has also strengthened its leadership, appointing former Meta AI research head Joelle Pineau as Chief AI Scientist, Instagram co-founder Mike Krieger as Chief Product Officer, and ex-Uber executive Saroop Bharwani as Chief Technology Officer for Applied R&D.

Cohere intends to use the funding to advance agentic AI (systems capable of performing tasks autonomously) while keeping its focus on security and ethical development.

With over $1.5 billion raised since its 2019 founding, the company targets adoption in regulated sectors such as healthcare and finance.

The investment comes amid a broader surge in AI spending, with industry leaders betting that secure, customisable AI will become essential for enterprise operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tokenised stocks bring limited benefits and high risks

The cryptocurrency sector has been promoting tokenised stocks, which allow shares to be traded on a blockchain. While fractional ownership and 24/7 trading are possible, most brokers already offer commission-free fractional shares, limiting the benefits for individual investors.

A tokenised stock involves three elements: a custodian holding the underlying asset, a digital token representing the share, and smart contracts granting rights such as dividends and voting. Platforms like Kraken and Robinhood now offer tokenised trading, while asset managers such as BlackRock explore tokenised funds.
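
As a rough sketch of that three-part structure, the toy Python below models custodian-held shares, the tokens that represent them, and contract-granted dividend and voting rights. Every name here (TokenisedShare, ShareTokenContract, the example holders) is hypothetical, invented for illustration rather than taken from any real platform's contracts.

```python
from dataclasses import dataclass, field

@dataclass
class TokenisedShare:
    """One tokenised position: a real share held off-chain, mirrored by a token."""
    ticker: str      # the underlying listed share, e.g. "ACME"
    custodian: str   # institution holding the real share off-chain
    holder: str      # on-chain address that owns the token
    fraction: float  # fractional ownership, 0 < fraction <= 1

@dataclass
class ShareTokenContract:
    """Stand-in for the smart contract that tracks tokens and grants rights."""
    positions: list[TokenisedShare] = field(default_factory=list)
    votes: dict[str, str] = field(default_factory=dict)

    def distribute_dividend(self, total: float) -> dict[str, float]:
        # Dividend rights: pay each holder in proportion to their fraction.
        return {p.holder: total * p.fraction for p in self.positions}

    def vote(self, holder: str, choice: str) -> None:
        # Voting rights: any address holding a position may record a vote.
        if any(p.holder == holder for p in self.positions):
            self.votes[holder] = choice

contract = ShareTokenContract(positions=[
    TokenisedShare("ACME", "Example Custody Ltd", "0xAlice", 0.6),
    TokenisedShare("ACME", "Example Custody Ltd", "0xBob", 0.4),
])
print(contract.distribute_dividend(10.0))  # {'0xAlice': 6.0, '0xBob': 4.0}
```

The simplification also hints at a risk discussed below: real corporate actions such as mergers, rights issues, or delistings do not reduce to a few methods like these.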

Proponents cite transparency, security, and direct access to companies as advantages.

Risks remain significant. Transactions may be irrevocable, legal protections are uncertain, and smart contracts cannot cover every scenario. Experts warn that tokenisation may bypass securities laws, putting market trust and investor protections at risk.

Many analysts suggest the crypto industry’s push for tokenisation is driven more by a desire to integrate with traditional finance and attract institutional capital than by benefits to retail investors. Advantages are limited while risks, including regulatory uncertainty and potential fraud, are substantial.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

State-controlled messaging alters crypto usage in Russia

The Russian government is limiting secure calls on WhatsApp and Telegram, citing terrorism and fraud. The measures aim to push users toward state-controlled platforms such as MAX, raising privacy concerns.

With over 100 million users relying on encrypted messaging, these restrictions threaten the anonymity essential for cryptocurrency transactions. Government-monitored channels may let authorities track crypto transactions, deterring users and businesses from adopting digital currencies.

State-backed messaging platforms also open the door to regulatory oversight, complicating private crypto exchanges and noncustodial wallets.

In response, fintech startups and SMEs may turn to decentralised applications and privacy-focused tools, including zero-knowledge proofs, to maintain secure communication and financial operations.
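
To make 'zero-knowledge proof' concrete, here is a toy run of the Schnorr identification protocol, a classic sigma protocol in which a prover demonstrates knowledge of a secret exponent without revealing it. The sketch is illustrative only: the group parameters are deliberately tiny, and real systems use far larger groups (or elliptic curves) and non-interactive variants.

```python
import secrets

# Toy group parameters: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
# Real deployments use parameters hundreds of digits long.
p, q, g = 2039, 1019, 4

secret_x = secrets.randbelow(q - 1) + 1   # prover's secret exponent
public_y = pow(g, secret_x, p)            # published value y = g^x mod p

# 1. Commit: prover picks random r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c.
c = secrets.randbelow(q)

# 3. Respond: prover sends s = r + c*x mod q; s alone leaks nothing about x.
s = (r + c * secret_x) % q

# 4. Verify: g^s == t * y^c (mod p) holds only if the prover knew x.
assert pow(g, s, p) == (t * pow(public_y, c, p)) % p
print("proof accepted; secret_x was never revealed")
```

The same idea, applied at scale, is what lets privacy-focused tools prove properties of a transaction, such as validity or sufficient balance, without exposing the underlying data.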

The clampdown could boost crypto payroll adoption in Russia, reducing costs and shielding firms from economic instability. Decentralised finance tools used through alternative channels can help companies protect privacy while supporting cross-border payments and remote work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New framework planned for crypto asset flows in South Africa

South Africa is preparing a new regulatory framework for cross-border cryptocurrency transactions, according to Finance Minister Enoch Godongwana. The South African Reserve Bank (SARB) is due to release the framework this year.

The move follows a High Court ruling that left cryptocurrencies exempt from exchange control regulations. Rather than building a broad exemption framework for exchanges, authorities aim to regulate the activities of crypto asset service providers that move value across borders.

The framework will set conditions, administrative duties, and reporting requirements to curb illicit flows and prevent regulatory loopholes.

The SARB is working closely with the National Treasury, the Financial Sector Conduct Authority, and other financial bodies to finalise the rules.

Officials say the goal is to align South Africa’s exchange control laws with the realities of the digital asset market while addressing the risks identified by the Intergovernmental Fintech Working Group.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek delays next AI model amid Huawei chip challenges

Chinese AI company DeepSeek has postponed the launch of its R2 model after repeated technical problems using Huawei’s Ascend processors for training. The delay highlights Beijing’s ongoing struggle to replace US-made chips with domestic alternatives.

Authorities had encouraged DeepSeek to shift from Nvidia hardware to Huawei’s chips after the release of its R1 model in January. However, training failures, slower inter-chip connections, stability issues, and weaker software performance led the startup to revert to Nvidia chips for training, while continuing to explore Ascend for inference tasks.

Despite Huawei deploying engineers to assist on-site, DeepSeek was unable to complete a successful training run using Ascend processors. The company is also contending with extended data-labelling timelines for its updated model, adding to the delays.

The situation underscores how far Chinese chip technology lags behind Nvidia for advanced AI development, even as Beijing pressures domestic firms to use local products. Industry observers say Huawei is facing “growing pains” but could close the gap over time. Meanwhile, rival models such as Alibaba’s Qwen3 have integrated elements of DeepSeek’s design more efficiently, intensifying market pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international rules on data rights, data transfers, deletion, takedown procedures, and transparency reporting.

Unlike the Community Guidelines, these policy updates will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S grapples with lingering IT fallout from cyberattack

Marks & Spencer is still grappling with the after-effects of the cyberattack it suffered over the Easter bank holiday weekend in April.

While customer-facing services, including click and collect, have been restored, internal systems used by buying and merchandising teams remain affected, hampering day-to-day operations.

The attack, which disabled contactless payments and forced the temporary shutdown of online orders, has had severe financial consequences. M&S estimates a hit to group operating profits of approximately £300 million, though mitigation is expected through insurance and cost controls.

While the rest of its e-commerce operations have largely resumed, lingering technical problems within internal systems continue to disrupt critical back-office functions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, co-founder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed that his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition with OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy sets clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, and brings in external experts for stress tests.

During the 2024 US elections, Anthropic added a TurboVote banner after detecting that Claude could return outdated voting information, pointing users to accurate, non-partisan updates instead.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
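
Anthropic has not published how those classifiers work internally, but a minimal sketch of real-time policy screening might look like the Python below: score each message against policy areas and surface anything above a threshold for review. The categories, keyword scorer, and thresholds are entirely hypothetical stand-ins, not Anthropic's actual system.

```python
# Minimal sketch of real-time policy screening (hypothetical, not Anthropic's system).
POLICY_THRESHOLDS = {"elections": 0.8, "financial_advice": 0.7, "child_safety": 0.2}

def score_message(text: str) -> dict[str, float]:
    """Stand-in for a trained classifier; returns a risk score per policy area."""
    keywords = {
        "elections": ["ballot", "polling place"],
        "financial_advice": ["guaranteed returns"],
        "child_safety": ["minor"],
    }
    lower = text.lower()
    return {area: (0.9 if any(k in lower for k in words) else 0.0)
            for area, words in keywords.items()}

def screen(text: str) -> list[str]:
    """Return the policy areas whose score crosses the review threshold."""
    scores = score_message(text)
    return [area for area, score in scores.items()
            if score >= POLICY_THRESHOLDS[area]]

flagged = screen("Where is my polling place on election day?")
print(flagged)  # ['elections'] -> surfaced for review or a banner-style intervention
```

A lower threshold, as for child safety here, makes a category more sensitive: weaker signals are enough to trigger review.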

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!