Microsoft and NVIDIA expand partnership with Anthropic

Microsoft, NVIDIA, and Anthropic have announced new strategic partnerships to expand access to Anthropic’s rapidly growing family of Claude AI models. Claude will scale on Microsoft Azure with NVIDIA support, offering enterprise customers broader model choice and enhanced capabilities.

Anthropic has committed to purchasing $30 billion of Azure compute capacity, with the option to contract up to one gigawatt of additional capacity. NVIDIA and Anthropic will optimise Claude models for performance, efficiency, and cost, while aligning future NVIDIA architectures with Anthropic workloads.

The partnerships also extend Claude access across Microsoft Foundry, including frontier models like Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5.

Microsoft Copilot products, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio, will continue to feature Claude capabilities, providing enterprise users with integrated AI tools.

Microsoft and NVIDIA have committed $5 billion and $10 billion respectively to support Anthropic’s growth. The partnership makes Claude the only frontier AI model available on all three top cloud platforms, boosting enterprise AI adoption and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Electricity bills surge as data centres drive up costs across the US

Massive new data centres, built to power the AI industry, are being blamed for a dramatic rise in electricity costs across the US. Residential utility bills in states with high concentrations of these facilities, such as Virginia and Illinois, are surging far beyond the national average.

The escalating energy demand has caused a major capacity crisis on large grids like the PJM Interconnection, with data centre load identified as the primary driver of a multi-billion dollar spike in future power costs. These extraordinary increases are being passed directly to consumers, making affordability a central issue for politicians ahead of upcoming elections.

Lawmakers are now targeting tech companies and AI labs, promising to challenge what they describe as ‘sweetheart deals’ and to make the firms contribute more to the infrastructure they rely upon.

Although rising costs are also attributed to an ageing grid and inflation, experts warn that utility bills are unlikely to decrease this decade due to the unprecedented demand from rapid data centre expansion.

Cloudflare buys AI platform Replicate

Cloudflare has agreed to acquire Replicate, a platform that simplifies deploying and running AI models. The technology reduces the GPU hardware and infrastructure typically required for complex AI workloads.

The acquisition will integrate Replicate’s library of more than 50,000 AI models into the Cloudflare platform. Developers will then be able to access and deploy any AI model globally with a single line of code, enabling rapid implementation.
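The ‘single line of code’ refers to the Replicate-style pattern in which one call names a model and supplies its inputs, while the platform provisions GPUs behind the scenes. The sketch below is a hypothetical, offline stand-in for such a client: the `run` function, model slug, and response shape are assumptions for illustration, not Cloudflare’s or Replicate’s actual API.

```python
# Hypothetical stand-in for a Replicate-style client. A real client would
# send the request to the platform's hosted API and stream back the model's
# output; this stub simply echoes the request so the example stays
# self-contained and runnable offline.
def run(model: str, input: dict) -> dict:
    owner, name = model.split("/", 1)  # models are addressed as "owner/name"
    return {"owner": owner, "model": name, "input": input, "status": "succeeded"}

# The "single line" a developer would write to invoke a hosted model:
result = run("example-org/example-model", input={"prompt": "a lighthouse at dusk"})
print(result["status"])  # succeeded
```

The appeal of the pattern is that the call site carries no GPU, scaling, or deployment detail: swapping models means changing only the model string.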

Matthew Prince, Cloudflare’s chief executive, said the acquisition will make his company the ‘most seamless, all-in-one shop for AI development’. The move abstracts away infrastructure complexity so developers can focus solely on building their products.

Replicate had previously raised $40m in venture funding from prominent investors in the US. Integrating Replicate’s community and models with Cloudflare’s global network will create a single platform for building the next generation of AI applications.

Outage at Cloudflare takes multiple websites offline worldwide

Cloudflare has suffered a major outage, disrupting access to multiple high-profile websites, including X and Letterboxd. Users encountered internal server error messages linked to Cloudflare’s network, prompting concerns of a broader infrastructure failure.

The problems began around 11.30 a.m. UK time, with some sites briefly loading after refreshes. Cloudflare issued an update minutes later, confirming that it was aware of an incident affecting multiple customers but did not identify a cause or timeline for resolution.

Outage tracker Down Detector was also intermittently unavailable, later showing a sharp rise in reports once restored. Affected sites displayed repeated error messages advising users to try again later, indicating partial service degradation rather than full shutdowns.

Cloudflare provides core internet infrastructure, including traffic routing and cyberattack protection, which means failures can cascade across unrelated services. Similar disruption followed an AWS incident last month, highlighting the systemic risk of centralised web infrastructure.

The company states that it is continuing to investigate the issue. No mitigation steps or source of failure have yet been disclosed, and Cloudflare has said that further updates will follow once more information becomes available.

Eurofiber France confirms major data breach

Telecommunications company Eurofiber has acknowledged a breach of the ATE customer platform and digital ticket system used by its French operations, after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

SAP unveils new models and tools shaping enterprise AI

The German multinational software company, SAP, used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains instead of incremental upgrades.

The firm places AI-native architecture at the centre of its strategy: SAP HANA Cloud now supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a means to support Europe’s digital sovereignty, which is strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

AI Scientist Kosmos links every conclusion to code and citations

OpenAI chief Sam Altman has praised Future House’s new AI Scientist, Kosmos, calling it an exciting step toward automated discovery. The platform upgrades the earlier Robin system and is now operated by Edison Scientific, which plans a commercial tier alongside free access for academics.

Kosmos addresses a key limitation in traditional models: the inability to track long reasoning chains while processing scientific literature at scale. It uses structured world models to stay focused on a single research goal across tens of millions of tokens and hundreds of agent runs.

A single Kosmos run can analyse around 1,500 papers and more than 40,000 lines of code, with early users estimating that this replaces roughly six months of human work. Internal tests found that almost 80 per cent of its conclusions were correct.

Future House reported seven discoveries made during testing, including three that matched known results and four new hypotheses spanning genetics, ageing, and disease. Edison says several are now being validated in wet lab studies, reinforcing the system’s scientific utility.

Kosmos emphasises traceability, linking every conclusion to specific code or source passages to avoid black-box outputs. It is priced at $200 per run, with early pricing guarantees and free credits for academics, though multiple runs may still be required for complex questions.

NVIDIA brings RDMA acceleration to S3 object storage for AI workloads

AI workloads are driving unprecedented data growth, with enterprises projected to generate almost 400 zettabytes annually by 2028. NVIDIA says traditional storage models cannot match the speed and scale needed for modern training and inference systems.

The company is promoting RDMA for S3-compatible storage, which accelerates object data transfers by bypassing host CPUs and removing bottlenecks associated with TCP networking. The approach promises higher throughput per terabyte and reduced latency across AI factories and cloud deployments.

Key benefits include lower storage costs, workload portability across environments and faster access for training, inference and vector database workloads. NVIDIA says freeing CPU resources also improves overall GPU utilisation and project efficiency.

RDMA client libraries run directly on GPU compute nodes, enabling faster object retrieval during training. While initially optimised for NVIDIA hardware, the architecture is open and can be extended by other vendors and users seeking higher storage performance.

Cloudian, Dell and HPE are integrating the technology into products such as HyperStore, ObjectScale and Alletra Storage MP X10000. NVIDIA is working with partners to standardise the approach, arguing that accelerated object storage is now essential for large-scale AI systems.

NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

Qwen relaunch aims to unify Alibaba’s mobile AI ecosystem

Alibaba is preparing a major overhaul of its mobile AI apps, renaming Tongyi as Qwen and adding early agentic features. The update aims to make Qwen resemble leading chatbots while linking AI tools to Taobao and other services. Alibaba also plans a global version once the new design stabilises.

Over one hundred developers are working on the project as part of wider AI investments. Alibaba hopes Qwen can anchor its consumer AI strategy and regain momentum in a crowded market. It still trails Doubao and Yuanbao in user popularity and needs a clearer consumer path.

Monetisation remains difficult in China because consumers rarely pay for digital services. Alibaba thinks shopping features will boost adoption by linking AI directly to e-commerce use. Qwen will stay free for now, allowing the company to scale its user base before adding paid options.

Alibaba wants to streamline its overlapping apps by directing users to one unified Qwen interface. Consolidation is meant to strengthen brand visibility and remove confusion around different versions. A single app could help Alibaba stand out as Chinese firms race to deploy agentic AI.

Chinese and US companies continue to expand spending on frontier AI models, cloud infrastructure, and agent tools. Alibaba reported strong cloud growth and rising demand for AI products in its latest quarter. The Qwen relaunch is its largest attempt to turn technical progress into a viable consumer business.
