EU examines Amazon and Microsoft influence in cloud services

European regulators have launched three market investigations into cloud computing amid growing concerns about sector concentration.

The European Commission will assess whether Amazon Web Services and Microsoft Azure should be designated as gatekeepers for their cloud services under the Digital Markets Act, despite not meeting the formal threshold criteria.

Officials argue that cloud infrastructure now underpins AI development and many digital services, so competition must remain open and fair.

The move signals a broader shift in EU oversight of strategic technologies. Rather than focusing solely on size, investigators will examine whether the two providers act as unavoidable gateways between businesses and users.

They will analyse network effects, switching costs and the role of corporate structures that might deepen market dominance. If the inquiries confirm gatekeeper status, both companies will face the DMA’s full obligations and a six-month compliance period.

A parallel investigation will explore whether existing DMA rules adequately address cloud-specific risks that might limit competition. Regulators aim to clarify whether obstacles to interoperability, restricted access to data, tying of services and imbalanced contractual terms require updated obligations.

Insights gathered from industry, public bodies and civil society will feed into a final report within 18 months, potentially leading to changes via a delegated act.

EU officials underline that Europe’s competitiveness, technological resilience and future AI capacity rely on a fair cloud environment. They argue that a transparent and contestable market will strengthen Europe’s strategic autonomy and encourage innovation.

The inquiries will shape how digital platforms are regulated as cloud services become increasingly central to economic and social life.

Cloudflare outage disrupts leading crypto platforms

Cloudflare experienced a significant network outage on Tuesday, disrupting access to major cryptocurrency platforms including Coinbase, Kraken, Etherscan, and several DeFi services, and leaving users facing widespread ‘500 Internal Server Error’ messages.

The company acknowledged the issue as an internal service degradation across parts of its global network and began rolling out a fix. However, users continued to face elevated error rates during the process.

Major Bitcoin and Ethereum platforms, as well as Aave, DeFiLlama, and several blockchain explorers, were affected. The disruption spread beyond crypto to several major Web2 platforms, while services such as Bluesky and Reddit stayed fully operational.

Cloudflare shares dropped 3.5% in pre-market trading as the company investigated whether scheduled maintenance at specific data centres played any role.

The incident marks the third significant Cloudflare disruption affecting crypto platforms since 2019, highlighting the industry’s ongoing reliance on centralised infrastructure providers despite its focus on decentralisation.

Industry experts pointed to recent outages at Cloudflare and Amazon Web Services as evidence that critical digital services cannot rely on a single vendor for reliability. Kraken restored access ahead of many peers, while Cloudflare stated that the issue was resolved and that it would continue monitoring to ensure full stability.
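The resilience pattern those experts describe can be sketched simply: treat transient 5xx responses as retryable with backoff, and fail over to an alternative provider when the primary stays degraded. A minimal sketch in Python; the endpoint URLs are hypothetical placeholders, not real services.

```python
# Hypothetical sketch of provider failover with retry and exponential backoff.
import time
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://api.primary-cdn.example.com/status",   # fronted by provider A
    "https://api.fallback-cdn.example.com/status",  # fronted by provider B
]

def fetch_with_failover(retries: int = 3, backoff: float = 0.5) -> bytes:
    """Retry transient 5xx errors on each provider, then fail over to the next."""
    for url in ENDPOINTS:
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except urllib.error.HTTPError as err:
                if err.code < 500:
                    raise  # 4xx: the request itself is bad, do not retry
                time.sleep(backoff * (2 ** attempt))  # 5xx: transient, back off
            except urllib.error.URLError:
                break  # network-level failure: move straight to the next provider
    raise RuntimeError("all providers unavailable")
```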

Microsoft and NVIDIA expand partnership with Anthropic

Microsoft, NVIDIA, and Anthropic have announced new strategic partnerships to expand access to Anthropic’s rapidly growing Claude AI models. Claude will scale on Microsoft Azure with NVIDIA support, offering enterprise customers broader model choices and enhanced capabilities.

Anthropic has committed to purchasing $30 billion of Azure compute capacity, with the ability to contract additional capacity of up to one gigawatt. NVIDIA and Anthropic will optimise Claude models for performance, efficiency, and cost, while aligning future NVIDIA architectures with Anthropic workloads.

The partnerships also extend Claude access across Microsoft Foundry, including frontier models like Claude Sonnet 4.5, Claude Opus 4.1, and Claude Haiku 4.5.

Microsoft Copilot products, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio, will continue to feature Claude capabilities, providing enterprise users with integrated AI tools.

Microsoft and NVIDIA have committed $5 billion and $10 billion respectively to support Anthropic’s growth. The partnership makes Claude the only frontier AI model available on all three leading cloud platforms, boosting enterprise AI adoption and innovation.

Electricity bills surge as data centres drive up costs across the US

Massive new data centres, built to power the AI industry, are being blamed for a dramatic rise in electricity costs across the US. Residential utility bills in states with high concentrations of these facilities, such as Virginia and Illinois, are surging far beyond the national average.

The escalating energy demand has caused a major capacity crisis on large grids like the PJM Interconnection, with data centre load identified as the primary driver of a multi-billion dollar spike in future power costs. These extraordinary increases are being passed directly to consumers, making affordability a central issue for politicians ahead of upcoming elections.

Lawmakers are now targeting tech companies and AI labs, promising to challenge what they describe as ‘sweetheart deals’ and to make the firms contribute more to the infrastructure they rely upon.

Although rising costs are also attributed to an ageing grid and inflation, experts warn that utility bills are unlikely to decrease this decade due to the unprecedented demand from rapid data centre expansion.

Cloudflare buys AI platform Replicate

Cloudflare has agreed to acquire Replicate, a platform that simplifies deploying and running AI models. Replicate’s technology reduces the GPU hardware and infrastructure typically required to run complex AI workloads.

The acquisition will integrate Replicate’s library of more than 50,000 AI models into the Cloudflare platform, letting developers access and deploy any of them globally with a single line of code.
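Replicate’s existing Python client shows what that single-line model call looks like today; a minimal sketch, with an illustrative model identifier (Cloudflare’s integrated version of the API has not been published):

```python
# Minimal sketch using Replicate's current Python client (pip install replicate).
# Requires a REPLICATE_API_TOKEN environment variable; the model identifier is
# illustrative and can be swapped for any model in the library.
import replicate

# One call resolves the model, runs inference on managed hardware, and
# returns the output, with no GPU provisioning on the caller's side.
output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "a lighthouse at dusk, watercolour"},
)
print(output)
```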

Matthew Prince, Cloudflare’s chief executive, said the acquisition will make his company the ‘most seamless, all-in-one shop for AI development’. The move abstracts away infrastructure complexity so developers can focus on delivering their products.

Replicate had previously raised $40m in venture funding from prominent US investors. Integrating Replicate’s community and models with Cloudflare’s global network will create a single platform for building the next generation of AI applications.

Outage at Cloudflare takes multiple websites offline worldwide

Cloudflare has suffered a major outage, disrupting access to multiple high-profile websites, including X and Letterboxd. Users encountered internal server error messages linked to Cloudflare’s network, prompting concerns of a broader infrastructure failure.

The problems began around 11.30 a.m. UK time, with some sites briefly loading after refreshes. Cloudflare issued an update minutes later confirming it was aware of an incident affecting multiple customers, but it did not identify a cause or give a timeline for resolution.

Outage tracker Down Detector was also intermittently unavailable, later showing a sharp rise in reports once restored. Affected sites displayed repeated error messages advising users to try again later, indicating partial service degradation rather than full shutdowns.

Cloudflare provides core internet infrastructure, including traffic routing and cyberattack protection, which means failures can cascade across unrelated services. Similar disruption followed an AWS incident last month, highlighting the systemic risk of centralised web infrastructure.

The company states that it is continuing to investigate the issue. No mitigation steps or source of failure have yet been disclosed, and Cloudflare has warned that further updates will follow once more information becomes available.

Eurofiber France confirms major data breach

The telecommunications company Eurofiber has acknowledged a breach of the ATE customer platform and digital ticketing system at its French operations, after a hacker gained access through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

SAP unveils new models and tools shaping enterprise AI

The German multinational software company SAP used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains rather than incremental upgrades.

The company places AI-native architecture at the centre of this strategy. SAP HANA Cloud now supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a means to support Europe’s digital sovereignty, which is strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

AI Scientist Kosmos links every conclusion to code and citations

OpenAI chief Sam Altman has praised Future House’s new AI Scientist, Kosmos, calling it an exciting step toward automated discovery. The platform upgrades the earlier Robin system and is now operated by Edison Scientific, which plans a commercial tier alongside free access for academics.

Kosmos addresses a key limitation in traditional models: the inability to track long reasoning chains while processing scientific literature at scale. It uses structured world models to stay focused on a single research goal across tens of millions of tokens and hundreds of agent runs.

A single Kosmos run can analyse around 1,500 papers and more than 40,000 lines of code, with early users estimating that this replaces roughly six months of human work. Internal tests found that almost 80 per cent of its conclusions were correct.

Future House reported seven discoveries made during testing, including three that matched known results and four new hypotheses spanning genetics, ageing, and disease. Edison says several are now being validated in wet lab studies, reinforcing the system’s scientific utility.

Kosmos emphasises traceability, linking every conclusion to specific code or source passages to avoid black-box outputs. It is priced at $200 per run, with early pricing guarantees and free credits for academics, though multiple runs may still be required for complex questions.

NVIDIA brings RDMA acceleration to S3 object storage for AI workloads

AI workloads are driving unprecedented data growth, with enterprises projected to generate almost 400 zettabytes annually by 2028. NVIDIA says traditional storage models cannot match the speed and scale needed for modern training and inference systems.

The company is promoting RDMA for S3-compatible storage, which accelerates object data transfers by bypassing host CPUs and removing bottlenecks associated with TCP networking. The approach promises higher throughput per terabyte and reduced latency across AI factories and cloud deployments.

Key benefits include lower storage costs, workload portability across environments and faster access for training, inference and vector database workloads. NVIDIA says freeing CPU resources also improves overall GPU utilisation and project efficiency.

RDMA client libraries run directly on GPU compute nodes, enabling faster object retrieval during training. While initially optimised for NVIDIA hardware, the architecture is open and can be extended by other vendors and users seeking higher storage performance.
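NVIDIA’s RDMA client libraries are vendor-specific and no public, standardised API exists yet, so no like-for-like call is shown here. For reference, the sketch below is the conventional TCP-path S3 read that RDMA acceleration is designed to bypass, using boto3 with hypothetical endpoint, bucket, and key names:

```python
# Baseline S3-compatible object read over the conventional TCP path (boto3).
# Endpoint, bucket, and key names are hypothetical. RDMA-accelerated clients
# aim to deliver the same object while bypassing the host CPU's TCP stack.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

def read_shard(bucket: str, key: str) -> bytes:
    """Fetch one training shard; over TCP, every byte crosses the host CPU."""
    obj = s3.get_object(Bucket=bucket, Key=key)
    return obj["Body"].read()

data = read_shard("training-data", "shards/shard-00000.tar")
```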

Cloudian, Dell and HPE are integrating the technology into products such as HyperStore, ObjectScale and Alletra Storage MP X10000. NVIDIA is working with partners to standardise the approach, arguing that accelerated object storage is now essential for large-scale AI systems.
