Outage at Cloudflare takes multiple websites offline worldwide

Cloudflare has suffered a major outage, disrupting access to multiple high-profile websites, including X and Letterboxd. Users encountered internal server error messages linked to Cloudflare’s network, prompting concerns of a broader infrastructure failure.

The problems began around 11.30 a.m. UK time, with some sites briefly loading after refreshes. Cloudflare issued an update minutes later, confirming that it was aware of an incident affecting multiple customers but did not identify a cause or timeline for resolution.

Outage tracker Downdetector was also intermittently unavailable, later showing a sharp rise in reports once restored. Affected sites displayed repeated error messages advising users to try again later, indicating partial service degradation rather than full shutdowns.

Cloudflare provides core internet infrastructure, including traffic routing and cyberattack protection, which means failures can cascade across unrelated services. Similar disruption followed an AWS incident last month, highlighting the systemic risk of centralised web infrastructure.

The company states that it is continuing to investigate the issue. No mitigation steps or source of failure have yet been disclosed, and Cloudflare has warned that further updates will follow once more information becomes available.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Eurofiber France confirms major data breach

The French telecommunications company Eurofiber has acknowledged a breach of its ATE customer platform and digital ticket system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

SAP unveils new models and tools shaping enterprise AI

The German multinational software company SAP used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains instead of incremental upgrades.

SAP places AI-native architecture at the centre of its strategy: SAP HANA Cloud now supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a means to support Europe’s digital sovereignty, which is strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

AI Scientist Kosmos links every conclusion to code and citations

OpenAI chief Sam Altman has praised Future House’s new AI Scientist, Kosmos, calling it an exciting step toward automated discovery. The platform upgrades the earlier Robin system and is now operated by Edison Scientific, which plans a commercial tier alongside free access for academics.

Kosmos addresses a key limitation in traditional models: the inability to track long reasoning chains while processing scientific literature at scale. It uses structured world models to stay focused on a single research goal across tens of millions of tokens and hundreds of agent runs.

A single Kosmos run can analyse around 1,500 papers and more than 40,000 lines of code, with early users estimating that this replaces roughly six months of human work. Internal tests found that almost 80 per cent of its conclusions were correct.

Future House reported seven discoveries made during testing, including three that matched known results and four new hypotheses spanning genetics, ageing, and disease. Edison says several are now being validated in wet lab studies, reinforcing the system’s scientific utility.

Kosmos emphasises traceability, linking every conclusion to specific code or source passages to avoid black-box outputs. It is priced at $200 per run, with early pricing guarantees and free credits for academics, though multiple runs may still be required for complex questions.

NVIDIA brings RDMA acceleration to S3 object storage for AI workloads

AI workloads are driving unprecedented data growth, with enterprises projected to generate almost 400 zettabytes annually by 2028. NVIDIA says traditional storage models cannot match the speed and scale needed for modern training and inference systems.

The company is promoting RDMA for S3-compatible storage, which accelerates object data transfers by bypassing host CPUs and removing bottlenecks associated with TCP networking. The approach promises higher throughput per terabyte and reduced latency across AI factories and cloud deployments.

Key benefits include lower storage costs, workload portability across environments and faster access for training, inference and vector database workloads. NVIDIA says freeing CPU resources also improves overall GPU utilisation and project efficiency.

RDMA client libraries run directly on GPU compute nodes, enabling faster object retrieval during training. While initially optimised for NVIDIA hardware, the architecture is open and can be extended by other vendors and users seeking higher storage performance.
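The zero-copy idea behind this can be illustrated in miniature. The sketch below is conceptual only: it is not NVIDIA's RDMA client library, and the in-memory object store, keys, and helper names are hypothetical stand-ins. It contrasts a read that duplicates the payload through an intermediate buffer, as TCP-based transfers do via host CPU memory, with a zero-copy view over the same data, which is the effect RDMA achieves at the network level.

```python
# Conceptual illustration of copy-through vs zero-copy reads.
# Not NVIDIA's RDMA client API; the "object store" and keys are hypothetical.

object_store = {"training/shard-0001": bytes(range(256)) * 4096}  # ~1 MiB object

def read_with_copy(key: str) -> bytes:
    # Conventional path: the payload is duplicated into a fresh buffer,
    # analogous to TCP staging data through host CPU memory.
    return bytes(object_store[key])

def read_zero_copy(key: str) -> memoryview:
    # Zero-copy path: a view over the existing buffer with no duplication,
    # analogous to RDMA placing data directly into the consumer's memory.
    return memoryview(object_store[key])

copied = read_with_copy("training/shard-0001")
view = read_zero_copy("training/shard-0001")

assert copied == view.tobytes()                         # identical content
assert view.obj is object_store["training/shard-0001"]  # no extra buffer allocated
print(f"object size: {len(copied)} bytes")
```

In the real stack the saving comes from the NIC writing data directly into GPU or host memory rather than from language-level views, but the principle of avoiding redundant copies on the data path is the same.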

Cloudian, Dell and HPE are integrating the technology into products such as HyperStore, ObjectScale and Alletra Storage MP X10000. NVIDIA is working with partners to standardise the approach, arguing that accelerated object storage is now essential for large-scale AI systems.

NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

Qwen relaunch aims to unify Alibaba’s mobile AI ecosystem

Alibaba is preparing a major overhaul of its mobile AI apps, renaming Tongyi as Qwen and adding early agentic features. The update aims to make Qwen resemble leading chatbots while linking AI tools to Taobao and other services. Alibaba also plans a global version once the new design stabilises.

Over one hundred developers are working on the project as part of wider AI investments. Alibaba hopes Qwen can anchor its consumer AI strategy and regain momentum in a crowded market. It still trails Doubao and Yuanbao in user popularity and needs a clearer consumer path.

Monetisation remains difficult in China because consumers rarely pay for digital services. Alibaba thinks shopping features will boost adoption by linking AI directly to e-commerce use. Qwen will stay free for now, allowing the company to scale its user base before adding paid options.

Alibaba wants to streamline its overlapping apps by directing users to one unified Qwen interface. Consolidation is meant to strengthen brand visibility and remove confusion around different versions. A single app could help Alibaba stand out as Chinese firms race to deploy agentic AI.

Chinese and US companies continue to expand spending on frontier AI models, cloud infrastructure, and agent tools. Alibaba reported strong cloud growth and rising demand for AI products in its latest quarter. The Qwen relaunch is its largest attempt to turn technical progress into a viable consumer business.

Google launches Private AI Compute for secure cloud-AI

In a move that underscores the evolving balance between capability and privacy in AI, Google today introduced Private AI Compute. This new cloud-based processing platform supports its most advanced models, such as those in the Gemini family, while maintaining what it describes as on-device-level data security.

The blog post explains that many emerging AI tasks now exceed the capabilities of on-device hardware alone. To solve this, Google built Private AI Compute to offload heavy computation to its cloud, powered by custom Tensor Processing Units (TPUs) and wrapped in a fortified enclave environment called Titanium Intelligence Enclaves (TIE).

The system uses remote attestation, encryption and IP-blinding relays to ensure user data remains private and inaccessible, even to Google itself.

Google identifies initial use cases in its Pixel devices: features such as Magic Cue and Recorder will benefit from the extra compute, enabling more timely suggestions, multilingual summarisation and advanced context-aware assistance.

At the same time, the company says this platform ‘opens up a new set of possibilities for helpful AI experiences’ that go beyond what on-device AI alone can fully achieve.

This announcement is significant from both a digital policy and platform economy perspective. It illustrates how major technology firms are reconciling user privacy demands with the computational intensity of next-generation AI.

For organisations and governments focused on AI governance and digital diplomacy, the move raises questions about data sovereignty, transparency of remote enclaves and the true nature of ‘secure’ cloud processing.

Microsoft brings smarter search to Copilot

Microsoft is expanding Copilot with more precise citations that link directly to publisher sources. Users can also open aggregated references for each answer to review context. The emphasis is on trust, control, and transparent sourcing throughout the experience.

A new dedicated search mode within Copilot delivers more detailed results when queries require specific information.

Summaries appear alongside links, enabling users to verify evidence and make informed decisions quickly. Industry coverage highlights the stronger focus on verifiable sources and publisher visibility.

The right pane offers a ‘Show all’ list of sources used in responses. Source-based citation pills replace opaque markers to aid credibility checks and exploration. Design choices aim to empower people to stay in control while navigating complex topics.

Updates are live across copilot.com, mobile apps, and Copilot in Edge, with more refinements expected. Microsoft positions the changes within a human-centred strategy where AI supports curiosity safely. Broader Copilot enhancements across Windows and Edge continue in parallel roadmaps.

Private AI Compute by Google blends cloud power with on-device privacy

Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delivers faster AI while ensuring that personal data remains private and inaccessible, even to Google. The system builds on Google’s privacy-enhancing innovations across AI experiences.

As AI becomes more anticipatory, Private AI Compute enables advanced reasoning that exceeds the limits of local devices. It runs on Google’s custom TPUs and Titanium Intelligence Enclaves, securely powering Gemini models in the cloud. The design keeps all user data isolated and encrypted.

Encrypted attestation links a user’s device to sealed processing environments, allowing only the user to access the data. Features like Magic Cue and Recorder on Pixel now perform smarter, multilingual actions privately. Google says this extends on-device protection principles into secure cloud operations.

The platform’s multi-layered safeguards follow Google’s Secure AI Framework and Privacy Principles. Private AI Compute enables enterprises and consumers to utilise Gemini models without exposing sensitive inputs. It reinforces Google’s vision for privacy-centric infrastructure in cloud-enabled AI.

By merging local and cloud intelligence, Google says Private AI Compute opens new paths for private, personalised AI. It will guide the next wave of Gemini capabilities while maintaining transparency and safety. The company positions it as a cornerstone of responsible AI innovation.
