Web services recover after Cloudflare restores its network systems

Cloudflare has resolved a technical issue that briefly disrupted access to major platforms, including X, ChatGPT, and Letterboxd. Users had earlier reported internal server error messages linked to Cloudflare’s network, indicating that pages could not be displayed.

The disruption began around midday UK time, with some sites loading intermittently as the problem spread across the company’s infrastructure. Cloudflare confirmed it was investigating an incident affecting multiple customers and issued rolling updates as engineers worked to identify the fault.

Outage tracker Downdetector also experienced difficulties during the incident, showing a sharp rise in reports once it came back online. The pattern pointed to a broad network-level failure rather than isolated platform issues.

Users saw repeated internal server error warnings asking them to try again, though services began recovering as Cloudflare isolated the cause. The company has not yet released full technical details, but said the fault has been fixed and that systems are stabilising.

Cloudflare provides routing, security, and reliability tools for a wide range of online services, making a single malfunction capable of cascading globally. The company said it would share further information on the incident and steps taken to prevent similar failures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Misconfigured database triggered global Cloudflare failure, CEO says

Cloudflare says its global outage on 18 November was caused by an internal configuration error, not a cyberattack. CEO Matthew Prince apologised to users after a permissions update to a ClickHouse cluster generated a malformed feature file that caused systems worldwide to crash.

The oversized file exceeded a hard limit in Cloudflare’s routing software, triggering failures across its global edge. Intermittent recoveries during the first hours of the incident led engineers to suspect a possible attack, as the network randomly stabilised when a non-faulty file propagated.

Confusion intensified when Cloudflare’s externally hosted status page briefly became inaccessible, raising fears of coordinated targeting. The root cause was later traced to metadata duplication from an unexpected database source, which doubled the number of machine-learning features in the file.
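The failure mode described above can be illustrated in miniature. The sketch below is hypothetical (Cloudflare has not published this code, and the limit value is made up): a loader enforces a hard cap on the number of features, and a metadata query that suddenly returns each row twice doubles the feature count past that cap, turning a routine refresh into a hard failure.

```python
# Hypothetical sketch of the failure mode described in Cloudflare's account:
# a hard feature cap plus unexpectedly duplicated metadata rows.
FEATURE_LIMIT = 200  # illustrative hard cap, not Cloudflare's actual value


def build_feature_file(metadata_rows):
    """Collect features and enforce the hard limit the routing layer expects."""
    features = [row["feature"] for row in metadata_rows]
    if len(features) > FEATURE_LIMIT:
        # In the real incident, an oversized file crashed the routing software.
        raise RuntimeError(f"feature file too large: {len(features)} > {FEATURE_LIMIT}")
    return features


# Normal case: one row per feature stays under the cap.
clean_rows = [{"feature": f"f{i}"} for i in range(150)]
assert len(build_feature_file(clean_rows)) == 150

# After the permissions change, the query returned each row twice,
# doubling the feature count and tripping the limit.
duplicated_rows = clean_rows * 2
try:
    build_feature_file(duplicated_rows)
except RuntimeError as exc:
    print(exc)  # feature file too large: 300 > 200
```

A defensive variant would deduplicate rows before counting and fall back to the last known-good file rather than crashing when the limit is exceeded.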

The outage affected Cloudflare’s CDN, security layers, and ancillary services, including Turnstile, Workers KV, and Access. Some legacy proxies kept limited traffic moving, but bot scores and authentication systems malfunctioned, causing elevated latencies and blocked requests.

Engineers halted the propagation of the faulty file by mid-afternoon and restored a clean version before restarting affected systems. Prince called it Cloudflare’s most serious failure since 2019 and said lessons learned will guide major improvements to the company’s infrastructure resilience.


Google enters a new frontier with Gemini 3

Google has begun a new phase of its AI strategy with the release of Gemini 3, the company’s most advanced model to date.

The new system prioritises deeper reasoning and more subtle multimodal understanding, enabling users to approach difficult ideas with greater clarity instead of relying on repetitive prompting. It marks a major step for Google’s long-term project to integrate stronger intelligence into products used by billions.

Gemini 3 Pro is already available in preview across the Gemini app, AI Mode in Search, AI Studio, Vertex AI and Google’s new development platform known as Antigravity.

The model performs at the top of major benchmarks in reasoning, mathematics, tool use and multimodal comprehension, offering substantial improvements over Gemini 2.5 Pro.

Deep Think mode extends the model’s capabilities even further, reaching new records on demanding academic and AGI-oriented tests, although Google is delaying wider release until additional safety checks conclude.

Users can rely on Gemini 3 to learn complex topics, analyse handwritten material, decode long academic texts or translate lengthy videos into interactive guides instead of navigating separate tools.

Developers benefit from richer interactive interfaces, more autonomous coding agents and the ability to plan tasks over longer horizons.

Google Antigravity enhances this shift by giving agents direct control of the development environment, allowing them to plan, write and validate code independently while remaining under human supervision.

Google emphasises that Gemini 3 is its most extensively evaluated model, supported by independent audits and strengthened protections against manipulation. The system forms the foundation for Google’s next era of agentic, personalised AI and will soon expand with additional models in the Gemini 3 series.

The company expects the new generation to reshape how people learn, build and organise daily tasks instead of depending on fragmented digital services.


TikTok launches new tools to manage AI-generated content

TikTok has announced new tools to help users shape and understand AI-generated content (AIGC) in their feeds. A new ‘Manage Topics’ control will let users adjust how much AI content appears in their For You feeds alongside keyword filters and the ‘not interested’ option.

The aim is to personalise content rather than remove it entirely.

To strengthen transparency, TikTok is testing ‘invisible watermarking’ for AI-generated content created with TikTok tools or uploaded using C2PA Content Credentials. Combined with creator labels and AI detection, these watermarks help track and identify content even if edited or re-uploaded.

The platform has launched a $2 million AI literacy fund to support global experts in creating educational content on responsible AI. TikTok collaborates with industry partners and non-profits like Partnership on AI to promote transparency, research, and best practices.

Investments in AI extend beyond moderation and labelling. TikTok is developing innovative features such as Smart Split and AI Outline to enhance creativity and discovery, while using AI to protect user safety and improve the well-being of its trust and safety teams.


Poll manipulation by AI threatens democratic accuracy, according to a new study

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
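The arithmetic behind that claim is simple. As an illustration with made-up numbers (not the study’s data): in a 1,000-person poll where candidate A leads 505 to 495, inserting just 30 fabricated responses for candidate B flips the reported leader while the sample still looks ordinary.

```python
def poll_margin(votes_a, votes_b):
    """Return A's lead over B in percentage points of the whole sample."""
    total = votes_a + votes_b
    return 100 * (votes_a - votes_b) / total


# A tight but genuine 1,000-person sample: A leads by 1 point.
votes_a, votes_b = 505, 495
assert poll_margin(votes_a, votes_b) == 1.0

# Inserting 30 synthetic responses for B flips the reported leader,
# while the poll still looks like an ordinary 1,030-person sample.
fake_for_b = 30
flipped = poll_margin(votes_a, votes_b + fake_for_b)
print(round(flipped, 2))  # -1.94: B now appears ahead
```

The closer the genuine race, the fewer synthetic responses are needed, which is why the study found a few dozen sufficed in several pre-election polls.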

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention checks, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles, never betraying its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.


Cloudflare outage disrupts leading crypto platforms

Cloudflare experienced a significant network outage on Tuesday, which disrupted access to major cryptocurrency platforms, including Coinbase, Kraken, Etherscan, and several DeFi services, resulting in widespread ‘500 Internal Server Error’ messages.

The company acknowledged the issue as an internal service degradation across parts of its global network and began rolling out a fix. However, users continued to face elevated error rates during the process.
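For client applications caught in such an incident, the standard mitigation is to retry transient 5xx responses with exponential backoff and jitter rather than hammering a degraded edge. A minimal sketch (the retry limits and simulated responses are illustrative, not tied to any Cloudflare API):

```python
import random
import time


def fetch_with_backoff(do_request, max_attempts=5, base_delay=0.5):
    """Retry transient 5xx responses with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status < 500:  # success, or a client error not worth retrying
            return status, body
        if attempt < max_attempts - 1:
            # 0.5s, 1s, 2s, ... plus jitter to avoid synchronised retries
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
    return status, body


# Simulated upstream that returns 500 twice before recovering.
responses = iter([(500, "Internal Server Error"),
                  (500, "Internal Server Error"),
                  (200, "OK")])
status, body = fetch_with_backoff(lambda: next(responses), base_delay=0.01)
print(status, body)  # 200 OK
```

The jitter matters: without it, thousands of clients retrying on the same schedule can prolong exactly the kind of elevated error rates described above.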

Major Bitcoin and Ethereum platforms, as well as Aave, DeFiLlama, and several blockchain explorers, were impacted. The disruption spread beyond crypto, affecting several major Web2 platforms, while services like Bluesky and Reddit stayed fully operational.

Cloudflare shares dropped 3.5% in pre-market trading as the company investigated whether scheduled maintenance at specific data centres played any role.

The incident marks the third significant Cloudflare disruption affecting crypto platforms since 2019, highlighting the industry’s ongoing reliance on centralised infrastructure providers despite its focus on decentralisation.

Industry experts pointed to recent outages at Cloudflare and Amazon Web Services as evidence that critical digital services cannot rely on a single vendor for reliability. Kraken restored access ahead of many peers, while Cloudflare said the issue had been resolved and that it would continue monitoring for full stability.


Singapore’s HTX boosts Home Team AI capabilities with Mistral partnership

HTX has signed a new memorandum of understanding with France’s Mistral AI to accelerate joint research on large language and multimodal models for public safety. The partnership will expand into embodied AI, video analytics, cybersecurity, and automated fire safety systems.

The deal builds on earlier work co-developing Phoenix, HTX’s internal LLM series, and a Home Team safety benchmark for evaluating model behaviour. The organisations will now collaborate on specialised models for robots, surveillance platforms, and cyber defence tools.

Planned capabilities include natural-language control of robotic systems, autonomous navigation in unfamiliar environments, and object retrieval. Video AI tools will support predictive tracking and proactive crime alerts across multiple feeds.

Cybersecurity applications include automated architecture reviews and on-demand vulnerability testing. Fire safety tools will use multimodal comprehension to analyse architectural plans and flag compliance issues without manual checks.

The partnership forms part of the HTxAI movement, which aims to strengthen Home Team AI capacity through research collaborations with industry and academia. Mistral’s flagship models, Mistral Medium 3.1 and Magistral, are currently among the top performers in multilingual and multimodal benchmarks.


Outage at Cloudflare takes multiple websites offline worldwide

Cloudflare has suffered a major outage, disrupting access to multiple high-profile websites, including X and Letterboxd. Users encountered internal server error messages linked to Cloudflare’s network, prompting concerns of a broader infrastructure failure.

The problems began around 11.30 a.m. UK time, with some sites briefly loading after refreshes. Cloudflare issued an update minutes later, confirming that it was aware of an incident affecting multiple customers but did not identify a cause or timeline for resolution.

Outage tracker Downdetector was also intermittently unavailable, later showing a sharp rise in reports once restored. Affected sites displayed repeated error messages advising users to try again later, indicating partial service degradation rather than full shutdowns.

Cloudflare provides core internet infrastructure, including traffic routing and cyberattack protection, which means failures can cascade across unrelated services. Similar disruption followed an AWS incident last month, highlighting the systemic risk of centralised web infrastructure.

The company states that it is continuing to investigate the issue. No mitigation steps or source of failure have yet been disclosed, and Cloudflare has warned that further updates will follow once more information becomes available.


Eurofiber France confirms major data breach

The French telecommunications company Eurofiber has acknowledged a breach of its ATE customer platform and digital ticket system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.


Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.
