Web services recover after Cloudflare restores its network systems

Cloudflare has resolved a technical issue that briefly disrupted access to major platforms, including X, ChatGPT, and Letterboxd. Users had earlier reported internal server error messages linked to Cloudflare’s network, indicating that pages could not be displayed.

The disruption began around midday UK time, with some sites loading intermittently as the problem spread across the company’s infrastructure. Cloudflare confirmed it was investigating an incident affecting multiple customers and issued rolling updates as engineers worked to identify the fault.

Outage tracker Downdetector also experienced difficulties during the incident, later showing a sharp rise in reports once it came back online. The pattern pointed to a broad network-level failure rather than isolated platform issues.

Users saw repeated internal server error warnings asking them to try again, though services began recovering as Cloudflare isolated the cause. The company has not yet released full technical details, but said the fault has been fixed and that systems are stabilising.

Cloudflare provides routing, security, and reliability tools for a wide range of online services, so a single malfunction can cascade globally. The company said it would share further information on the incident and the steps taken to prevent similar failures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Misconfigured database triggered global Cloudflare failure, CEO says

Cloudflare says its global outage on 18 November was caused by an internal configuration error, not a cyberattack. CEO Matthew Prince apologised to users after a permissions update to a ClickHouse cluster generated a malformed feature file that caused systems worldwide to crash.

The oversized file exceeded a hard limit in Cloudflare’s routing software, triggering failures across its global edge. Intermittent recoveries during the first hours of the incident led engineers to suspect a possible attack, as the network appeared to stabilise at random whenever a non-faulty version of the file propagated.

Confusion intensified when Cloudflare’s externally hosted status page briefly became inaccessible, raising fears of coordinated targeting. The root cause was later traced to metadata duplication from an unexpected database source, which doubled the number of machine-learning features in the file.
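The failure mechanism described above can be illustrated with a minimal sketch: a loader that enforces a hard cap on the number of entries in a feature file and fails once duplicated metadata doubles the count. The cap value, data format and function names below are assumptions made for illustration, not Cloudflare’s actual implementation.

```python
# Minimal sketch of a hard-capped feature-file loader. The limit, data format
# and names are illustrative assumptions, not Cloudflare's actual code.

MAX_FEATURES = 200  # hypothetical hard limit built into the routing layer


def load_feature_file(rows: list[dict]) -> dict:
    """Build the in-memory feature table, failing hard on an oversized file."""
    if len(rows) > MAX_FEATURES:
        # Every edge machine consuming the same oversized file hits this path,
        # which is how one bad file can take down a global network at once.
        raise RuntimeError(
            f"feature file has {len(rows)} entries, hard limit is {MAX_FEATURES}"
        )
    return {row["name"]: row["value"] for row in rows}


normal = [{"name": f"f{i}", "value": i} for i in range(150)]
doubled = normal + normal  # duplicated metadata from an unexpected query source

load_feature_file(normal)  # fine: 150 entries
try:
    load_feature_file(doubled)  # 300 entries exceed the cap
except RuntimeError as err:
    print(err)
```

The intermittent recoveries reported during the incident are consistent with this pattern: whichever version of the file last propagated, good or bad, determined whether the check passed.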

The outage affected Cloudflare’s CDN, security layers, and ancillary services, including Turnstile, Workers KV, and Access. Some legacy proxies kept limited traffic moving, but bot scores and authentication systems malfunctioned, causing elevated latencies and blocked requests.

Engineers halted the propagation of the faulty file by mid-afternoon and restored a clean version before restarting affected systems. Prince called it Cloudflare’s most serious failure since 2019 and said lessons learned will guide major improvements to the company’s infrastructure resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Poll manipulation by AI threatens democratic accuracy, according to a new study

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
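A back-of-the-envelope calculation shows why so few responses are enough: in a close two-way race, the number of fabricated answers needed to flip the reported leader is just one more than the raw gap between the candidates. The figures below are hypothetical and not taken from the study.

```python
# Hypothetical illustration of how few fabricated responses can flip a close
# poll. The counts below are assumptions, not figures from the Dartmouth study.

def responses_to_flip(leading_votes: int, trailing_votes: int) -> int:
    """Minimum number of synthetic responses, all favouring the trailing
    candidate, needed to reverse the reported leader."""
    return leading_votes - trailing_votes + 1


# A 1,500-person poll split 760 / 740 (50.7% vs 49.3%): twenty-one fabricated
# answers are enough to change the headline result.
print(responses_to_flip(leading_votes=760, trailing_votes=740))  # -> 21
```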

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention tests, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles without exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. Six omnibus packages have been announced in total.

The latest (no. 4) targets small mid-caps and digitalisation. It covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and the ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, its first mention being buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to this Omnibus: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guidance, and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced yet cautious line can be found in the EDPB and EDPS joint opinion on easing records of processing activities for SMEs.

Like industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given these varied opinions, it is hard to say what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications suggests a direction: on examination, it is hard to dispute that the views of less privacy-friendly entities have served as a strong guide.

Main changes in the leaked draft document

The leaked draft introduces several core changes.

Those changes include a new definition of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights being conditional on motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive; only explicit statements such as ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining Article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment is allowed if it is done by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI development and operation. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from a ‘risk’ to a ‘high risk’ to the rights and freedoms of data subjects.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change, as it would clarify which PAs automatically require a DPIA and which do not. The list would be updated every three years.

It should be noted, however, that some controllers may not see their PA on the list and assume, or argue, that a DPIA is not required. The language should therefore make clear that the list is not closed.

Access request denials

Currently, a data subject may request a copy of their data regardless of motive. Under the draft, if a data subject is considered to exploit the right of access, for example by using the material against the controller, the controller may charge a fee or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, noyb has published an in-depth analysis of the leaked draft.

The Commission’s updated version

On 19 November, the European Commission is expected to present its official simplification package. This section will be updated once the final text is published.

Final remarks

Simplification in itself is a good idea, and businesses need enough freedom to operate without being suffocated by red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms have already been raised after the previous Omnibus package on green due diligence obligations was scrapped. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are on 19 November, a date that could reshape not only EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand strains electrical grids

Microsoft CEO Satya Nadella recently stated that the biggest hurdle to deploying new AI solutions is now electrical power, not chip supply. The massive energy requirements for running large language models (LLMs) have created a critical bottleneck for major cloud providers.

Nadella specified that Microsoft currently has a ‘bunch of chips sitting in inventory’ that cannot be plugged in and utilised. The problem is a lack of ‘warm shells’, meaning data centre buildings that are fully equipped with the necessary power and cooling capacity.

The escalating power requirements of AI infrastructure are placing extreme pressure on utility grids and capacity. Projections from the Lawrence Berkeley National Laboratory indicate that US data centres could consume up to 12 percent of the nation’s total electricity by 2028.

The disclosure should serve as a warning to investors, urging them to weigh infrastructure challenges alongside AI’s technological promise. This energy limitation could create a temporary drag on the sector, potentially slowing the massive returns projected on its estimated $5 trillion investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI threatens global knowledge diversity

AI systems are increasingly becoming the primary source of global information, yet they rely heavily on datasets dominated by Western languages and institutions.

Such reliance creates significant blind spots that threaten to erase centuries of indigenous wisdom and local traditions not currently found in digital archives.

Dominant language models often overlook oral histories and regional practices, including specific ecological knowledge essential for sustainable living in tropical climates.

Experts warn of a looming ‘knowledge collapse’ where alternative viewpoints fade away simply because they are statistically less prevalent in training data.

Future generations may find themselves disconnected from vital human insights as algorithms reinforce a homogenised worldview through recursive feedback loops.

Preserving diverse epistemologies remains crucial for addressing global challenges, such as the climate crisis, rather than relying solely on Silicon Valley’s version of intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CelcomDigi convergence project earns ZTE top 5G service honour

ZTE has won the Best Mobile/5G Service Innovation award at the 2025 Global Connectivity Awards for its work on Malaysia’s CelcomDigi dual-network convergence. The project integrates network assets across four regions and six operators, marking the largest deployment of its kind in the country.

The company introduced an intelligent, integrated, and connected management model built on big-data platforms for site deployment, optimisation, and value analysis. Eight smart tools support planning, commissioning, and operations, enabling end-to-end oversight of project delivery and performance.

Phase-one results show a 15 percent rise in coverage, 25 percent faster downloads, higher traffic, and a more than 60 percent reduction in complaints. ZTE also deployed AI-based energy-saving systems to reduce emissions and advance sustainability goals across the network.

The project incorporates talent-building measures by prioritising localisation and working with Malaysian universities. ZTE says this approach supports long-term sector resilience alongside near-term performance gains.

CAPACITY’s Global Connectivity Awards, held in Malaysia, evaluate innovation, execution, and industry impact. ZTE states that it will continue to develop new project management models and partner globally to build more efficient, intelligent, and sustainable communications networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore’s HTX boosts Home Team AI capabilities with Mistral partnership

HTX has signed a new memorandum of understanding with France’s Mistral AI to accelerate joint research on large language and multimodal models for public safety. The partnership will expand into embodied AI, video analytics, cybersecurity, and automated fire safety systems.

The deal builds on earlier work co-developing Phoenix, HTX’s internal LLM series, and a Home Team safety benchmark for evaluating model behaviour. The organisations will now collaborate on specialised models for robots, surveillance platforms, and cyber defence tools.

Planned capabilities include natural-language control of robotic systems, autonomous navigation in unfamiliar environments, and object retrieval. Video AI tools will support predictive tracking and proactive crime alerts across multiple feeds.

Cybersecurity applications include automated architecture reviews and on-demand vulnerability testing. Fire safety tools will use multimodal comprehension to analyse architectural plans and flag compliance issues without manual checks.

The partnership forms part of the HTxAI movement, which aims to strengthen Home Team AI capacity through research collaborations with industry and academia. Mistral’s flagship models, Mistral Medium 3.1 and Magistral, are currently among the top performers in multilingual and multimodal benchmarks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Berlin summit links digital strategy to wider European security

German Chancellor Friedrich Merz and French President Emmanuel Macron will host a Berlin summit to reduce Europe’s reliance on US tech platforms and to shape a more independent EU digital strategy. The meeting coincides with planned revisions to EU AI and data rules.

The push for digital independence reflects growing concern that Europe risks falling behind the US in strategic technologies. Leaders argue that regulatory changes must support competitiveness while maintaining core privacy and safety principles.

Germany is also hosting a two-day European security conference in Berlin, featuring German Defence Minister Boris Pistorius. The parallel agendas highlight how digital strategy and geopolitical security are increasingly linked in EU policy debates.

The German foreign minister, Johann Wadephul, has meanwhile backed EU enlargement in the Western Balkans during a visit to Montenegro, signalling continued geopolitical outreach alongside internal reforms.

The Berlin discussions are expected to shape Europe’s stance ahead of upcoming AI and data proposals, setting the tone for broader talks on industrial policy, technology sovereignty, and regional security.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU aviation regulator opens debate on AI oversight and safety

EASA has issued its first regulatory proposal on AI in aviation, opening a three-month consultation for industry feedback. The draft focuses on trustworthy, data-driven AI systems and anticipates applications ranging from basic assistance to human–AI teaming.

The move comes amid wider criticism of EU AI rules from major tech firms and political leaders. Aviation stakeholders are now assessing whether compliance costs and operational demands could slow development or disrupt competitive positioning across the sector.

Experts warn that adapting to the framework may require significant investment, particularly for companies with limited resources. Others may accelerate AI adoption to preserve market advantage, especially where safety gains or efficiency improvements justify rapid deployment.

EASA stresses that consultation is essential to balance strict assurance requirements with the flexibility needed for innovation. Privacy and personal data issues remain contentious, shaping expectations for acceptable AI use in safety-critical environments.

Meanwhile, Airbus is pushing to reach 75 A320-family deliveries per month by 2027, driven by the A321neo’s strong order book. In parallel, Mitsui OSK Lines continues to lead the global LNG carrier market, reflecting broader momentum across adjacent transport sectors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!