What the Cloudflare outage taught us: Tracing the outages that shaped the internet of today

The internet has become part of almost everything we do. It helps us work, stay in touch with friends and family, buy things, plan trips, and handle tasks that would have felt impossible until recently. Most people cannot imagine getting through the day without it.

But there is a hidden cost to all this convenience. Most of the time, online services run smoothly, with countless systems working together in the background. Every now and then, though, a key cog slips out of place.

When that happens, the effects can spread fast, taking down apps, websites, and even entire industries within minutes. These moments remind us how much we rely on digital services, and how quickly everything can unravel when something goes wrong. They raise an uncomfortable question: is digital dependence worth the convenience, or are we building a house of cards that could collapse, pulling us back into reality?

Warning shots of the dot-com era and the infancy of cloud services

In its early years, the internet saw several major malfunctions that disrupted key online services. Incidents like the Morris worm in 1988, which crashed about 10 percent of all internet-connected systems, and the 1996 AOL outage that left six million users offline, revealed how unprepared the early infrastructure was for growing digital demand.

A decade later, the weaknesses were still clear. In 2007, Skype, then with over 270 million users, went down for nearly two days after a surge in logins triggered by a Windows update overwhelmed its network. Since video calls were still in their early days, the impact was not as severe, and most users simply waited it out, postponing chats with friends and family until the issue was fixed.

As the 2000s gave way to the 2010s, the shift to cloud computing introduced a new kind of fragility. When Amazon’s EC2 and EBS systems in the US-East region went down in 2011, the outage took down services like Reddit, Quora, and IMDb for days, exposing how quickly failures in shared infrastructure can cascade.

A year later, GoDaddy’s DNS failure took millions of websites offline, while large-scale Gmail disruptions affected users around the world. These were early signs that the cloud’s growing influence came with increasingly high stakes.

By the mid-2010s, it was clear that the internet had evolved from a patchwork of standalone services to a heavily interconnected ecosystem. When cloud or DNS providers stumbled, their failures rippled simultaneously across countless platforms. The move to centralised infrastructure made development faster and more accessible, but it also marked the beginning of an era where a single glitch could shake the entire web.

Centralised infrastructure and the age of cascading failures

The late 2000s and early 2010s saw a rapid rise in internet use, with nearly 2 billion people worldwide online. As access grew, more businesses moved into the digital space, offering e-commerce, social platforms, and new forms of online entertainment to a quickly expanding audience.

With so much activity shifting online, the foundation beneath these services became increasingly important, and increasingly centralised, setting the stage for outages that could ripple far beyond a single website or app.

The next major hit came in 2016, when a massive DDoS attack crippled major websites across the USA and Europe. Platforms like Netflix, Reddit, Twitter, and CNN were suddenly unreachable, not because they were directly targeted, but because Dyn, a major DNS provider, had been overwhelmed.

The attack used the Mirai botnet malware to hijack hundreds of thousands of insecure IoT devices and flood Dyn’s servers with traffic. It was one of the clearest demonstrations yet that knocking out a single infrastructure provider could take down major parts of the internet in one stroke.
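The failure pattern is worth spelling out: the affected platforms’ own servers stayed healthy, but with Dyn’s resolvers unreachable, browsers simply could not translate their names into addresses. The toy Python sketch below illustrates that dependency; the domains, addresses and single-provider setup are invented for illustration, not Dyn’s real architecture.

```python
# Toy model: what happens to name resolution when a DNS provider
# is flooded offline. The origin web servers never go down at all.

dns_provider = {                       # hypothetical hosted-DNS records
    "example-video.com": "203.0.113.10",
    "example-news.com": "203.0.113.20",
}
provider_online = True                 # flip to False to simulate the DDoS

def resolve(domain: str) -> str | None:
    """Return the IP for a domain, or None if the resolver is unreachable."""
    if not provider_online:
        return None                    # queries time out; the site looks dead
    return dns_provider.get(domain)

def load_page(domain: str) -> str:
    ip = resolve(domain)
    if ip is None:
        return f"{domain}: unreachable (DNS down, origin servers still fine)"
    return f"{domain}: connected to {ip}"

provider_online = False                # the botnet floods the provider
for site in dns_provider:
    print(load_page(site))
```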

In 2017, another major outage occurred, with Amazon at the centre once again. On 28 February, the company’s Simple Storage Service (S3) went down for about 4 hours, disrupting access across a large part of the US-EAST-1 region. While investigating a slowdown in the billing system, an Amazon engineer mistyped a command, taking far more servers offline than intended.

That small error was enough to knock out services like Slack, Quora, Coursera, Expedia and countless other websites that relied on S3 for storage or media delivery. The financial impact was substantial: S&P 500 companies alone were estimated to have lost roughly $150 million during the outage.

Amazon quickly published a clear explanation and apology, but transparency could not undo the economic damage, nor soften yet another sudden reminder that a single mistake in a centralised system could ripple across the entire web.

Outages in the roaring 2020s

The S3 incident made one thing clear. Outages were no longer just about a single platform going dark. As more services leaned on shared infrastructure, even small missteps could take down enormous parts of the internet. And this fragility did not stop at cloud storage.

Over the next few years, attention shifted to another layer of the online ecosystem: content delivery networks and edge providers that most people had never heard of but that nearly every website depended on.

The 2020s opened with one of the most memorable outages to date. On 4 October 2021, Facebook and its sister platforms, Instagram, WhatsApp, and Messenger, vanished from the internet for nearly 7 hours after a faulty BGP configuration effectively removed the company’s services from the global routing table.

Millions of users flocked to other platforms to vent their frustration, overwhelming the servers of Twitter, Telegram, Discord, and Signal and causing performance issues across the board. It was a rare moment when a single company’s outage sent measurable shockwaves across the entire social media ecosystem.
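Under the hood, the mechanism was brutally simple: once the company’s prefixes were withdrawn, the rest of the internet no longer knew a path to any of its addresses, including its own DNS servers, which is part of why recovery took so long. The simplified Python model below captures the core idea; a real router tracks AS paths, peering sessions and policy, none of which is modelled here.

```python
# Simplified routing table: prefix -> next hop. Real BGP is far richer;
# this only shows why a withdrawal makes everything behind it vanish.

routing_table = {
    "157.240.0.0/16": "peer-A",    # stands in for the company's prefixes
    "129.134.0.0/16": "peer-A",    # includes its own DNS servers
    "198.51.100.0/24": "peer-B",   # unrelated network, unaffected
}

def withdraw(prefixes: list[str]) -> None:
    """A faulty config push withdraws routes; peers forget the path."""
    for p in prefixes:
        routing_table.pop(p, None)

def reach(prefix: str) -> str:
    hop = routing_table.get(prefix)
    return f"{prefix} via {hop}" if hop else f"{prefix}: no route to host"

withdraw(["157.240.0.0/16", "129.134.0.0/16"])   # the bad update
for p in ("157.240.0.0/16", "129.134.0.0/16", "198.51.100.0/24"):
    print(reach(p))
```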

But what happens when outages hit industries far more essential than social media? In 2023, the Federal Aviation Administration was forced to halt departures nationwide, delaying more than 10,000 flights in the first nationwide grounding of air traffic since the aftermath of September 11.

A corrupted database file brought the agency’s Notice to Air Missions (NOTAM) system to a standstill, leaving pilots without critical safety updates and forcing the entire aviation network to pause. The incident sent airline stocks dipping and dealt another blow to public confidence, showing just how disruptive a single technical failure can be when it strikes at the heart of critical infrastructure.

Outages that defined 2025

The year 2025 saw an unprecedented wave of outages, with server overloads, software glitches and coding errors disrupting services across the globe. The Microsoft 365 suite outage in January, the Southwest Airlines and FAA synchronisation failure in April, and the Meta messaging blackout in July all stood out for their scale and impact.

But the most disruptive failures were still to come. In October, Amazon Web Services suffered a major outage in its US-East-1 region, knocking out everything from social apps to banking services and reminding the world that a fault in a single cloud region can ripple across thousands of platforms.

Just weeks later, the Cloudflare November outage became the defining digital breakdown of the year. A logic bug inside its bot management system triggered a cascading collapse that took down social networks, AI tools, gaming platforms, transit systems and countless everyday websites in minutes. It was the clearest sign yet that when core infrastructure falters, the impact is immediate, global and largely unavoidable.
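Cloudflare’s post-mortem described an oversized bot-management feature file tripping a hard size limit, with the resulting error treated as fatal by the software that consumed it. The hypothetical Python sketch below contrasts that fail-hard pattern with a “keep the last good config” fallback; it illustrates the general lesson only and is not Cloudflare’s actual code, which is written in Rust.

```python
MAX_FEATURES = 200                    # hypothetical preallocated limit

last_good_config = ["feature_a", "feature_b"]   # known-working snapshot

def load_config_fail_hard(features: list[str]) -> list[str]:
    # Brittle pattern: one unexpected input crashes request handling.
    if len(features) > MAX_FEATURES:
        raise RuntimeError("feature file exceeds limit")
    return features

def load_config_fail_safe(features: list[str]) -> list[str]:
    # Resilient pattern: reject the bad file but keep serving traffic
    # with the last configuration known to work.
    if len(features) > MAX_FEATURES:
        print("warning: oversized file rejected, keeping last good config")
        return last_good_config
    return features

oversized = [f"feature_{i}" for i in range(500)]   # the bloated file

print(load_config_fail_safe(oversized))            # degrades gracefully
try:
    load_config_fail_hard(oversized)               # crashes instead
except RuntimeError as exc:
    print(f"proxy down: {exc}")
```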

And yet, we continue to place more weight on these shared foundations, trusting they will hold because they usually do. Every outage, whether caused by a typo, a corrupted file, or a misconfigured update, exposes how quickly things can fall apart when one key piece gives way.

Going forward, resilience needs to matter as much as innovation. That means reducing single points of failure, improving transparency, and designing systems that can fail without dragging everything down. The more clearly we see the fragility of the digital ecosystem, the better equipped we are to strengthen it.

Outages will keep happening, and no amount of engineering can promise perfect uptime. But acknowledging the cracks is the first step toward reinforcing what we’ve built, and making sure the next slipped cog does not bring the whole machine to a stop.

The smoke and mirrors of digital infrastructure

The internet is far from destined to collapse, but resilience can no longer be an afterthought. Redundancy, decentralisation and smarter oversight need to be part of the discussion, not just for engineers, but for policymakers as well.

Outages do not just interrupt our routines. They reveal the systems we have quietly built our lives around. Each failure shows how deeply intertwined our digital world has become, and how fast everything can stop when a single piece gives way.

Will we learn enough from each one to build a digital ecosystem that can absorb the next shock instead of amplifying it? Only time will tell.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New ChatGPT layout blends audio, text and maps in one view

OpenAI has unveiled an updated ChatGPT interface that combines voice and text features in a single view. Users can speak naturally at any point in a chat and receive responses in text, audio, or images. The new layout also introduces real-time map displays.

The redesign adds a scrolling transcript within the chat window. It allows users to revisit earlier exchanges and move easily between reading and listening. OpenAI states that the goal is to support voice-led tasks without compromising clarity.

With the unified experience, conversations no longer require switching modes. ChatGPT can deliver audio, written, and visual replies simultaneously. Maps and images appear directly alongside the voice response.

Every spoken message is automatically transcribed, which helps users follow longer discussions and keep a record for later reference. OpenAI says the feature supports both accessibility and everyday convenience.

The update is rolling out gradually across web and mobile platforms. Users who prefer the earlier voice-only layout can revert to it in settings. OpenAI says the unified mode will remain the default as development continues.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude Opus 4.5 brings smarter AI to apps and developers

Anthropic has launched Claude Opus 4.5, now available on apps, API, and major cloud platforms. Priced at $5 per million input tokens and $25 per million output tokens, the update makes Opus-level AI capabilities accessible to a broader range of users, teams, and enterprises.
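At those rates, cost scales linearly with token volume, which is easy to sanity-check. A minimal sketch in Python, with made-up token counts purely for illustration:

```python
INPUT_PRICE = 5.00     # USD per million input tokens
OUTPUT_PRICE = 25.00   # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one call at the listed per-million-token rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a 10,000-token prompt that yields a 2,000-token reply:
print(f"${request_cost(10_000, 2_000):.2f}")   # $0.05 + $0.05 = $0.10
```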

Alongside the model, updates to the Claude Developer Platform and Claude Code introduce new tools for longer-running agents and enhanced integration with Excel, Chrome, and desktop apps.

Early tests indicate that Opus 4.5 can handle complex reasoning and problem-solving with minimal guidance. It outperforms previous versions on coding, vision, reasoning, and mathematics benchmarks, and even surpasses top human candidates in technical take-home exams.

The model demonstrates creative approaches to multi-step problems while remaining aligned with safety and policy constraints.

Significant improvements have been made to robustness and security. Claude Opus 4.5 resists prompt injection and handles complex tasks with less intervention through effort controls, context compaction, and multi-agent coordination.

Users can manage token usage more efficiently while achieving superior performance.

Claude Code now offers Plan Mode and desktop functionality for multiple simultaneous sessions, and consumer apps support uninterrupted long conversations. Beta access for Excel and Chrome lets enterprise and team users fully utilise Opus 4.5’s workflow improvements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s results fail to ease AI bubble fears

Record profits and year-on-year revenue growth above 60 percent have put Nvidia at the centre of debate over whether the surge in AI spending signals a bubble or a long-term boom.

CEO Jensen Huang and CFO Colette Kress dismissed concerns about the bubble, highlighting strong demand and expectations of around $65 billion in revenue for the next quarter.

Executives forecast global AI infrastructure spending could reach $3–4 trillion annually by the end of the decade as both generative AI and traditional cloud computing workloads increasingly run on GPUs.

Widespread adoption by major partners, including Meta, Anthropic and Salesforce, suggests lasting momentum rather than short-term hype.

Analysts generally agree that Nvidia’s performance remains robust, but questions persist over the sustainability of heavy investment in AI. Investors continue to monitor whether Big Tech can maintain this pace and if highly leveraged customers might expose Nvidia to future risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland confronts rising energy strain from data centres

Ireland faces mounting pressure over soaring electricity use from data centres clustered around Dublin. Facilities powering global tech giants have grown into major energy consumers, together accounting for over a fifth of national demand.

The load could reach 30 percent by 2030 as expanding cloud and AI services drive further growth. Analysts warn that rising consumption threatens climate commitments and places significant strain on grid stability.

Campaigners argue that data centres monopolise renewable capacity while pushing Ireland towards potential EU emissions penalties. Some local authorities have already blocked developments due to insufficient grid capacity and limited on-site green generation.

Sector leaders fear stalled projects and uncertain policy may undermine Ireland’s role as a digital hub. Investment risks remain high unless upgrades, clearer rules and balanced planning reduce the pressure on national infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nokia to invest $4 billion in AI-ready US networks

Nokia has announced a $4 billion expansion of its US research, development, and manufacturing operations to accelerate AI-ready networking technologies. The move builds on Nokia’s earlier $2.3 billion US investment via Infinera and semiconductor manufacturing plans.

The expanded investment will support mobile, fixed access, IP, optical, data centre networking, and defence solutions. Approximately $3.5 billion will be allocated for R&D, with $500 million dedicated to manufacturing and capital expenditures in Texas, New Jersey, and Pennsylvania.

Nokia aims to advance AI-optimised networks with enhanced security, productivity, and energy efficiency. The company will also focus on automation, quantum-safe networks, semiconductor testing, and advanced material sciences to drive innovation.

Officials highlight the strategic impact of Nokia’s US investment. Secretary of Commerce Howard Lutnick praised the plan for boosting US tech capacity, while CEO Justin Hotard said it would secure the future of AI-driven networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smarter AI processing could lead to cleaner air, say UCR engineers

As AI continues to scale rapidly, the environmental cost of powering massive data centres is becoming increasingly urgent. Machines require substantial amounts of electricity and water to stay cool, and a significant portion of this energy comes from fossil-fuel sources.

Scientists at UC Riverside’s Bourns College of Engineering, led by Professors Mihri and Cengiz Ozkan, have proposed a novel solution called Federated Carbon Intelligence (FCI). Their system doesn’t just prioritise low-carbon energy; it also monitors the health of servers in real time to decide where and when AI tasks should be run.

Using simulations, the team found that FCI could reduce carbon dioxide emissions by up to 45 percent over five years and extend the operational life of hardware by about 1.6 years.

Their model takes into account server temperature, age and physical wear, and dynamically routes computing workloads to optimise both environmental and machine-health outcomes.
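The exact scoring function isn’t given in this summary, but the general shape of carbon- and health-aware placement is straightforward to sketch. In the Python illustration below, every field name, weight and number is invented for the example; it is not the UCR team’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    carbon_intensity: float   # gCO2 per kWh on the local grid
    temperature_c: float      # current operating temperature
    age_years: float          # crude proxy for accumulated wear

def placement_score(s: Server) -> float:
    """Lower is better: a hypothetical blend of grid carbon and machine health."""
    health_penalty = s.temperature_c / 80 + s.age_years / 10
    return 0.7 * (s.carbon_intensity / 500) + 0.3 * health_penalty

fleet = [
    Server("coal-heavy-grid", carbon_intensity=450, temperature_c=70, age_years=4),
    Server("hydro-grid", carbon_intensity=30, temperature_c=45, age_years=2),
    Server("solar-midday", carbon_intensity=60, temperature_c=75, age_years=6),
]

best = min(fleet, key=placement_score)
print(f"route workload to: {best.name}")   # hydro-grid with these numbers
```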

Unlike other approaches that only shift workloads to regions with cleaner energy, FCI also addresses the embodied emissions of manufacturing new servers. Keeping current hardware running longer and more efficiently helps reduce the carbon footprint associated with production.

If adopted by cloud providers, this adaptive system could mark a significant milestone in the sustainable development of AI infrastructure, one that aligns compute demand with both performance and ecological goals. The researchers are now calling for pilots in real data centres.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Armenia promotes AI partnership during ambassador’s meeting with Apple in Cupertino

Armenia’s ambassador, Narek Mkrtchyan, has met senior Apple representatives in Cupertino to discuss expanding the company’s activities in the country. The visit included talks with Jason Lundgaard, Apple’s senior director for international cooperation at corporate government affairs.

The ambassador outlined the Armenia-US memorandum on AI and semiconductor cooperation signed on 8 August and highlighted Armenia’s technology ecosystem and investment potential. Both sides explored areas for collaboration and the conditions under which Apple could expand its presence.

Apple plans to send a delegation to Armenia in the coming period to assess opportunities for growth and engagement with local institutions. The discussions signalled early steps toward a more structured partnership.

During the meeting, the ambassador thanked Mr Lundgaard for supporting the launch of Apple’s first educational programme at the Armenian College of Creative Technologies. The initiative forms part of a wider effort to strengthen skills development in Armenia’s digital sector.

Both sides reiterated their commitment to deepen cooperation and expand the educational partnership as Armenia positions itself as a regional hub for advanced technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI data centre boom drives global spike in memory chip prices

The rapid expansion of AI data centres is pushing up memory chip prices and straining an already tight supply chain. DRAM costs are rising as manufacturers prioritise high-bandwidth memory (HBM) for AI systems, leaving fewer components available for consumer devices.

The shift is squeezing supply across sectors that depend on standard DRAM, from PCs and smartphones to cars and medical equipment. Analysts say the imbalance is driving up component prices quickly, with Samsung reportedly raising some memory prices by as much as 60%.

Rising demand for HBM reflects the needs of AI clusters, which rely on vast memory pools alongside GPUs, CPUs and storage. But with only a handful of major suppliers, including Samsung, SK Hynix, and Micron, the surge is pushing prices across the market higher.

Industry researchers warn that rising memory costs will likely be passed on to consumers, especially in lower-priced laptops and embedded systems. Makers may switch to cheaper parts or push suppliers for concessions, but the overall price trend remains upward.

While memory is known for cyclical booms and busts, analysts say the global race to build AI data centres makes it difficult to predict when supply will stabilise. Until then, higher memory prices look set to remain a feature of the market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tech groups welcome EU reforms as privacy advocates warn of retreat

The EU has unveiled plans to scale back certain aspects of its AI and data privacy rules to revive innovation and alleviate regulatory pressure on businesses. The Digital Omnibus package delays stricter oversight for high-risk AI until 2027 and permits the use of anonymised personal data for model training.

The reforms amend the AI Act and several digital laws, cutting cookie pop-ups and simplifying documentation requirements for smaller firms. EU tech chief Henna Virkkunen says the aim is to boost competitiveness by removing layers of rigid regulation that have hindered start-ups and SMEs.

US tech lobby groups welcomed the overall direction. Still, they criticised the package for not going far enough, particularly on compute thresholds for systemic-risk AI and copyright provisions with cross-border effects. They argue the reforms only partially address industry concerns.

Privacy and digital rights advocates sharply opposed the changes, warning they represent a significant retreat from Europe’s rights-centric regulatory model. Groups including NOYB accused Brussels of undermining hard-won protections in favour of Big Tech interests.

Legal scholars say the proposals could shift Europe closer to a more permissive, industry-driven approach to AI and data use. They warn that the reforms may dilute the EU’s global reputation as a standard-setter for digital rights, just as the world seeks alternatives to US-style regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!