AWS outage shows the cost of cloud concentration

A single fault can bring down the modern web. During the outage on Monday, 20 October 2025, millions woke to broken apps, games, banking, and tools after database errors at Amazon Web Services rippled outward. When a shared backbone stumbles, the blast radius engulfs everything from chat to commerce.

The outage underscored cloud concentration risk. Roblox, Fortnite, Pokémon Go, Snapchat, and workplace staples like Slack and Monday.com stumbled together because many depend on the same region and data layer. Failover, throttling, and retries help, but simultaneous strain can swamp safeguards.

On Friday, 19 July 2024, a faulty CrowdStrike update crashed Windows machines worldwide, triggering blue screens that grounded flights, delayed surgeries, and froze point-of-sale systems. The fix was simple; recovery wasn’t. Friday patches gained a new cautionary tale.

Earlier shocks foreshadowed today’s scale. In 1997, a Network Solutions glitch briefly hobbled .com and .net. In 2018, malware in Alaska’s Matanuska-Susitna knocked services offline, sending a community of 100,000 back to paper. Each incident showed how mundane errors cascade into civic life.

Resilience now means multi-region designs, cross-cloud failovers, tested runbooks, rate-limit backstops, and graceful read-only modes. Add regulatory stress tests, clear incident comms, and sector drills with hospitals, airlines, and banks. The internet will keep breaking; our job is to make it bend.
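Safeguards like retries and rate-limit backstops only help if they are bounded; unbounded, synchronised retries can themselves swamp a recovering service. A minimal sketch of one such safeguard, capped exponential backoff with full jitter (illustrative names, not any provider's SDK):

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry fn with capped exponential backoff and full jitter.

    Bounded attempts and randomised delays keep client retries from
    amplifying an outage the way synchronised retry storms can.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            # full jitter: sleep a random slice of the capped window
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))

# Example: a flaky dependency that fails twice, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("backend unavailable")
    return "ok"

print(call_with_backoff(flaky))  # "ok" after two retried failures
```

Managed SDKs ship similar logic built in; the point is that bounded attempts and jittered delays stop clients from hammering a struggling backend in lockstep.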

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart abandoners with discounts while charging infrequent shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments suggest targeted discounts can lift sales while capping promo spend, evidence that individually tailored prices work at scale. The result: you may not see a ‘standard’ price, but one designed for you.
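To make the mechanics concrete, here is a toy sketch of how such a profile might be turned into an individual price. The signals and multipliers are invented for illustration, not drawn from any real retailer's system:

```python
def personalised_price(list_price, profile):
    """Toy illustration of profile-based price tailoring.

    The profile fields (abandoned_cart, visits, premium_device) are
    hypothetical signals, not any retailer's real model.
    """
    price = list_price
    if profile.get("abandoned_cart"):
        price *= 0.90   # win-back discount for cart abandoners
    if profile.get("visits", 0) < 2:
        price *= 1.05   # premium for rare, presumed less price-sensitive visitors
    if profile.get("premium_device"):
        price *= 1.02   # device type as a crude income proxy
    return round(price, 2)

# Two shoppers, same product, different prices at the same moment.
print(personalised_price(100.0, {"abandoned_cart": True, "visits": 9}))   # 90.0
print(personalised_price(100.0, {"visits": 1, "premium_device": True}))   # 107.1
```

Even this toy version shows why regulators worry: the second shopper never sees the first shopper's price, and the income proxy is applied silently.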

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. A competition watchdog in Australia has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.

AWS glitch triggers widespread outages across major apps

A major internet outage hit some of the world’s biggest apps and sites from about 9 a.m. CET Monday, with issues traced to Amazon Web Services. Tracking sites reported widespread failures across the US and beyond, disrupting consumer and enterprise services.

AWS cited ‘significant error rates’ in DynamoDB requests in the US-EAST-1 region, impacting additional services in Northern Virginia. Engineers worked to mitigate the issue while investigating the root cause, and some customers could not create or update Support Cases.

Outages clustered around Virginia’s dense data-centre corridor but rippled globally. Impacted brands included Amazon, Google, Snapchat, Roblox, Fortnite, Canva, Coinbase, Slack, Signal, Vodafone and the UK tax authority HMRC.

Coinbase told users ‘all funds are safe’ as platforms struggled to authenticate, fetch data and serve content tied to affected back-ends. Third-party monitors noted elevated failure rates across APIs and app logins.

The incident underscores heavy reliance on hyperscale infrastructure and the blast radius when core data services falter. Full restoration and a formal post-mortem are pending from AWS.

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate deepfake, email, and text-scam detection, underscores a consumer-focused defence. The award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels, with one in three US residents reporting victimisation and average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.

A common EU layer for age verification without a single age limit

Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As Council presidency, Copenhagen prioritises child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.

Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving tools. An updated blueprint is being piloted in five states and aligns with the EU Digital Identity Wallet, which is due by the end of 2026. Goal: seamless, cross-border checks with minimal data exposure.

Copenhagen’s domestic agenda moves in parallel with a proposed ban on under-15 social media use. The government will consult national parties and EU partners on the scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.

The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.

Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.

Ethernet wins in raw security, but Wi-Fi can compete with the right setup

The way you connect to the internet matters: not just for speed, but for your privacy and security too. That’s the main takeaway from a recent Fox News report comparing Ethernet and Wi-Fi security.

At its core, Ethernet is more secure in most scenarios because tapping it requires physical access. Data travels along a cable directly to your router, reducing the risk of eavesdropping or mid-air signal interception.

Wi-Fi, by contrast, sends data through the air. That makes it more vulnerable, especially if a network uses weak passwords or outdated encryption standards. Attackers within signal range might exploit poorly secured networks.

But Ethernet isn’t a guaranteed fortress. The Fox article emphasises that security depends largely on your entire setup. A Wi-Fi network with strong encryption (ideally WPA3), robust passwords, regular firmware updates, and a well-configured router can approach the security of a wired connection.
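Those safeguards can be expressed as a simple checklist. The sketch below encodes the article's advice as illustrative rules; the config fields are hypothetical, not a real router API:

```python
def wifi_weaknesses(config):
    """Flag common weaknesses in a Wi-Fi setup (illustrative rules only).

    The config keys (encryption, password_length, firmware_current) are
    hypothetical fields, not taken from any real router's interface.
    """
    issues = []
    if config.get("encryption") not in ("WPA3", "WPA2"):
        issues.append("outdated or missing encryption (aim for WPA3)")
    if config.get("password_length", 0) < 12:
        issues.append("short passphrase (use 12+ characters)")
    if not config.get("firmware_current", False):
        issues.append("stale firmware (apply router updates)")
    return issues

print(wifi_weaknesses({"encryption": "WEP", "password_length": 8}))
print(wifi_weaknesses({"encryption": "WPA3", "password_length": 16,
                       "firmware_current": True}))  # [] - nothing to flag
```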

Each device you connect (smartphones, smart home gadgets, IoT sensors) increases your network’s exposure. Wi-Fi amplifies that risk since more devices can join wirelessly. Ethernet limits the number of direct connection points, which reduces the attack surface.

In short, Ethernet gives you a baseline security advantage, but a well-secured Wi-Fi network can be quite robust. The critical factor is how carefully you manage your network settings and devices.

Microsoft ends support for Windows 10

Windows 10 support ends on Tuesday, 14 October 2025, and routine security patches and fixes will no longer be provided. Devices will face increased cyber risk without updates. Microsoft urges upgrades to Windows 11 where possible.

Windows powers more than 1.4 billion devices, with Windows 10 still widely used. UK consumer group Which? estimates 21 million users in the UK alone. Some plan to keep using it regardless, citing cost, waste, and working hardware.

Upgrading to Windows 11 is free for eligible PCs via the Settings app. Others can enrol in Extended Security Updates (ESU), which deliver security fixes only until October 2026. ESU offers no technical support or feature updates.

Personal users in the European Economic Area can register for ESU at no charge. Elsewhere, some users may qualify for free enrolment; otherwise ESU costs $30 or 1,000 Microsoft Rewards points. Businesses pay $61 per device for the first year.

Unsupported systems become easier targets for malware and scams, and some software may degrade over time. Organisations risk compliance issues running out-of-support platforms. Privacy-minded users may also dislike Windows 11’s tighter Microsoft account requirements.

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.
