Allianz breach affects most US customers

Allianz Life has confirmed a major cyber breach that exposed sensitive data from most of its 1.4 million customers in North America.

The attack was traced back to 16 July, when a threat actor accessed a third-party cloud system using social engineering tactics.

The cybersecurity breach affected a customer relationship management platform but did not compromise the company’s core network or policy systems.

Allianz Life acted swiftly, notifying the FBI and regulators including the Maine attorney general’s office.

Those affected are being offered two years of credit monitoring and identity theft protection. The company has begun contacting affected individuals but declined to reveal the full number involved, citing an ongoing investigation.

No other Allianz subsidiaries were affected by the breach. Allianz Life employs around 2,000 staff in the US and remains a key player within the global insurer’s North American operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Viasat launches global IoT satellite service

Viasat has unveiled a new global connectivity service designed to improve satellite-powered internet of things (IoT) communication, even in remote environments. The new offering, IoT Nano, supports industries such as agriculture, mining and transport with reliable, low-data, low-power two-way messaging.

The service builds on Orbcomm’s upgraded OGx platform, delivering faster message speeds, greater data capacity and improved energy efficiency. It maintains compatibility with older systems while allowing for advanced use cases through larger messages and reduced power needs.

Executives at Viasat and Orbcomm believe IoT Nano opens up new opportunities by combining wider satellite coverage with smarter, more frequent data delivery. The service is part of Viasat’s broader effort to expand its scalable and energy-efficient satellite IoT portfolio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces age checks to block harmful online content for children

The United Kingdom has introduced new age verification laws to prevent children from accessing harmful online content, marking a significant shift in digital child protection.

The measures, enforced by media regulator Ofcom, require websites and apps to implement strict age checks such as facial recognition and credit card verification.

Around 6,000 pornography websites have already agreed to comply with the new rules, which stem from the 2023 Online Safety Act. The rules also target content related to suicide, self-harm, eating disorders and online violence, rather than focusing solely on pornography.

Companies failing to comply risk fines of up to £18 million or 10% of global revenue, and senior executives could face criminal charges if they ignore Ofcom’s directives.

Technology Secretary Peter Kyle described the move as a turning point, saying children will now experience a ‘different internet for the first time’.

Ofcom data shows that around 500,000 children aged eight to fourteen encountered online pornography in just one month, highlighting the urgency of the reforms. Campaigners, including the NSPCC, called the new rules a ‘milestone’, though they warned loopholes could remain.

The UK government is also exploring further restrictions, including a potential daily two-hour time limit on social media use for under-16s. Kyle has promised more announcements soon, as Britain moves to hold tech platforms accountable instead of leaving children exposed to harmful content online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI forces rethink of cloud infrastructure

Cybersecurity experts warn that reliance on traditional firewalls and legacy VPNs may now create more risk than protection. These outdated tools often lack timely updates, making them prime entry points for cyber attackers exploiting AI-powered techniques.

Many businesses depend on ageing infrastructure, unaware that unpatched VPNs and web servers expose them to significant cybersecurity threats. Experts urge companies to abandon these legacy systems and modernise their defences with more adaptive, zero-trust models.
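For readers unfamiliar with the term, the minimal Python sketch below illustrates the core zero-trust principle: every request carries a signed identity token that is verified and authorised on each call, rather than being trusted because it arrives from an ‘internal’ network. The token format and policy check are simplified assumptions, not a reference implementation.

```python
import base64
import hashlib
import hmac
import json

# Illustrative shared secret; real deployments would use an identity provider
# and asymmetric keys rather than a hard-coded value.
SECRET = b"demo-secret"

def sign_token(claims: dict) -> str:
    """Issue a toy token: base64-encoded JSON claims plus an HMAC signature."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def authorise(token: str, resource: str) -> bool:
    """Zero-trust style check: verify the token and the caller's rights on
    every request, regardless of which network the request came from."""
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or unsigned request
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return resource in claims.get("allowed_resources", [])

if __name__ == "__main__":
    token = sign_token({"sub": "user-42", "allowed_resources": ["reports"]})
    print(authorise(token, "reports"))   # True: identity and policy both check out
    print(authorise(token, "payroll"))   # False: no implicit trust in the caller
```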

Meanwhile, OpenAI’s reported plans for a productivity suite challenge Microsoft’s dominance, promising simpler interfaces powered by generative AI. The shift could reshape daily workflows by integrating document creation directly with AI tools.

Agentic AI, which performs autonomous tasks without human oversight, also redefines enterprise IT demands. Experts believe traditional cloud tools cannot support such complex systems, prompting calls to rethink cloud strategies for more tailored, resilient platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US push for AI dominance through openness

In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a sweeping AI Action Plan with 103 recommendations. At its core lies an intriguing paradox: the push for open-source AI, typically associated with collaboration and transparency, is now being positioned as a strategic weapon.

As Jovan Kurbalija points out, this plan marks a turning point where open-weight models are framed not just as tools of innovation, but as instruments of geopolitical influence, with the US aiming to seed the global AI ecosystem with American-built systems rooted in ‘national values.’

The plan champions Silicon Valley by curbing regulations, limiting federal scrutiny, and shielding tech giants from legal liability—potentially reinforcing monopolies. It also underlines a national security-first mentality, urging aggressive safeguards against foreign misuse of AI, cyber threats, and misinformation. Notably, it proposes DARPA-led initiatives to unravel the inner workings of large language models, acknowledging that even their creators often can’t fully explain how these systems function.

Internationally, the plan takes a competitive, rather than cooperative, stance. Allies are expected to align with US export controls and values, while multilateral forums like the UN and OECD are dismissed as bureaucratic and misaligned. That bifurcation risks alienating global partners—particularly the EU, which favours heavy AI regulation—while increasing pressure on countries like India and Japan to choose sides in the US–China tech rivalry.

Despite its combative framing, the strategy also nods to inclusion and workforce development, calling for tax-free employer-sponsored AI training, investment in apprenticeships, and growing military academic hubs. Still, as Kurbalija warns, the promise of AI openness may clash with the plan’s underlying nationalistic thrust—raising questions about whether it truly aims to democratise AI, or merely dominate it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women-only dating app Tea suffers catastrophic data leak

Tea, a women-only dating app, has suffered a massive data breach after its backend was found completely unsecured. Over 72,000 private images and more than 13,000 government-issued IDs were leaked online.

Some documents were dated as recently as 2025, contradicting the company’s claim that only ‘old data’ was affected. The data, totalling 59.3 GB, included verification selfies, DMs, and public posts. It spread rapidly through 4chan and decentralised platforms like BitTorrent.

Critics have blamed Tea’s use of ‘vibe coding’, the practice of shipping AI-generated code without proper review, which reportedly left its Firebase database open with no authentication.
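As an illustration of that failure mode, the sketch below probes whether a Firebase Realtime Database-style backend answers unauthenticated REST requests. The project URL and path are hypothetical placeholders, and whether Tea’s backend matched this exact setup has not been confirmed.

```python
import requests

# Hypothetical project URL, used purely for illustration (not Tea's backend).
DB_URL = "https://example-project-default-rtdb.firebaseio.com"

def is_publicly_readable(path: str = "/") -> bool:
    """Return True if the database path can be read without any credentials.

    Firebase's Realtime Database exposes every node as JSON at <db>/<path>.json;
    with permissive security rules (e.g. '".read": true'), this request succeeds
    with no auth token at all, which is the misconfiguration described above.
    """
    resp = requests.get(f"{DB_URL}{path}.json", timeout=10)
    return resp.status_code == 200  # 401/403 would mean the rules demand auth

if __name__ == "__main__":
    print("Unauthenticated read allowed:", is_publicly_readable())
```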

Experts warn that relying on AI tools to build apps without security checks is becoming increasingly risky. Research shows nearly half of AI-generated code contains vulnerabilities, yet many startups still use it for core features. Tea users are now urged to monitor their identity and financial data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NASA launches twin satellites to study space storms

NASA has launched two TRACERS satellites aboard a SpaceX Falcon 9 rocket to study space weather near Earth’s poles.

These identical spacecraft will probe the polar cusps in Earth’s magnetic field to better understand the origins and dynamics of geomagnetic storms.

Magnetic reconnection, the process in which the solar wind’s magnetic field merges with Earth’s magnetosphere, is central to triggering auroras and potentially damaging solar storms.

Using tandem satellites, scientists can now monitor changes in real time, offering insights that a single spacecraft could not provide.

The mission aims to record around 3,000 reconnection events over the next year, helping researchers determine how solar energy enters Earth’s system.

By doing so, they hope to improve forecasting of disruptive space weather events that can impact GPS, satellites, and power grids.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft replaces the blue screen of death with a sleek black version in Windows 11

Microsoft has officially removed the infamous Blue Screen of Death (BSOD) from Windows 11 and replaced it with a sleeker, black version.

As part of the update KB5062660, the Black Screen of Death now appears briefly—around two seconds—before a restart, showing only a short error message without the sad face or QR code that became symbolic of Windows crashes.

The update, which brings systems to Build 26100.4770, is optional and must be installed manually through Windows Update or the Microsoft Update Catalog.

It is available for both x64 and arm64 platforms. Microsoft plans to roll out the update more broadly in August 2025 as part of its Windows 11 24H2 feature preview.

In addition to the screen change, the update introduces ‘Recall’ for EU users, a tool designed to operate locally and allow users to block or turn off tracking across apps and websites. The feature aims to comply with European privacy rules while enhancing user control.

Also included is Quick Machine Recovery, which can identify and fix system-wide failures using the Windows Recovery Environment. If a device becomes unbootable, it can download a repair patch automatically to restore functionality instead of requiring manual intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Network failure hits EE, BT and affects other UK providers

Thousands of EE and BT customers across the UK encountered widespread network failures on 24 July, primarily affecting voice services.

The outage, lasting over 24 hours, disrupted mobile and landline calls. Over 2,600 EE users reported issues on Downdetector at the peak, around 2:15 p.m. BST. Despite repair efforts, residual outages were still being logged the following day.

Although Vodafone and Three initially confirmed their networks were stable, users who had recently switched carriers or ported numbers from EE experienced failures when making or receiving calls. This suggests cross-network routing issues stemming from EE’s technical fault.

Emergency services were briefly impacted; some users could not reach 999, though voice functionality has since resumed. BT and EE apologised and said they were working urgently to restore reliable service.

Given statutory obligations around service resilience, Ofcom has opened inquiries into the scale and causes of the outage. MVNOs using EE infrastructure, such as 1pMobile, also reported customer disruptions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!