Allianz breach affects most US customers

Allianz Life has confirmed a major cyber breach that exposed sensitive data from most of its 1.4 million customers in North America.

The attack was traced back to 16 July, when a threat actor accessed a third-party cloud system using social engineering tactics.

The cybersecurity breach affected a customer relationship management platform but did not compromise the company’s core network or policy systems.

Allianz Life acted swiftly by notifying the FBI and other regulators, including the attorney general’s office in Maine.

Those impacted are being offered two years of credit monitoring and identity theft protection. The company has begun contacting affected individuals but declined to reveal the full number involved, citing an ongoing investigation.

No other Allianz subsidiaries were affected by the breach. Allianz Life employs around 2,000 staff in the US and remains a key player within the global insurer’s North American operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces age checks to block harmful online content for children

The United Kingdom has introduced new age verification laws to prevent children from accessing harmful online content, marking a significant shift in digital child protection.

The measures, enforced by media regulator Ofcom, require websites and apps to implement strict age checks such as facial recognition and credit card verification.

Around 6,000 pornography websites have already agreed to the new regulations, which stem from the 2023 Online Safety Act. The rules also target content related to suicide, self-harm, eating disorders and online violence, not just pornography.

Companies failing to comply risk fines of up to £18 million or 10% of global revenue, and senior executives could face criminal charges if they ignore Ofcom’s directives.

Technology Secretary Peter Kyle described the move as a turning point, saying children will now experience a ‘different internet for the first time’.

Ofcom data shows that around 500,000 children aged eight to fourteen encountered online pornography in just one month, highlighting the urgency of the reforms. Campaigners, including the NSPCC, called the new rules a ‘milestone’, though they warned loopholes could remain.

The UK government is also exploring further restrictions, including a potential daily two-hour time limit on social media use for under-16s. Kyle has promised more announcements soon, as Britain moves to hold tech platforms accountable instead of leaving children exposed to harmful content online.


Agentic AI forces rethink of cloud infrastructure

Cybersecurity experts warn that reliance on traditional firewalls and legacy VPNs may pose greater risks than protection. These outdated tools often lack timely updates, making them prime entry points for cyber attackers exploiting AI-powered techniques.

Many businesses depend on ageing infrastructure, unaware that unpatched VPNs and web servers expose them to significant cybersecurity threats. Experts urge companies to abandon these legacy systems and modernise their defences with more adaptive, zero-trust models.

Meanwhile, OpenAI’s reported plans for a productivity suite challenge Microsoft’s dominance, promising simpler interfaces powered by generative AI. The shift could reshape daily workflows by integrating document creation directly with AI tools.

Agentic AI, which performs autonomous tasks without human oversight, also redefines enterprise IT demands. Experts believe traditional cloud tools cannot support such complex systems, prompting calls to rethink cloud strategies for more tailored, resilient platforms.


Women-only dating safety app Tea suffers catastrophic data leak

Tea, a women-only dating safety app, has suffered a massive data breach after its backend was found completely unsecured. Over 72,000 private images and more than 13,000 government-issued IDs were leaked online.

Some documents were dated as recently as 2025, contradicting the company’s claim that only ‘old data’ was affected. The data, totalling 59.3 GB, included verification selfies, DMs, and public posts. It spread rapidly through 4chan and decentralised platforms like BitTorrent.

Critics have blamed Tea's use of 'vibe coding', the practice of shipping AI-generated code without proper review, which reportedly left its Firebase database open with no authentication.
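The published details do not include Tea's actual configuration, but the failure mode described is common enough to illustrate. A Firebase Realtime Database left with open 'test mode' rules sets `.read` and `.write` to `true` at the top level, granting anyone on the internet full access with no authentication. A minimal lockdown instead ties access to a signed-in user; the `users` path and `$uid` wildcard below are illustrative, not taken from Tea's schema:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

Rules like these deny unauthenticated requests outright and scope each user to their own records, which is precisely the check that researchers say was missing when they describe a database as 'left open'.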

Experts warn that relying on AI tools to build apps without security checks is becoming increasingly risky. Research shows nearly half of AI-generated code contains vulnerabilities, yet many startups still use it for core features. Tea users are now urged to monitor their identity and financial data.


Network failure hits EE, BT and affects other UK providers

Thousands of EE and BT customers across the UK encountered widespread network failures on 24 July, primarily affecting voice services.

The outage, lasting over 24 hours, disrupted mobile and landline calls. Over 2,600 EE users reported issues on Downdetector at the peak of the incident, around 2:15 p.m. BST. Despite repair efforts, residual outages were still being logged the following day.

Although Vodafone and Three confirmed their networks were stable, users who had recently switched carriers or ported numbers from EE experienced failures when making or receiving calls, suggesting cross-network routing issues triggered by EE's technical fault.

Emergency services were briefly impacted, with some users unable to reach 999, though voice functionality has since resumed. BT and EE apologised and said they were working urgently to restore reliable service.

Given statutory obligations around service resilience, Ofcom has opened inquiries into the scale and causes of the outage. MVNOs using EE's infrastructure, such as 1pMobile, also reported customer disruptions.


Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.


AI and quantum tech reshape global business

AI and quantum computing are reshaping global industries as investment surges and innovation accelerates across sectors like finance, healthcare and logistics. Microsoft and Amazon are driving a major shift in AI infrastructure, transforming cloud services into profitable platforms.

Quantum computing is moving beyond theory, with real-world applications emerging in pharmaceuticals and e-commerce. Google’s development of quantum-inspired algorithms for virtual shopping and faster analytics demonstrates its potential to revolutionise decision-making.

Sustainability is also gaining ground, with companies adopting AI-powered solutions for renewable energy and eco-friendly manufacturing. At the same time, digital banks are integrating AI to challenge legacy finance systems, offering personalised, accessible services.

Despite rapid progress, ethical concerns and regulatory challenges are mounting. Data privacy, AI bias, and antitrust issues highlight the need for responsible innovation, with industry leaders urged to balance risk and growth for long-term societal benefit.


Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford's Institute for Human-Centered AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training and transparency. Despite widespread adoption, AI scepticism persists due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Inviting a broader demographic into AI governance, Roberts urged more people—especially those outside tech hubs like Silicon Valley—to shape its future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’


DeepSeek and others gain traction in US and EU

A recent survey has found that most users in the US and the EU are open to using Chinese large language models, even amid ongoing political and cybersecurity scrutiny.

According to the report, 71 percent of respondents in the US and 87 percent in the EU would consider adopting models developed in China.

The findings highlight increasing international curiosity about the capabilities of Chinese AI firms such as DeepSeek, which have recently attracted global attention.

While the technology is gaining credibility, many Western users remain cautious about data privacy and infrastructure control.

More than half of those surveyed said they would only use Chinese AI models if hosted outside China, suggesting that while trust in the models’ performance is growing, concerns over data governance remain a significant barrier to adoption.

The results come amid heightened global competition in the AI race, with Chinese developers rapidly advancing to challenge US-based leaders. DeepSeek and similar firms now face the challenge of balancing global outreach with geopolitical limitations.


Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First previewed in May 2023 and made widely available in the US by mid-2024, the feature has rapidly expanded across more than 200 countries and 40 languages.

The widespread use of AI Overviews transforms how people search and who benefits. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by the Pew Research Center shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.
