Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cybercriminals shift to stolen credentials and AI-enabled attacks

Ransomware attacks are increasingly relying on stolen passwords rather than traditional malware, according to Cloudflare’s latest annual threat report. Attackers now exploit legitimate account credentials to blend into regular traffic, making breaches harder to detect and contain.

Manufacturing and critical infrastructure organisations account for over half of targeted attacks, reflecting their high operational stakes.

Cloudflare highlighted that AI is enabling attackers to prioritise speed and scale over technical sophistication. Generative AI lets criminals automate fraud, hijack email threads, and target payments around a $49,000 sweet spot, large enough to be profitable yet small enough to escape scrutiny.

Nation-state actors also leverage legitimate platforms for command-and-control operations, with Russia, China, Iran, and North Korea each following distinct cyber strategies.

Researchers warned that modern ransomware is less a malware crisis and more an identity and access challenge. Attackers using authorised credentials can bypass defences and execute high-impact extortion, marking a significant shift in global threat vectors.
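
The 'identity and access' framing suggests a simple defensive heuristic: treat a valid login from an unfamiliar context as suspicious even though the credentials check out. A minimal sketch of that idea, where the log fields and sample records are illustrative assumptions, not any vendor's actual telemetry schema:

```python
from collections import defaultdict

# Flag logins whose (country, device) combination has never been seen
# for that user before -- a minimal "identity, not malware" signal.
def flag_novel_logins(logins):
    seen = defaultdict(set)          # user -> set of (country, device) pairs
    alerts = []
    for event in logins:
        key = (event["country"], event["device"])
        if seen[event["user"]] and key not in seen[event["user"]]:
            alerts.append(event)     # valid credentials, unfamiliar context
        seen[event["user"]].add(key)
    return alerts

logins = [
    {"user": "ops1", "country": "DE", "device": "laptop-a"},
    {"user": "ops1", "country": "DE", "device": "laptop-a"},
    {"user": "ops1", "country": "RU", "device": "vm-x"},   # flagged
]
print(flag_novel_logins(logins))
```

Real deployments layer many such signals (ASN, time of day, impossible travel), but the core shift the researchers describe is exactly this: watching how credentials are used, not scanning for malware.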

The report urges businesses to strengthen identity security, monitor access, and defend against AI-driven attacks that exploit impersonation and automation at scale.

AI cybersecurity stability framework unlocks advanced Non-Human Identity management

AI is increasingly positioned as a key driver of cybersecurity stability. By analysing large volumes of data and detecting anomalies in real time, AI helps organisations strengthen defence systems and respond faster to evolving digital threats.

Modern cybersecurity challenges are closely linked to the rise of Non-Human Identities (NHIs), including machine accounts, tokens, and automated credentials. These identities require continuous monitoring and secure lifecycle management to prevent unauthorised access and data breaches.

The integration of AI with NHI management enables a more proactive security approach. AI improves visibility into access permissions and system behaviour, helping organisations reduce risks and maintain stronger control over their digital environments.

Automation powered by AI enhances operational efficiency across cybersecurity processes. Tasks such as credential rotation, access monitoring, and policy enforcement can be automated, allowing security teams to prioritise strategic decision-making.
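
The credential-rotation task mentioned above can be sketched in a few lines. This is a minimal illustration against a hypothetical in-memory store with a daily rotation window; a real deployment would call a secrets-manager API instead of mutating a dictionary:

```python
import secrets
import time

ROTATION_SECONDS = 24 * 3600  # rotate machine credentials daily (assumed policy)

def rotate_if_stale(identity, store, now=None):
    """Issue a fresh token for a non-human identity once its lifetime expires."""
    now = time.time() if now is None else now
    record = store[identity]
    if now - record["issued_at"] >= ROTATION_SECONDS:
        record["token"] = secrets.token_urlsafe(32)   # new credential
        record["issued_at"] = now                     # reset lifetime
        return True   # rotated
    return False      # still fresh

store = {"build-bot": {"token": secrets.token_urlsafe(32), "issued_at": 0}}
rotated = rotate_if_stale("build-bot", store, now=90_000)  # > 24h since issue
print(rotated)  # True
```

Scheduling this check per identity, with alerts on rotation failures, is the lifecycle-management loop the paragraph describes.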

AI also strengthens threat intelligence capabilities by identifying patterns and predicting potential attacks before they occur. This predictive capacity helps close security gaps, particularly between development, operations, and security teams.
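
One simple form of the pattern-spotting described here is statistical anomaly detection over authentication telemetry. A hedged sketch, with an illustrative z-score threshold and made-up daily counts:

```python
import statistics

def anomalous(history, latest, threshold=3.0):
    """Flag a value that deviates sharply from the recent baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

baseline = [101, 98, 105, 97, 103, 99, 102]   # failed logins per day
print(anomalous(baseline, 104))   # False: within normal variation
print(anomalous(baseline, 450))   # True: likely credential-stuffing burst
```

Production systems use far richer models, but the principle is the same: learn what normal looks like, then surface deviations before they become incidents.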

Across sectors such as finance, healthcare, and technology, AI-driven cybersecurity solutions support compliance and data protection requirements. These systems contribute to building resilient infrastructures capable of adapting to increasingly sophisticated cyber threats.

Finally, combining AI capabilities with structured identity management creates a foundation for long-term cybersecurity resilience. Organisations adopting this approach can improve incident response, enhance adaptability, and secure future digital operations.

NVIDIA drives a new era of industrial AI cybersecurity

AI-driven defences are moving deeper into operational technology as NVIDIA leads a shift toward embedded cybersecurity across critical infrastructure.

The company is partnering with firms such as Akamai Technologies, Forescout, Palo Alto Networks, Siemens and Xage Security to protect energy, manufacturing and transport systems that increasingly operate through cloud-linked environments.

Modernisation has expanded capabilities across these sectors, yet it has widened the gap between evolving threats and ageing industrial defences.

Zero-trust adoption in operational environments is gaining momentum as Forescout and NVIDIA develop real-time verification models tailored to legacy devices and safety-critical processes.

Security workloads run on NVIDIA BlueField hardware to keep protection isolated from industrial systems and avoid any interference with essential operations. That approach enables more precise control over lateral movement across networks without disrupting performance.

Industrial automation is also adapting through Siemens and Palo Alto Networks, which are moving security enforcement closer to workloads at the edge. AI-enabled inspection via BlueField enhances visibility in highly time-sensitive environments, improving reliability and uptime.

Akamai and Xage are extending similar models to energy infrastructure and large-scale operational networks, embedding segmentation and identity-based controls where resilience is most critical.

A coordinated architecture is now emerging in which edge-generated operational data feeds central AI analysis, while enforcement remains local to maintain continuity.

The result is a security model designed to meet the pressures of cyber-physical systems, enabling operators to detect threats faster, reinforce operational stability and protect infrastructure that supports global AI expansion.

Gabon imposes indefinite social media shutdown over national security concerns

Gabon’s media regulator, the High Authority for Communication (HAC), has announced a nationwide open-ended suspension of social media, citing online content that it says is fuelling tensions and undermining social cohesion. In a statement, the HAC framed the move as a response to material it described as defamatory or hateful and, in some cases, a threat to national security, telling telecom operators and internet service providers to block access to major platforms.

The regulator pointed to what it called a rise in coordinated cyberbullying and the unauthorised sharing of personal data, saying existing moderation measures were not working and that the shutdown was necessary to stop violations of Gabon’s 2016 Communications Code.

The announcement arrives amid mounting labour pressure. Teachers began a high-profile strike in December 2025 over pay, status and working conditions, and the dispute has become one of the most visible signs of broader public-sector discontent. At the same time, the economic stakes are significant: Gabon had an estimated 850,000 active social media users in late 2025 (around a third of the population), and platforms are widely used for marketing and small-business sales.

Why does it matter?

Governments increasingly treat social media suspensions as a rapid-response tool for ‘public order’, but they also reshape information access, civic debate and commerce, especially in countries where mobile apps are a primary channel for news and income. The current announcement comes at a politically sensitive moment, since Gabon has a precedent here: during the 2023 election period, authorities shut down internet access, citing the need to counter calls for violence and misinformation. Gabon is still in transition after the August 2023 coup, and President Brice Oligui Nguema, who led the takeover, won the subsequent presidential election by a landslide in 2025, consolidating power while facing rising expectations for reform and stability.

Germany drafts reforms expanding offensive cyber powers

Politico reports that Germany is preparing legislative reforms that would expand the legal framework for conducting offensive cyber operations abroad and strengthen authorities to counter hybrid threats.

According to the Interior Ministry, two draft laws are under preparation:

  • One would revise the mandate of Germany’s foreign intelligence service to allow cyber operations outside national territory.
  • A second would grant security services expanded powers to fight back against hybrid threats and to conduct what the government describes as active cyber defence.

The discussion in Germany coincides with broader European debates on offensive cyber capabilities. In particular, the Netherlands has incorporated offensive cyber elements into its national strategies.

The reforms in Germany remain in draft form and may face procedural and constitutional scrutiny. Adjustments to intelligence mandates could require amendments supported by a two-thirds majority in both the Bundestag and Bundesrat.

The proposed framework for ‘active cyber defence’ would focus on preventing or mitigating serious threats. Reporting by Tagesschau indicates that draft provisions may allow operational follow-up measures in ‘special national situations’, particularly where timely police or military assistance is not feasible.

Opposition lawmakers have raised questions regarding legal clarity, implementation mechanisms, and safeguards. Expanding offensive cyber authorities raises longstanding policy questions, including challenges of attribution to identify responsible actors; risks of escalation or diplomatic repercussions; oversight and accountability mechanisms; and compatibility with international law and norms of responsible state behaviour.

The legislative process is expected to continue through the year, with further debate anticipated in parliament.

Shadow AI becomes a new governance challenge for European organisations

Employees are adopting generative tools at work faster than organisations can approve or secure them, giving rise to what is increasingly described as ‘shadow AI’. Unlike earlier forms of shadow IT, these tools can transform data, infer sensitive insights, and trigger automated actions beyond established controls.

For European organisations, the issue is no longer whether AI should be used, but how to regain visibility and control without undermining productivity, as shadow AI increasingly appears inside approved platforms, browser extensions, and developer tools, expanding risks beyond data leakage.

Security experts warn that blanket bans often push AI use further underground, reducing transparency and trust. Instead, guidance from EU cybersecurity bodies increasingly promotes responsible enablement through clear policies, staff awareness, and targeted technical controls.

Key mitigation measures include mapping AI use across approved and informal tools, defining safe prompt data, and offering sanctioned alternatives, with logging, least-privilege access, and approval steps becoming essential as AI acts across workflows.
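
The controls listed above can be combined into a single gateway check. A minimal sketch, in which the sanctioned tool list, the sensitive-data patterns, and the logger name are all illustrative assumptions rather than any EU-mandated configuration:

```python
import logging
import re

SANCTIONED_TOOLS = {"internal-assistant"}            # assumed allowlist
SENSITIVE = re.compile(r"\b(?:IBAN|passport|api[_-]?key)\b", re.IGNORECASE)

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

def allow_prompt(tool, prompt):
    """Gate an outbound AI prompt: sanctioned tool only, no obvious secrets."""
    if tool not in SANCTIONED_TOOLS:
        log.warning("blocked: unsanctioned tool %s", tool)
        return False
    if SENSITIVE.search(prompt):
        log.warning("blocked: sensitive data in prompt")
        return False
    log.info("allowed prompt to %s", tool)           # audit trail
    return True

print(allow_prompt("internal-assistant", "Summarise this press release"))
print(allow_prompt("random-chatbot", "Hello"))
```

The point of the sketch is the shape of the control, not the patterns: logging every decision gives the visibility the paragraph calls for, while the allowlist makes the sanctioned alternative the path of least resistance.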

With the EU AI Act introducing clearer accountability across the AI value chain, unmanaged shadow AI is also emerging as a compliance risk. As AI becomes embedded across enterprise software, organisations face growing pressure to make safe use the default rather than the exception.

Japan and the United Kingdom expand cybersecurity cooperation

Japan and the United Kingdom have formalised a Strategic Cyber Partnership focused on strengthening cooperation in cybersecurity, including information sharing, defensive capabilities, and resilience of critical infrastructure. In related high-level discussions between the two leaders, Japan and the UK also agreed on the need to work with like-minded partners to address vulnerabilities in critical mineral supply chains.

The Strategic Cyber Partnership outlines three core areas of cooperation:

  • sharing cyber threat intelligence and enhancing cyber capabilities;
  • supporting whole-of-society resilience through best practices on infrastructure and supply chain protection and alignment on regulatory and standards issues;
  • collaborating on workforce development and emerging cyber technologies.

The agreement is governed through a joint Cyber Dialogue mechanism and is non-binding in nature.

Separately, at a summit meeting in Tokyo, the leaders noted the importance of strengthening supply chains for minerals identified as critical for modern industry and technology, and agreed to coordinate efforts with other partners on this issue.

AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.
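
Two-factor authentication of the kind recommended here is commonly built on time-based one-time passwords (RFC 6238). A minimal sketch using only the Python standard library, checked against the RFC's published test vector; real secrets come from enrolment, not a hardcoded string:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59s.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, at=59))  # 287082
```

Because the code depends only on a shared secret and the clock, an attacker who steals a password still cannot log in without the current six-digit value.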

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.

Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft occurred when users deleted chats or logged out, which triggered the transmission of the stolen keys to a hardcoded Telegram bot.
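
Defenders can hunt for this kind of exfiltration channel by scanning extension source for hardcoded Telegram bot tokens, which follow Telegram's published digits-colon-key format. A hedged sketch; the sample source text is an illustrative assumption, not code from the actual extension:

```python
import re

# Telegram bot tokens look like "<8-10 digits>:<35 URL-safe chars>".
BOT_TOKEN = re.compile(r"\d{8,10}:[A-Za-z0-9_-]{35}")

def find_bot_tokens(source):
    """Return any strings in the source that match the bot-token format."""
    return BOT_TOKEN.findall(source)

token = "1234567890:" + "A" * 35                     # synthetic example token
sample = f'fetch("https://api.telegram.org/bot{token}/sendMessage")'
print(find_bot_tokens(sample))                       # one match: the token
```

A static check like this would not catch tokens that are obfuscated or fetched at runtime, but it is a cheap first pass over an extension corpus.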

At least 459 unique API keys were exfiltrated to a Telegram channel, months before the activity was discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating what researchers describe as a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.
