UK firms prioritise cyber resilience and AI growth

Cybersecurity is set to receive the largest budget increases over the next 12 months, as organisations respond to rising geopolitical tensions and a surge in high-profile cyber-attacks, according to the KPMG Global Tech Report 2026.

More than half of UK firms plan to lift cybersecurity spending by over 10 percent, outpacing global averages and reflecting heightened concern over digital resilience.

AI and data analytics are also attracting substantial investment, with most organisations increasing budgets as they anticipate stronger returns by the end of 2026. Executives expect AI to shift from an efficiency tool to a core revenue driver, signalling a move toward large-scale deployment.

Despite strong investment momentum, scaling remains a major challenge. Fewer than one in 10 organisations report fully deployed AI or cybersecurity systems today, although around half expect to reach that stage within a year.

Structural barriers, fragmented ownership, and unclear accountability continue to slow execution, highlighting the complexity of translating strategy into operational impact.

Agentic AI is emerging as a central focus, with most organisations already embedding autonomous systems into workflows. Demand for specialist AI roles is rising, alongside closer collaboration to ensure secure deployment, governance, and continuous monitoring.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Face scans replace fingerprints in new motorcycle clearance trial at Singapore land border

The Immigration and Checkpoints Authority (ICA) has launched a facial recognition trial for motorcyclists entering Singapore via Woodlands Checkpoint, aiming to speed up cross-border clearance and improve convenience while maintaining security.

The pilot, launched on 26 January 2026, operates in two designated motorcycle lanes in the arrival zone and allows riders to use contactless facial scans rather than traditional fingerprint scans to verify identity.

Eligible users include Singapore residents, long-term pass holders and foreign visitors who have previously entered the country; no prior registration is needed to take part.

Riders simply scan their passport or MyICA QR code, lift their visor, and remove any obstructions (like sunglasses or masks) before looking into the facial recognition camera. ICA officers are on standby to assist and collect feedback to refine the system.

The initiative is part of ICA’s broader use of biometric technologies, including QR code clearance and iris/facial biometrics, to make immigration more efficient and contactless at Singapore’s land checkpoints.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nova ransomware claims breach of KPMG Netherlands

KPMG Netherlands has allegedly become the latest target of the Nova ransomware group, following claims that sensitive data was accessed and exfiltrated.

The incident was reported by ransomware monitoring services on 23 January 2026, with attackers claiming the breach occurred on the same day.

Nova has reportedly issued a ten-day deadline for contact and ransom negotiations, a tactic commonly used by ransomware groups to pressure large organisations.

The group has established a reputation for targeting professional services firms and financial sector entities that manage high-value and confidential client information.

Threat intelligence sources indicate that Nova operates a distributed command and control infrastructure across the Tor network, alongside multiple leak platforms used to publish stolen data. Analysis suggests a standardised backend deployment, pointing to a mature and organised ransomware operation.

KPMG has not publicly confirmed the alleged breach at the time of writing. Clients and stakeholders are advised to follow official communications for clarity on potential exposure, response measures and remediation steps as investigations continue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ten cybersecurity predictions for 2026 from experts: How AI will reshape cyber risks

Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental use to routine deployment in malicious cyber operations in 2026. Rather than introducing entirely new threats, AI is expected to accelerate existing attack techniques, reduce operational costs for attackers, and increase the scale and persistence of campaigns.

Security researchers and industry analysts point to ten areas where AI is most likely to reshape the cyber threat landscape over the coming year:

  1. AI-enabled malware is expected to adapt during execution. Threat intelligence reporting indicates that malware using AI models is already capable of modifying behaviour in real time. In 2026, such capabilities are expected to become more common, allowing malicious code to adjust tactics in response to defensive measures.
  2. AI agents are likely to automate key stages of cyberattacks. Researchers expect wider use of agentic AI systems that can independently conduct reconnaissance, exploit vulnerabilities, and maintain persistence, reducing the need for continuous human control.
  3. Prompt injection will be treated as a practical attack technique against AI deployments. As organisations embed AI assistants and agents into workflows, attackers are expected to target the AI layer itself (e.g. through prompt injection, unsafe tool use, and weak guardrails) to trigger unintended actions or expose data.
  4. Threat actors will use AI to target humans at scale. The text emphasises AI-enhanced social engineering: conversational bots, real-time manipulation, and automated account takeover, shifting attacks from isolated human-led attempts to continuous, scalable interaction.
  5. AI will expose APIs as a too-easily-exploited attack surface. The experts argue that AI agents capable of discovering and interacting with software interfaces will lower the barrier to abusing APIs, including undocumented or unintended ones. As agents gain broader permissions and access to cloud services, APIs are expected to become a more frequent point of exploitation and concealment.
  6. Extortion will evolve beyond ransomware encryption. Extortion campaigns are expected to rely less on encryption alone and more on a combination of tactics, including data theft, threats to leak or alter information, and disruption of cloud services, backups, and supply chains.
  7. Cyber incidents will increasingly spread from IT into industrial operations. Ransomware and related intrusions are expected to move beyond enterprise IT systems and disrupt operational technology and industrial control environments, amplifying downtime, supply-chain disruption, and operational impact.
  8. The insider threat will increasingly include imposter employees. Analysts anticipate insider risks will extend beyond malicious or negligent staff to include external actors who gain physical or remote access by posing as legitimate employees, including through hardware implants or direct device access that bypasses end point security.
  9. Nation-state cyber activity will continue to target Western governments and industries. Experts point to continued cyber operations by state-linked actors, including financially motivated campaigns and influence operations, with increased use of social engineering, deception techniques, and AI-enabled tools to scale and refine targeting.
  10. Identity management is expected to remain a primary failure point. The rapid growth of human and machine identities, including AI agents, across SaaS, cloud platforms and third-party environments is likely to reinforce credential misuse as a leading cause of major breaches.

Taken together, these trends suggest that in 2026, cyber risk will increasingly reflect systemic exposure created by the combination of AI adoption, identity sprawl, and interconnected digital infrastructure, rather than isolated technical failures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU classifies WhatsApp as Very Large Online Platform

WhatsApp has been formally designated a Very Large Online Platform under the EU Digital Services Act, triggering the bloc’s most stringent digital oversight regime.

The classification follows confirmation that the messaging service has exceeded 51 million monthly users in the EU, triggering enhanced regulatory scrutiny.

As a VLOP, WhatsApp must take active steps to limit the spread of disinformation and reduce risks linked to the manipulation of public debate. The platform is also expected to strengthen safeguards for users’ mental health, with particular attention placed on the protection of minors and younger audiences.

The European Commission will oversee compliance directly and may impose financial penalties of up to 6 percent of WhatsApp’s global annual turnover if violations are identified. The company has until mid-May to align its systems, policies and risk assessments with the DSA’s requirements.

WhatsApp joins a growing list of major platforms already subject to similar obligations, including Facebook, Instagram, YouTube and X. The move reflects the Commission’s broader effort to apply the Digital Services Act across social media, messaging services and content platforms linked to systemic online risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France proposes EU tools to map foreign tech dependence

France has unveiled a new push to reduce Europe’s dependence on US and Chinese technology suppliers, placing digital sovereignty back at the centre of the EU policy debates.

Speaking in Paris, France’s minister for AI and digital affairs, Anne Le Hénanff, presented initiatives to expose and address the structural reliance on non-EU technologies across public administrations and private companies.

Central to the strategy is the creation of a Digital Sovereignty Observatory, which will map foreign technology dependencies and assess organisational exposure to geopolitical and supply-chain risks.

The body, led by former Europe minister Clément Beaune, is intended to provide the evidence base needed for coordinated action rather than symbolic declarations of autonomy.

France is also advancing a Digital Resilience Index, expected to publish its first findings in early 2026. The index will measure reliance on foreign digital services and products, identifying vulnerabilities linked to cloud infrastructure, AI, cybersecurity and emerging technologies.

Industry data suggests Europe’s dependence on external tech providers costs the continent hundreds of billions of euros annually.

Paris is using the initiative to renew calls for a European preference in public-sector digital procurement and for a standard EU definition of European digital services.

Such proposals remain contentious among member states, yet France argues they are essential for restoring strategic control over critical digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok outages spark fears over data control and censorship in the US

Widespread TikTok disruptions affected users across the US as snowstorms triggered power outages and technical failures, with reports of malfunctioning algorithms and missing content features.

Problems persisted for some users beyond the initial incident, adding to uncertainty surrounding the platform’s stability.

The outage coincided with the creation of a new US-based TikTok joint venture following government concerns over potential Chinese access to user data. TikTok stated that a power failure at a domestic data centre caused the disruption, rather than ownership restructuring or policy changes.

Suspicion grew among users due to overlapping political events, including large-scale protests in Minneapolis and reports of difficulties searching for related content. Fears of censorship spread online, although TikTok attributed all disruptions to infrastructure failure.

The incident also resurfaced concerns over TikTok’s privacy policy, which outlines the collection of sensitive personal data. While some disclosures predated the ownership deal, the timing reinforced broader anxieties over social media surveillance during periods of political tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France’s National Assembly backs under-15 social media ban

France’s National Assembly has backed a bill that would bar children under 15 from accessing social media, citing rising concern over cyberbullying and mental-health harms. MPs approved the text late Monday by 116 votes to 23, sending it next to the Senate before it returns to the lower house for a final vote.

As drafted, the proposal would cover both standalone social networks and ‘social networking’ features embedded inside wider platforms, and it would rely on age checks that comply with the EU rules. The same package also extends France’s existing smartphone restrictions in schools to include high schools, and lawmakers have discussed additional guardrails, such as limits on practices deemed harmful to minors (including advertising and recommendation systems).

President Emmanuel Macron has urged lawmakers to move quickly, arguing that platforms are not neutral spaces for adolescents and linking social media to broader concerns about youth violence and well-being. Support for stricter limits is broad across parties, and polling has pointed in the same direction, but the bill still faces the practical question of how reliably platforms can keep underage users out.

Australia set the pace in December 2025, when its world-first ban on under-16s holding accounts on major platforms came into force, an approach now closely watched abroad. Early experience there has highlighted the same tension France faces, between political clarity (‘no accounts under the age line’) and the messy reality of age assurance and workarounds.

France’s debate is also unfolding in a broader European push to tighten child online safety rules. The European Parliament has called for an EU-wide ‘digital minimum age’ of 16 (with parental consent options for 13–16), while the European Commission has issued guidance for platforms and developed a prototype age-verification tool designed to preserve privacy, signalling that Brussels is trying to square protection with data-minimisation.

Why does it matter?

Beyond the child-safety rationale, the move reflects a broader push to curb platform power, with youth protection framed as a test case for stronger state oversight of Big Tech. At the same time, critics warn that strict age-verification regimes can expand online identification and surveillance, raising privacy and rights concerns, and may push teens toward smaller or less regulated spaces rather than offline life.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Google fixes Gmail bug that sent spam into primary inboxes

Gmail experienced widespread email filtering issues on Saturday, sending spam into primary inboxes and mislabelling legitimate messages as suspicious, according to Google’s Workspace status dashboard.

Problems began around 5 a.m. Pacific time, with users reporting disrupted inbox categories, unexpected spam warnings and delays in email delivery. Many said promotional and social emails appeared in primary folders, while trusted senders were flagged as potential threats.

Google acknowledged the malfunction throughout the day, noting ongoing efforts to restore normal service as complaints spread across social media platforms.

By Saturday evening, the company confirmed the issue had been fully resolved for all users, although some misclassified messages and spam warnings may remain visible for emails received before the fix.

Google said it is conducting an internal investigation and will publish a detailed incident analysis to explain what caused the disruption.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Georgia moves to curb AI data centre expansion amid energy concerns

The state of Georgia is emerging as the focal point of a growing backlash against the rapid expansion of data centres powering the US’ AI boom.

Lawmakers in several states are now considering statewide bans, as concerns over energy consumption, water use and local disruption move to the centre of economic and environmental debate.

A bill introduced in Georgia would impose a moratorium on new data centre construction until March next year, giving state and municipal authorities time to establish more explicit regulatory rules.

The proposal arrives after Georgia’s utility regulator approved plans for an additional 10 gigawatts of electricity generation, primarily driven by data centre demand and expected to rely heavily on fossil fuels.

Local resistance has intensified as the Atlanta metropolitan area led the country in data centre construction last year, prompting multiple municipalities to impose their own temporary bans.

Critics argue that rapid development has pushed up electricity bills, strained water supplies and delivered fewer tax benefits than promised. At the same time, utility companies retain incentives to expand generation rather than improve grid efficiency.

The issue has taken on broader political significance as Georgia prepares for key elections that will affect utility oversight.

Supporters of the moratorium frame the pause as a chance for public scrutiny and democratic accountability, while backers of the industry warn that blanket restrictions risk undermining investment, jobs and long-term technological competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!