UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research, conducted by Apricorn, highlights that nearly half of remote workers have knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and maintaining GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Real-time, on-device security: The only way to stop modern mobile Trojans

Mobile banking faces a serious new threat: AI-powered Trojans operating silently within legitimate apps. These advanced forms of malware go beyond stealing login credentials: they use AI to intercept biometrics, manipulate app flows in real time, and execute fraud without raising alarms.

Today’s AI Trojans adapt on the fly. They bypass signature-based detection and cloud-based threat engines by completing attacks directly on the device before traditional systems can react.

Most current security tools were not designed for this level of sophistication, leaving banks and users exposed.

To counter this, experts advocate for AI-native security built directly into mobile apps: systems that operate on the device itself, monitoring user interactions and app behaviour in real time to detect anomalies and stop fraud before it begins.

As these AI threats grow more common, the message is clear: mobile apps must defend themselves from within. Real-time, on-device protection is now essential to safeguarding users and staying ahead of a rapidly evolving risk.

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding the data on which AI systems depend, rather than allowing those systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
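The hash-and-sign pattern behind these recommendations can be sketched briefly. The code below is illustrative only, not taken from the CSI: it uses an HMAC as a simple stand-in for the asymmetric digital signatures the guidance actually recommends for dataset updates, and the key and sample data are invented for the example.

```python
import hashlib
import hmac

def dataset_digest(data: bytes) -> str:
    # Hash the raw dataset bytes; store this digest alongside
    # the provenance record for the dataset version.
    return hashlib.sha256(data).hexdigest()

def sign_update(key: bytes, digest: str) -> str:
    # Keyed tag over the digest. In production an asymmetric
    # signature (e.g. Ed25519) would be used instead, so that
    # verifiers need no shared secret.
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify_update(key: bytes, digest: str, tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_update(key, digest), tag)

# Hypothetical dataset update:
data = b"label,text\n1,example row\n"
key = b"demo-signing-key"  # illustrative only
digest = dataset_digest(data)
tag = sign_update(key, digest)
```

A verifier recomputes the digest over the received bytes and checks the tag; any tampering (including the poisoning scenarios the CSI describes) changes the digest and fails verification.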

The CSI also urges ongoing risk assessments using frameworks like NIST's AI Risk Management Framework (AI RMF), encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.
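One basic precaution, alongside checking the domain, is comparing a download's SHA-256 checksum against the value published on the official project page before running it. A minimal sketch (the streaming approach keeps memory use constant even for multi-gigabyte model files):

```python
import hashlib

def file_sha256(path: str) -> str:
    # Stream the file in 1 MiB chunks so large downloads
    # do not need to fit in memory at once.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the returned hex string against the checksum listed
# on the official page; any mismatch means the file was altered.
```

A fake installer like AI_Launcher_1.21.exe would fail such a check, since its hash cannot match one published by the legitimate project.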

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Crypto conferences face rising phishing risks

Crypto events have grown rapidly worldwide in recent years. Unfortunately, this expansion has led to an increase in scams targeting attendees, according to Kraken’s chief security officer, Nick Percoco.

Recent conferences have seen lax personal security, with exposed devices and careless sharing of sensitive information. These lapses make it easier for criminals to launch phishing campaigns and impersonation attacks.

Phishing remains the top threat at these events, exploiting typical conference activities such as QR code scanning and networking. Attackers distribute malicious links disguised as legitimate follow-ups, allowing them to gain access to wallets and sensitive data with minimal technical skill.

Using public Wi-Fi, scanning unverified QR codes, and openly discussing high-value trades in public areas further increase the risks. Attendees are urged to use burner wallets and verify every QR code carefully.

The dangers have become very real, highlighted by violent crimes in France, where prominent crypto professionals were targeted in kidnappings and ransom demands. These incidents show that risks are no longer confined to the digital world.

Basic security mistakes such as leaving devices unlocked or oversharing personal information can have severe consequences. Experts call for a stronger security culture at events and beyond, including multi-factor authentication, cautious password management, and heightened situational awareness.

Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against an EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.
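To give a sense of scale, the 0.05% rate works out as follows (the income figure below is hypothetical, purely to illustrate the arithmetic):

```python
# DSA supervisory fee: 0.05% of annual global net income.
DSA_FEE_RATE = 0.0005

def supervisory_fee(net_income_eur: float) -> float:
    return net_income_eur * DSA_FEE_RATE

# A platform with EUR 10 billion in net income would owe EUR 5 million.
print(supervisory_fee(10_000_000_000))  # → 5000000.0
```

The dispute below turns on which income figure this rate is applied to, not on the rate itself.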

Meta questioned the Commission’s methodology, claiming the levy was based on the entire group’s revenue instead of the specific EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. They express concern about AI deepening inequality and harming those already marginalised without strict oversight.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

Guardz doubles down on SMB protection with $56M funding boost

Cybersecurity startup Guardz has secured $56 million in Series B funding to expand its AI-native platform designed for managed service providers (MSPs).

The round was led by ClearSky, with backing from Phoenix Financial, Glilot Capital Partners, SentinelOne, Hanaco Ventures, and others, bringing the company’s total funding to $84 million in just over two years.

Since emerging from stealth in early 2023, Guardz has built a global presence, partnering with hundreds of MSPs to secure thousands of small and mid-sized businesses.

With the new capital, the company aims to accelerate go-to-market efforts and enhance its platform with more automation, compliance tools, and cyber insurance capabilities.

The Guardz platform integrates threat protection across identities, email, endpoints, cloud, and data into a single engine. Combining AI-driven automation with human-led Managed Detection and Response (MDR), it provides 24/7 monitoring and rapid response to threats.

Seamless integrations with Microsoft 365 and Google Workspace allow MSPs to pre-emptively detect suspicious activity and respond in real time.

‘Our goal is to empower MSPs with enterprise-grade security tools to protect the global economy’s most vulnerable targets — small and mid-sized businesses,’ said Guardz CEO and co-founder Dor Eisner. ‘This funding allows us to further that mission and help businesses thrive in a secure environment.’
