UK remote work still a major data security risk

A new survey reveals that 69% of UK companies reported data breaches to the Information Commissioner’s Office (ICO) over the past year, a steep rise from 53% in 2024.

The research, conducted by Apricorn, highlights that nearly half of remote workers have knowingly compromised data security.

Based on responses from 200 UK IT security leaders, the study found that phishing remains the leading cause of breaches, followed by human error. Despite widespread remote work policies, 58% of organisations believe staff lack the proper tools or skills to protect sensitive data.

The use of personal devices for work has climbed to 56%, while only 19% of firms now mandate company-issued hardware. These trends raise ongoing concerns about endpoint security, data visibility, and maintaining GDPR compliance in hybrid work environments.

Technical support gaps and unclear encryption practices remain pressing issues, with nearly half of respondents finding it increasingly difficult to manage remote work technology. Apricorn’s Jon Fielding called for a stronger link between written policy and practical security measures to reduce breaches.

Real-time, on-device security: The only way to stop modern mobile Trojans

Mobile banking faces a serious new threat: AI-powered Trojans operating silently within legitimate apps. These advanced forms of malware do more than steal login credentials; they use AI to intercept biometrics, manipulate app flows in real time, and execute fraud without raising alarms.

Today’s AI Trojans adapt on the fly. They bypass signature-based detection and cloud-based threat engines by completing attacks directly on the device before traditional systems can react.

Most current security tools weren’t designed for this level of sophistication, leaving banks and users exposed.

To counter this, experts advocate AI-native security built directly into mobile apps: systems that operate on the device itself, monitoring user interactions and app behaviour in real time to detect anomalies and stop fraud before it begins.
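
As a rough illustration of that on-device approach (a toy sketch, not any vendor’s actual detection engine), the Python snippet below scores a session of in-app interaction timings against a per-user baseline and flags sessions that deviate sharply; the event fields and threshold are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class InteractionEvent:
    # Hypothetical on-device telemetry: time between taps and key dwell time (ms)
    inter_tap_ms: float
    dwell_ms: float

class SessionAnomalyScorer:
    """Toy per-user baseline: flag sessions whose timing deviates strongly from history."""

    def __init__(self, z_threshold: float = 3.0):
        self.z_threshold = z_threshold
        self.history: list[InteractionEvent] = []

    def update_baseline(self, events: list[InteractionEvent]) -> None:
        # Called on sessions already trusted as the legitimate user's behaviour.
        self.history.extend(events)

    def _zscore(self, value: float, samples: list[float]) -> float:
        if len(samples) < 2 or stdev(samples) == 0:
            return 0.0
        return abs(value - mean(samples)) / stdev(samples)

    def is_suspicious(self, session: list[InteractionEvent]) -> bool:
        taps = [e.inter_tap_ms for e in self.history]
        dwells = [e.dwell_ms for e in self.history]
        session_tap = mean(e.inter_tap_ms for e in session)
        session_dwell = mean(e.dwell_ms for e in session)
        return (self._zscore(session_tap, taps) > self.z_threshold
                or self._zscore(session_dwell, dwells) > self.z_threshold)

# Usage: learn the legitimate user's rhythm, then score a new session on-device.
scorer = SessionAnomalyScorer()
scorer.update_baseline([InteractionEvent(180, 95), InteractionEvent(200, 110), InteractionEvent(190, 100)])
print(scorer.is_suspicious([InteractionEvent(20, 5), InteractionEvent(15, 4)]))  # True: far too fast, likely automated
```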

As these AI threats grow more common, the message is clear: mobile apps must defend themselves from within. Real-time, on-device protection is now essential to safeguarding users and staying ahead of a rapidly evolving risk.

Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.

Nvidia’s Huang: ‘The new programming language is human’

Speaking at London Tech Week, Nvidia CEO Jensen Huang called AI ‘the great equaliser,’ explaining how AI has transformed who can access and control computing power.

In the past, computing was limited to a select few with technical skills in languages like C++ or Python. ‘We had to learn programming languages. We had to architect it. We had to design these computers that are very complicated,’ Huang said.

That’s no longer necessary, he explained. ‘Now, all of a sudden, there’s a new programming language. This new programming language is called “human”,’ Huang said, highlighting how AI now understands natural language commands. ‘Most people don’t know C++, very few people know Python, and everybody, as you know, knows human.’

He illustrated his point with an example: asking an AI to write a poem in the style of Shakespeare. The AI delivers, he said—and if you ask it to improve, it will reflect and try again, just like a human collaborator.
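
A minimal sketch of that kind of interaction, using the OpenAI Python client purely as an illustrative interface (the model name and prompts are placeholders, not anything from Huang’s talk):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "user",
            "content": "Write a short poem in the style of Shakespeare about London Tech Week."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
poem = first.choices[0].message.content

# 'Programming in human': the follow-up instruction is plain natural language.
history += [{"role": "assistant", "content": poem},
            {"role": "user", "content": "Good, but make the imagery more vivid and keep it to eight lines."}]
revised = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(revised.choices[0].message.content)
```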

For Huang, this shift is not just technical but transformational. It makes the power of advanced computing accessible to billions, not just a trained few.

Meta bets big on AI, partners with Scale AI in strategic move

Meta Platforms has made a major move in the AI space by investing $14.8 billion in Scale AI, acquiring a 49% stake and pushing the data-labelling startup’s valuation past $29 billion.

As part of the deal, Scale AI founder Alexandr Wang will join Meta’s leadership to head its new superintelligence unit, while continuing to serve on Scale AI’s board. The investment deepens Meta’s commercial ties with Scale and is seen as a strategic step to secure top-tier AI expertise.

Scale AI will use the funds to drive innovation and strengthen client partnerships, while also providing partial liquidity to shareholders and equity holders. Jason Droege, Scale’s Chief Strategy Officer and former Uber Eats executive, will serve as interim CEO.

‘This partnership is a testament to our team’s work and the scale of opportunity ahead,’ said Droege. Wang added, ‘Meta’s investment affirms the limitless path forward for AI and Scale’s role in bridging human values with transformative technologies.’

Scale will remain independent, continuing to support AI labs, corporations, and government agencies with data infrastructure as the race for AI dominance intensifies.

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams now rank it as a priority, down from 33% in 2024 to 24% in 2025. That decline comes despite a sharp increase in data breaches, which rose from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

AI companions are becoming emotional lifelines

Researchers at Waseda University found that three in four users turn to AI for emotional advice, reflecting growing psychological attachment to chatbot companions. Their new tool, the Experiences in Human-AI Relationships Scale, reveals that many users see AI as a steady presence in their lives.

Two patterns of attachment emerged: anxiety, where users fear being emotionally let down by AI, and avoidance, marked by discomfort with emotional closeness. These patterns closely resemble human relationship styles, despite AI’s inability to reciprocate or abandon its users.

Lead researcher Fan Yang warned that emotionally vulnerable individuals could be exploited by platforms encouraging overuse or financial spending. Sudden disruptions in service, he noted, might even trigger feelings akin to grief or separation anxiety.

The study, based on Chinese participants, suggests AI systems might shape user behaviour depending on design and cultural context. Further research is planned to explore links between AI use and long-term well-being, social function, and emotional regulation.

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
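
As a hedged, minimal sketch of two of those recommendations, hash-based integrity checks and digital signatures for dataset updates, the snippet below signs a SHA-256 manifest of training files with an Ed25519 key; the file paths and key handling are simplified placeholders, not part of the CSI.

```python
import hashlib
import json
from pathlib import Path

# Requires the 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def dataset_manifest(files: list[Path]) -> bytes:
    """Record a SHA-256 digest per file so later tampering is detectable."""
    digests = {f.name: hashlib.sha256(f.read_bytes()).hexdigest() for f in files}
    return json.dumps(digests, sort_keys=True).encode()

# Producer side: hash the training files and sign the manifest.
private_key = Ed25519PrivateKey.generate()   # in practice, a managed or HSM-backed key
public_key = private_key.public_key()
manifest = dataset_manifest(list(Path("training_data").glob("*.csv")))  # hypothetical path
signature = private_key.sign(manifest)

# Consumer side: recompute the manifest and verify before training on the data.
try:
    public_key.verify(signature, manifest)
    print("Dataset manifest verified; provenance intact.")
except InvalidSignature:
    print("Manifest does not match signature; do not train on this data.")
```

In practice the public key and signed manifest would be distributed alongside the dataset so every downstream consumer can re-verify provenance before each update is ingested.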

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down, but the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require a technical setup involving multiple configuration steps; they are not distributed as a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.
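
For users who do download installers or model weights, one simple safeguard (a sketch, assuming the project publishes checksums over HTTPS) is to compare the file’s SHA-256 hash against the officially published value; the file name and expected digest below are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model downloads don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: use the real checksum published by the project itself.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
downloaded = Path("AI_Launcher_1.21.exe")

if downloaded.exists():
    if sha256_of(downloaded) == expected:
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch: treat the file as untrusted and delete it.")
```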

Meta’s V-JEPA 2 teaches AI to think, plan, and act in 3D space

Meta has released V-JEPA 2, an open-source AI model designed to understand and predict real-world environments in 3D. Described as a ‘world model’, it enables machines to simulate physical spaces, offering a breakthrough for robotics, self-driving cars, and intelligent assistants.

Unlike traditional AI models that rely on labelled data, V-JEPA 2 learns from unlabelled video clips, building an internal simulation of how the world works. As a result, AI can reason, plan, and act more like humans.

Based on Meta’s JEPA architecture and containing 1.2 billion parameters, the model improves significantly on action prediction and environmental modelling compared to its predecessor.

Meta says this approach mirrors how humans intuitively understand cause and effect—like predicting a ball’s motion or avoiding people in a crowd. V-JEPA 2 helps AI agents develop this same intuition, making them more adaptive in dynamic, unfamiliar situations.
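
As a loose, toy-scale sketch of the joint-embedding predictive idea behind JEPA-style training (not Meta’s actual architecture or code), the snippet below trains a small predictor to match a target encoder’s embedding of a future frame, given the embedding of the current frames; all modules, shapes, and the synthetic ‘video’ are illustrative.

```python
import torch
import torch.nn as nn

# Toy encoders: map a flattened 'frame' to an embedding. V-JEPA 2 uses a large
# video transformer; a two-layer MLP stands in here purely for illustration.
def make_encoder(frame_dim: int = 256, embed_dim: int = 64) -> nn.Module:
    return nn.Sequential(nn.Linear(frame_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))

context_encoder = make_encoder()
target_encoder = make_encoder()          # in JEPA-style training, typically an EMA copy
predictor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

optimiser = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

for step in range(100):
    # Unlabelled 'video': random context frames and the frame that follows them.
    context_frames = torch.randn(32, 256)
    future_frames = context_frames + 0.1 * torch.randn(32, 256)  # stand-in for temporal continuity

    with torch.no_grad():                # target embeddings receive no gradient
        target_embedding = target_encoder(future_frames)

    predicted_embedding = predictor(context_encoder(context_frames))
    loss = nn.functional.mse_loss(predicted_embedding, target_embedding)

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print(f"final prediction loss: {loss.item():.4f}")
```

The key property the sketch tries to convey is that the prediction happens in representation space rather than pixel space, which is what lets world models of this kind learn structure from unlabelled video.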

Meta’s Chief AI Scientist Yann LeCun describes world models as ‘abstract digital twins of reality’—vital for machines to understand and predict what comes next. This effort aligns with Meta’s broader push into AI, including a planned $14 billion investment in Scale AI for data labelling.

V-JEPA 2 joins a growing wave of interest in world models. Google DeepMind is building its own called Genie, while AI researcher Fei-Fei Li recently raised $230 million for her startup World Labs, focused on similar goals.

Meta believes V-JEPA 2 brings us closer to machines that can learn, adapt, and operate in the physical world with far greater autonomy and intelligence.
