UK National Cyber Security Centre calls for strategic cybersecurity policy agenda

The United Kingdom’s National Cyber Security Centre (NCSC), part of GCHQ, has called for the adoption of a long-term, strategic policy agenda to address increasing cybersecurity risks. That appeal follows prolonged delays in the introduction of updated cybersecurity legislation by the UK government.

In a blog post co-authored by Ollie Whitehouse, the NCSC’s Chief Technology Officer, and Paul W., its Principal Technical Director, the agency underscored the need for greater political engagement in shaping the country’s cybersecurity landscape. Although the NCSC does not possess policymaking powers, its latest message signals growing concern over the UK’s limited progress in implementing comprehensive cybersecurity reforms.

Whitehouse has previously argued that the current technology market fails to incentivise the development and maintenance of secure digital products. He asserts that while the technical community knows how to build secure systems, commercial pressures and market conditions often favour speed, cost-cutting, and short-term gains over security. That, he notes, is a structural issue that cannot be resolved through voluntary best practices alone and likely requires legislative and regulatory measures.

The UK government has yet to introduce the long-anticipated Cyber Security and Resilience Bill to Parliament. Initially described by the previous government as a step toward modernising the country’s cyber legislation, the bill remains unpublished. Another delayed effort is a Home Office-led consultation on ransomware response policy, which was postponed because of the snap election and is still awaiting an official government response.

The agency’s call mirrors similar debates in the United States, where former Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly advocated for holding software vendors accountable for product security. The Biden administration’s national cybersecurity strategy introduced early steps toward vendor liability, a concept that has gained traction among experts like Whitehouse.

However, the current US administration under President Trump has since rolled back some of these requirements, most notably through a recent executive order eliminating obligations for government contractors to attest to their products’ security.

By contrast, the European Union has advanced several legislative initiatives aimed at strengthening digital security, including the Cyber Resilience Act. Yet, these efforts face challenges of their own, such as reconciling economic priorities with cybersecurity requirements and adapting EU-wide standards to national legal systems.

In its blog post, the NCSC reiterated that the financial and societal burden of cybersecurity failures is currently borne by consumers, governments, insurers, and other downstream actors. The agency argues that addressing these issues requires a reassessment of underlying market dynamics—particularly those that do not reward secure development practices or long-term resilience.

While the NCSC lacks the authority to enforce regulations, its increasingly direct communications reflect a broader shift within parts of the UK’s cybersecurity community toward advocating for more comprehensive policy intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require a technical setup typically involving multiple configuration steps, rather than a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.
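The advice above can be made concrete. Below is a minimal sketch in Python, assuming a hypothetical allowlist of official domains (here just deepseek.com), showing how a downloader or helper script might flag lookalike hosts such as the deepseek-platform[.]com domain used in this campaign:

```python
from urllib.parse import urlparse

# Assumption: a hand-maintained allowlist; in practice this would come
# from the vendor's official documentation, not be hard-coded.
OFFICIAL_DOMAINS = {"deepseek.com"}

def is_official(url: str) -> bool:
    """Return True only if the URL's host is an official domain or one of
    its subdomains. Lookalikes such as 'deepseek-platform.com' fail the
    check because substring matching is never used."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://www.deepseek.com/r1"))            # True
print(is_official("https://deepseek-platform.com/download"))  # False
```

The key design choice is comparing whole domain labels rather than substrings: a naive `"deepseek" in host` test would wrongly accept the malicious lookalike.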

Crypto conferences face rising phishing risks

Crypto events have grown rapidly worldwide in recent years. Unfortunately, this expansion has led to an increase in scams targeting attendees, according to Kraken’s chief security officer, Nick Percoco.

Recent conferences have seen lax personal security, with exposed devices and careless sharing of sensitive information. These lapses make it easier for criminals to launch phishing campaigns and impersonation attacks.

Phishing remains the top threat at these events, exploiting typical conference activities such as QR code scanning and networking. Attackers distribute malicious links disguised as legitimate follow-ups, allowing them to gain access to wallets and sensitive data with minimal technical skill.

Use of public Wi-Fi, unverified QR codes, and openly discussing high-value trades in public areas further increase risks. Attendees are urged to use burner wallets and verify every QR code carefully.

The dangers have become very real, highlighted by violent crimes in France, where prominent crypto professionals were targeted in kidnappings and ransom demands. These incidents show that risks are no longer confined to the digital world.

Basic security mistakes such as leaving devices unlocked or oversharing personal information can have severe consequences. Experts call for a stronger security culture at events and beyond, including multi-factor authentication, cautious password management, and heightened situational awareness.

Hackers target recruiters with fake CVs and malware

A financially driven hacking group known as FIN6 has reversed the usual job scam model by targeting recruiters instead of job seekers. Using realistic LinkedIn and Indeed profiles, the attackers pose as candidates and send malware-laced CVs hosted on reputable cloud platforms.

The group persuades recruiters to manually type resume URLs into their browsers, bypassing email security tools that scan clickable links. These URLs lead to fake portfolio sites hosted on Amazon Web Services that selectively deliver malware only to visitors who appear to be human.

Victims receive a zip file containing a disguised shortcut that installs the more_eggs malware, which is capable of credential theft and remote access.

The JavaScript-based tool, linked to another group known as Venom Spider, uses legitimate Windows utilities to evade detection.

The campaign includes stealthy techniques such as traffic filtering, living-off-the-land binaries, and persistent registry modifications. The domains used mimic real candidates’ names, helping the attackers build trust while running a convincing phishing operation.
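As a defensive illustration of the delivery technique described above, a recruiter-side screening step could flag archives whose members carry executable extensions, such as a Windows shortcut disguised as a CV. The sketch below uses only the Python standard library; the blocklist of extensions is an assumption for illustration, not a complete detection rule:

```python
import io
import zipfile

# Assumption: a minimal blocklist; real mail-gateway policies are broader.
SUSPICIOUS_EXTENSIONS = (".lnk", ".js", ".vbs", ".exe")

def suspicious_members(zip_bytes: bytes) -> list[str]:
    """List archive members whose extension suggests an executable payload
    rather than a document, e.g. a shortcut disguised as a resume."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]

# Build a toy archive in memory to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("Resume_John_Smith.pdf.lnk", b"fake shortcut")
    zf.writestr("cover_letter.txt", b"hello")
print(suspicious_members(buf.getvalue()))  # ['Resume_John_Smith.pdf.lnk']
```

Note that the double extension `.pdf.lnk` is caught because only the final extension is compared, which is exactly what Windows uses to decide how to open the file.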

Cisco to reinvent network security for the AI era

Cisco has introduced a major evolution in security policy management, aiming to help enterprises scale securely without increasing complexity. At the centre of this transformation is Cisco’s Security Cloud Control, a unified policy framework designed to simplify and centralise the enforcement of security policies across a wide range of environments and technologies.

With the introduction of the Mesh Policy Engine, organisations can now define a single, intent-based policy that applies seamlessly across Cisco and third-party firewalls. Cisco is also upgrading its network security infrastructure to support AI-ready environments.

The new Hybrid Mesh Firewall includes the high-performance 6100 Series for data centres and the cost-efficient 200 Series for branch deployments, offering advanced threat inspection and integrated SD-WAN. Enforcement is extended across SD-WAN, smart switches, and ACI fabric, ensuring consistent protection.

Additionally, Cisco has deepened its integration with Splunk to enhance threat detection, investigation, and response (TDIR). Firewall log data feeds into Splunk for advanced analytics, while new SOAR integrations automate key responses like host isolation and policy enforcement.

Combined with telemetry from Cisco’s broader ecosystem, these tools provide faster, more informed threat management.

Together, these advancements position Cisco as a leader in AI-era cybersecurity, offering a unified and intelligent platform that reduces complexity, improves detection and response, and secures emerging technologies like agentic AI. By embedding policy-driven security into the core of enterprise networks, Cisco aims to enable organisations to innovate with AI safely.

Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

India unveils AI incident reporting guidelines for critical infrastructure

India is developing AI incident reporting guidelines for companies, developers, and public institutions to report AI-related issues affecting critical infrastructure sectors such as telecommunications, power, and energy. The government aims to create a centralised database to record and classify incidents like system failures, unexpected results, or harmful impacts caused by AI.

That initiative will help policymakers and stakeholders better understand and manage the risks AI poses to vital services, ensuring transparency and accountability. The proposed guidelines will require detailed reporting of incidents, including the AI application involved, cause, location, affected sector, and severity of harm.
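To illustrate, a structured incident record covering those reporting fields might look like the sketch below. The field names are illustrative assumptions, not taken from the draft guidelines:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIIncidentReport:
    # Hypothetical fields mirroring the categories named in the proposal:
    ai_application: str   # the AI application involved
    cause: str            # cause of the incident
    location: str         # where it occurred
    affected_sector: str  # e.g. telecommunications, power, energy
    severity: str         # severity of harm

# Example record for a fictional telecom incident.
report = AIIncidentReport(
    ai_application="network traffic classifier",
    cause="model drift after retraining",
    location="Mumbai",
    affected_sector="telecommunications",
    severity="moderate",
)
print(asdict(report)["affected_sector"])  # telecommunications
```

A schema of this kind is what makes a centralised database feasible: every report becomes a comparable, classifiable record rather than free-form text.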

The Telecommunications Engineering Centre (TEC) is spearheading the effort, focusing initially on telecom and digital infrastructure, with plans to extend the standard across other sectors and pitch it globally through the International Telecommunication Union. The framework aligns with international initiatives such as the OECD’s AI Incident Monitor and builds on government recommendations to improve oversight while fostering innovation.

Why does it matter?

The draft emphasises learning from incidents rather than penalising reporters, encouraging self-regulation to avoid excessive compliance burdens. The approach complements India’s broader AI safety goals, including the recent launch of the IndiaAI Safety Institute, which works on risk management, ethical frameworks, and detection tools.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

French police detain suspects in crypto ransom case

French police have arrested several suspects in connection with a series of violent kidnappings aimed at cryptocurrency executives and their families. The latest arrests, made on Tuesday, are part of a broader crackdown on what authorities describe as a highly organised extortion ring.

The group is believed to be behind the 1 May abduction of a crypto entrepreneur’s father, who was kidnapped in broad daylight in Paris by men disguised as delivery workers. The kidnappers reportedly severed one of the victim’s fingers to press their cryptocurrency ransom demand before police rescued him days later.

Investigators suspect Badiss Mohamed Amide Bajjou, a 24-year-old dual French-Moroccan national, of orchestrating the attacks. Moroccan police arrested him in Tangier last week, seizing weapons, electronics, and illicit funds.

He is also linked to the January kidnapping of Ledger co-founder David Balland, with French authorities now seeking his extradition.

By the end of May, prosecutors had charged 25 people, mostly under 24, who were recruited online and promised financial rewards. Many were used as operatives in kidnapping attempts, including a failed effort to abduct the family of Paymium CEO Pierre Noizat.

Guardz doubles down on SMB protection with $56M funding boost

Cybersecurity startup Guardz has secured $56 million in Series B funding to expand its AI-native platform designed for managed service providers (MSPs).

The round was led by ClearSky, with backing from Phoenix Financial, Glilot Capital Partners, SentinelOne, Hanaco Ventures, and others, bringing the company’s total funding to $84 million in just over two years.

Since emerging from stealth in early 2023, Guardz has built a global presence, partnering with hundreds of MSPs to secure thousands of small and mid-sized businesses.

With the new capital, the company aims to accelerate go-to-market efforts and enhance its platform with more automation, compliance tools, and cyber insurance capabilities.

The Guardz platform integrates threat protection across identities, email, endpoints, cloud, and data into a single engine. Combining AI-driven automation with human-led Managed Detection and Response (MDR), it provides 24/7 monitoring and rapid response to threats.

Seamless integrations with Microsoft 365 and Google Workspace allow MSPs to pre-emptively detect suspicious activity and respond in real time.

‘Our goal is to empower MSPs with enterprise-grade security tools to protect the global economy’s most vulnerable targets — small and mid-sized businesses,’ said Guardz CEO and co-founder Dor Eisner. ‘This funding allows us to further that mission and help businesses thrive in a secure environment.’
