BT report shows rise in cyber attacks on UK small firms

A BT report has found that 42% of small businesses in the UK suffered a cyberattack in the past year. The study also revealed that 67% of medium-sized firms were targeted, while many lacked basic security measures or staff training.

Phishing was named the most common threat, affecting 85% of UK businesses, while ransomware incidents have more than doubled. BT’s new training programme aims to help SMEs take practical steps to reduce risks, covering topics like AI threats, account takeovers and QR code scams.

Tris Morgan from BT highlighted that SMEs face serious risks from cyber attacks, which could threaten their survival. He stressed that security is a necessary foundation and can be achieved without vast resources.

The report follows wider warnings on AI-enabled cyber threats, with other studies showing that few firms feel prepared for these risks. BT’s training is part of its mission to help businesses grow confidently despite digital dangers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NHS patient death linked to cyber attack delays

A patient has died after delays caused by a major cyberattack on NHS services, King’s College Hospital NHS Foundation Trust has confirmed. The attack, targeting pathology services, resulted in a long wait for blood test results that contributed to the patient’s death.

The June 2024 ransomware attack on Synnovis, a provider of blood test services, also delayed 1,100 cancer treatments and postponed more than 1,000 operations. The Russian group Qilin is believed to have been behind the attack that impacted multiple hospital trusts across London.

Healthcare providers struggled to deliver essential services and resorted to universal O-type blood, triggering a national shortage. Sensitive data stolen during the attack was later published online, adding to the crisis.

Cybersecurity experts warned that the NHS remains vulnerable because of its dependence on a vast network of suppliers. The incident highlights the human cost of cyber attacks, with calls for stronger protections across critical healthcare systems in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish businesses face cybersecurity reality check

Most Irish businesses believe they are well protected from cyberattacks, yet many neglect essential defences. Research from Gallagher shows most firms do not update software regularly or back up data as needed.

The survey of 300 companies found almost two-thirds of Irish firms feel very secure, with another 28 percent feeling quite safe. Despite this, nearly six in ten fail to apply software updates, leaving systems vulnerable to attacks.

Cybersecurity training is provided by just four in ten Irish organisations, even though it is one of the most effective safeguards. Gallagher warns that overconfidence may lead to complacency, putting businesses at risk of disruption and financial loss.

Laura Vickers of Gallagher stressed the importance of basic measures like updates and data backups to prevent serious breaches. With four in ten Irish companies suffering attacks in the past five years, firms are urged to match confidence with action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake video claims Nigeria is sending troops to Israel

A video circulating on TikTok falsely claims that Nigeria has announced the deployment of troops to Israel. Since 17 June, the video has been shared more than 6,100 times and presents a fabricated news segment constructed from artificial intelligence-generated visuals and outdated footage.

No official Nigerian authority has made any such announcement regarding military involvement in the ongoing Middle East crisis.

The video, attributed to a fictitious media outlet called ‘TBC News’, combines visuals of soldiers and aircraft with simulated newsroom graphics. However, no broadcaster by that name exists, and the logo and branding do not correspond to any known or legitimate media source.

Upon closer inspection, several anomalies suggest the use of generative AI. The news presenter’s appearance subtly shifts throughout the segment — with clothing changes, facial inconsistencies, and robotic voiceovers indicating non-authentic production.

Similarly, the footage of military activity lacks credible visual markers. For example, a purported official briefing displays a coat of arms inconsistent with Nigeria’s national emblems and lacks the standard flags and insignia typically present at such events.

While two brief aircraft clips appear authentic — originally filmed during a May airshow in Lagos — the remainder seems digitally altered or artificially generated.

In reality, Nigerian officials have strongly and publicly criticised Israel’s recent military actions in Iran and have not indicated any intent to provide military support to Israel.

The video in question, therefore, significantly distorts Nigeria’s diplomatic position and risks exacerbating tensions during an already sensitive period in international affairs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa. The session, co-organised by UNICRI and ALT Advisory, marked the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus – the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI governance debated at IGF 2025: Global cooperation meets local needs

At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of artificial intelligence governance. The discussion, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela of UNESCO, featured a rich exchange between government officials, private sector leaders, civil society voices, and multilateral organisations.

The session highlighted how AI governance is becoming a crowded yet fragmented space, shaped by overlapping frameworks such as the OECD AI Principles, the EU AI Act, UNESCO’s recommendations on AI ethics, and various national and regional strategies. While these efforts reflect progress, they also pose challenges in terms of coordination, coherence, and inclusivity.

Melinda Claybaugh, Director of Privacy Policy at Meta, noted the abundance of governance initiatives but warned of disagreements over how AI risks should be measured. ‘We’re at an inflection point,’ she said, calling for more balanced conversations that include not just safety concerns but also the benefits and opportunities AI brings. She argued for transparency in risk assessments and suggested that existing regulatory structures could be adapted to new technologies rather than replaced.

In response, Jhalak Kakkar, Executive Director at India’s Centre for Communication Governance, urged caution against what she termed a ‘false dichotomy’ between innovation and regulation. ‘We need to start building governance from the beginning, not after harms appear,’ she stressed, calling for socio-technical impact assessments and meaningful civil society participation. Kakkar advocated for multi-stakeholder governance that moves beyond formality to real influence.

Mlindi Mashologu, Deputy Director-General at South Africa’s Ministry of Communications and Digital Technology, highlighted the importance of context-aware regulation. ‘There is no one-size-fits-all when it comes to AI,’ he said. Mashologu outlined South Africa’s efforts through its G20 presidency to reduce AI-driven inequality via a new policy toolkit, stressing human rights, data justice, and environmental sustainability as core principles. He also called for capacity-building to enable the Global South to shape its own AI future.

Jovan Kurbalija, Executive Director of the Diplo Foundation, brought a philosophical lens to the discussion, questioning the dominance of ‘data’ in governance frameworks. ‘AI is fundamentally about knowledge, not just data,’ he argued. Kurbalija warned against the monopolisation of human knowledge and advocated for stronger safeguards to ensure fair attribution and decentralisation.

The need for transparency, explainability, and inclusive governance remained central themes. Participants explored whether traditional laws—on privacy, competition, and intellectual property—are sufficient or whether new instruments are needed to address AI’s novel challenges.

Audience members added urgency to the discussion. Anna from Mexican digital rights group R3D raised concerns about AI’s environmental toll and extractive infrastructure practices in the Global South. Pilar Rodriguez, youth coordinator for the IGF in Spain, questioned how AI governance could avoid fragmentation while still respecting regional sovereignty.

The session concluded with a call for common-sense, human-centric AI governance. ‘Let’s demystify AI—but still enjoy its magic,’ said Kurbalija, reflecting the spirit of hopeful realism that permeated the discussion. Panellists agreed that while many AI risks remain unclear, global collaboration rooted in human rights, transparency, and local empowerment offers the most promising path forward.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

North Korea-linked hackers deploy fake Zoom malware to steal crypto

North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.

Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.

The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.

When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.

Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.

Security experts warn that remote workers and companies need to be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.

Verifying suspicious meeting invites through an alternative contact method, such as a direct phone call, is a straightforward but vital way to prevent damage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New SparkKitty malware targets crypto wallets

A new Trojan dubbed SparkKitty is stealing sensitive data from mobile phones, potentially giving hackers access to cryptocurrency wallets.

Cybersecurity firm Kaspersky says the malware hides in fake crypto apps, gambling platforms, and TikTok clones, spreading through deceptive app installs.

Once installed, SparkKitty accesses photo galleries and uploads images to a remote server, likely searching for screenshots of wallet seed phrases. Though mainly active in China and Southeast Asia, experts warn it could spread globally.

SparkKitty appears linked to the SparkCat spyware campaign, which also targeted seed phrase images.

The malware targets both iOS and Android, joining other crypto-focused threats such as Noodlophile and LummaC2.

TRM Labs recently reported that nearly 70% of last year’s $2.2 billion in stolen crypto came from infrastructure attacks involving seed phrase theft.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI data risks prompt new global cybersecurity guidance

A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.

Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.

The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing model poisoning and tracking data lineage to ensure integrity.
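
As a rough illustration of what verifying third-party datasets and tracking data lineage can look like in practice, the sketch below checks incoming files against a hash manifest recorded when the data was first acquired, before the files are allowed into a training pipeline. The manifest format, file names, and helper functions are hypothetical examples, not drawn from the joint guidance itself.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(manifest_path: Path) -> bool:
    """Check every file in a lineage manifest against its recorded hash.

    The manifest format here is hypothetical: a JSON list of
    {"file": ..., "sha256": ..., "source": ...} entries captured when the
    dataset was obtained from the third-party supplier.
    """
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for entry in manifest:
        actual = sha256_of(Path(entry["file"]))
        if actual != entry["sha256"]:
            # A mismatch may indicate tampering or silent drift in the data.
            print(f"INTEGRITY FAILURE: {entry['file']} (source: {entry['source']})")
            ok = False
    return ok


if __name__ == "__main__":
    if not verify_dataset(Path("dataset_manifest.json")):
        raise SystemExit("Dataset failed integrity check; do not ingest for training.")
```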

The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.

With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NATO summit overshadowed by cyber threats

NATO’s 76th summit opened in The Hague amid rising tensions in Europe and the Middle East, overshadowed by conflict and cyber threats. Leaders gathered as the war in Ukraine dragged on and Israel’s strikes on Iran further strained global stability.

European NATO members pledged greater defence spending, but divisions with the US over security commitments and strategy persisted. The summit also highlighted concerns about hybrid threats, with cyberespionage and sabotage by Russia-linked groups remaining a pressing issue.

According to European intelligence agencies, Russian cyber operations targeting critical infrastructure and government networks have intensified. NATO leaders face pressure to enhance collective cyber deterrence, with pro-Russian hacktivists expected to exploit summit declarations in future campaigns.

While Europe pushes to reduce reliance on the US security umbrella, uncertainty over Washington’s focus and support continues. Many fear the summit may end without concrete decisions as the alliance grapples with external threats and internal discord.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!