NHS patient death linked to cyber attack delays

A patient has died after delays caused by a major cyberattack on NHS services, King’s College Hospital NHS Foundation Trust has confirmed. The attack, targeting pathology services, resulted in a long wait for blood test results that contributed to the patient’s death.

The June 2024 ransomware attack on Synnovis, a provider of blood test services, also delayed 1,100 cancer treatments and postponed more than 1,000 operations. The Russian group Qilin is believed to have been behind the attack that impacted multiple hospital trusts across London.

Healthcare providers struggled to deliver essential services, resorting to universal O-type blood, which triggered a national shortage. Sensitive data stolen during the attack was later published online, adding to the crisis.

Cybersecurity experts warned that the NHS remains vulnerable because of its dependence on a vast network of suppliers. The incident highlights the human cost of cyber attacks, with calls for stronger protections across critical healthcare systems in the UK.

Irish businesses face cybersecurity reality check

Most Irish businesses believe they are well protected from cyberattacks, yet many neglect essential defences. Research from Gallagher shows most firms do not update software regularly or back up data as needed.

The survey of 300 companies found almost two-thirds of Irish firms feel very secure, with another 28 percent feeling quite safe. Despite this, nearly six in ten fail to apply software updates, leaving systems vulnerable to attacks.

Cybersecurity training is provided by just four in ten Irish organisations, even though it is one of the most effective safeguards. Gallagher warns that overconfidence may lead to complacency, putting businesses at risk of disruption and financial loss.

Laura Vickers of Gallagher stressed the importance of basic measures like updates and data backups to prevent serious breaches. With four in ten Irish companies suffering attacks in the past five years, firms are urged to match confidence with action.

WhatsApp launches AI feature to summarise unread messages

WhatsApp has introduced a new feature using Meta AI to help users manage unread messages more easily. Named ‘Message Summaries’, the tool provides quick overviews of missed messages in individual and group chats, helping users catch up without scrolling through long threads.

The summaries are generated using Meta’s Private Processing technology, which operates inside a Trusted Execution Environment. The secure cloud-based system ensures that neither Meta nor WhatsApp, nor anyone else in the conversation, can access the messages or the AI-generated summaries.

According to WhatsApp, Message Summaries are entirely private. No one else in the chat can see the summary created for you. If anyone attempts to tamper with the secure system, processing stops immediately, or the tampering is exposed through a built-in transparency check.

Meta has designed the system around three principles: secure data handling during processing and transmission, strict enforcement of protections against tampering, and provable transparency to track any breach attempt.
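
The description above can be pictured as a simple attestation-then-process flow. The sketch below is purely illustrative and uses hypothetical function and variable names rather than Meta’s or WhatsApp’s actual APIs: a client refuses to hand messages to the summarisation service unless the remote enclave first proves it is running the expected, unmodified code.

```python
# Hypothetical sketch of the attestation-then-process idea described above.
# EXPECTED_MEASUREMENT, verify_attestation and send_to_enclave are illustrative
# names; they are not real WhatsApp or Meta Private Processing APIs.

EXPECTED_MEASUREMENT = "published-hash-of-the-audited-enclave-build"

def verify_attestation(attestation: dict) -> bool:
    """Accept the enclave only if its code measurement matches the published value."""
    return attestation.get("measurement") == EXPECTED_MEASUREMENT

def summarise_messages(messages: list[str], attestation: dict, send_to_enclave) -> str:
    if not verify_attestation(attestation):
        # Mirrors the second principle: stop immediately rather than talk to
        # a tampered or unknown enclave.
        raise RuntimeError("Enclave attestation failed; refusing to send messages")
    payload = "\n".join(messages)
    # In the real system this exchange is encrypted end to end with the enclave,
    # so the cloud operator cannot read the messages or the summary.
    return send_to_enclave(payload)
```

A public, verifiable log of enclave builds, the ‘provable transparency’ principle, would then let outside auditors confirm that only audited code can produce a valid measurement.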

AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would be able to learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought, while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, although it lost some points on transparency.

ChatGPT followed in second place, earning praise for its clear privacy policy and for offering users tools to limit data use, despite concerns about how training data is handled. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

North Korea-linked hackers deploy fake Zoom malware to steal crypto

North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.

Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.

The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.

When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.

Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.

Security experts warn that remote workers and companies must be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.

Verifying suspicious meeting invites through alternative contact methods, such as a direct phone call, is a simple but vital way to prevent damage.

AI data risks prompt new global cybersecurity guidance

A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.

Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.

The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing model poisoning and tracking data lineage to ensure integrity.
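
As a concrete illustration of the dataset-verification and lineage recommendations, the following minimal sketch checks third-party dataset files against a previously recorded manifest of SHA-256 hashes before they enter a training pipeline. The manifest format and file names are assumptions made for the example, not something specified in the guidance itself.

```python
# Minimal sketch: verify that third-party dataset files still match a
# previously recorded manifest of SHA-256 hashes. The manifest layout
# ({"filename": "hexdigest", ...}) is an assumption for illustration.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(manifest_path: str) -> list[str]:
    """Return the files whose contents no longer match the recorded hashes."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [
        filename
        for filename, expected in manifest.items()
        if sha256_of(Path(filename)) != expected
    ]

if __name__ == "__main__":
    changed = verify_dataset("dataset_manifest.json")
    if changed:
        print("Integrity check failed for:", ", ".join(changed))
    else:
        print("All dataset files match the recorded hashes.")
```

Recording who produced each manifest entry and when, alongside the hash, is one lightweight way to keep the kind of data lineage the guidance asks organisations to track.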

The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.

With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.

Salt Typhoon exploits critical Cisco flaw to breach Canadian network

Canadian and US authorities have attributed a cyberattack on a Canadian telecommunications provider to state-sponsored actors allegedly linked to China. The attack exploited a critical vulnerability that had been patched 16 months earlier.

According to a statement issued on Monday by Canada’s Communications Security Establishment (CSE), the breach is attributed to a threat group known as Salt Typhoon, believed to be operating on behalf of the Chinese government.

‘The Cyber Centre is aware of malicious cyber activities currently targeting Canadian telecommunications companies,’ the CSE stated, adding that Salt Typhoon was ‘almost certainly’ responsible. The US FBI released a similar advisory.

Salt Typhoon is one of several threat actors associated with the People’s Republic of China (PRC), with a history of conducting cyber operations against telecommunications and infrastructure targets globally.

In late 2023, security researchers disclosed that over 10,000 Cisco devices had been compromised by exploiting CVE-2023-20198—a vulnerability rated 10/10 in severity.

The exploit targeted Cisco devices running IOS XE software with HTTP or HTTPS services enabled. Despite Cisco releasing a patch in October 2023, the vulnerability remained unaddressed in some systems.

In mid-February 2025, three network devices operated by an unnamed Canadian telecom company were compromised, with attackers retrieving configuration files and modifying at least one to create a GRE tunnel—allowing network traffic to be captured.
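
For defenders, the combination described above, an exposed web UI plus quietly added tunnel interfaces, suggests two simple things to look for in exported device configurations. The sketch below is a minimal, assumption-laden example that scans saved IOS XE running-config text files for the HTTP/HTTPS server commands behind the exposure and for tunnel interfaces worth reviewing; exact command syntax varies by release, and this is no substitute for Cisco’s own detection guidance.

```python
# Minimal sketch: scan exported IOS XE running-config text files for the
# web UI being enabled (the exposure behind CVE-2023-20198) and for tunnel
# interfaces that warrant review. File paths and keywords are assumptions.

import sys
from pathlib import Path

RISKY_LINES = ("ip http server", "ip http secure-server")

def review_config(path: Path) -> None:
    lines = [line.strip() for line in path.read_text(errors="replace").splitlines()]
    exposed = [l for l in lines if l in RISKY_LINES]
    tunnels = [l for l in lines if l.lower().startswith("interface tunnel")]
    if exposed:
        print(f"{path.name}: web UI enabled -> {', '.join(exposed)}")
    if tunnels:
        print(f"{path.name}: review tunnel interfaces -> {', '.join(tunnels)}")

if __name__ == "__main__":
    for config_file in sys.argv[1:]:
        review_config(Path(config_file))
```

Run against a directory of exported configs, unexpected hits in either list are a prompt for closer inspection rather than proof of compromise.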

Cisco has also linked Salt Typhoon to a broader campaign using multiple patched vulnerabilities, including CVE-2018-0171, CVE-2023-20273, and CVE-2024-20399.

The Cyber Centre noted that the compromise could allow unauthorised access to internal network data or serve as a foothold to breach additional targets. Officials also stated that some activity may have been limited to reconnaissance.

While neither agency commented on why the affected devices had not been updated, the prolonged delay in patching such a high-severity flaw highlights ongoing challenges in maintaining basic cyber hygiene.

The authorities in Canada warned that similar espionage operations are likely to continue targeting the telecom sector and associated clients over the next two years.

NCSC issues new guidance for EU cybersecurity rules

The National Cyber Security Centre (NCSC) has published new guidance to assist organisations in meeting the upcoming EU Network and Information Security Directive (NIS2) requirements.

Ireland missed the October 2024 deadline but is expected to adopt the directive soon.

NIS2 broadens the scope of covered sectors and introduces stricter cybersecurity obligations, including heavier fines and legal consequences for non-compliance. The directive aims to improve security across supply chains in both the public and private sectors.

To help businesses comply, the NCSC unveiled Risk Management Measures. It also launched Cyber Fundamentals, a practical framework designed for organisations of varying sizes and risk levels.

Joseph Stephens, NCSC’s Director of Resilience, noted the challenge of applying the directive across so many organisations and welcomed cooperation with Belgium and Romania on an EU-wide solution.

WhatsApp prohibited on US House devices citing data risk

Meta Platforms’ messaging service WhatsApp has been banned from all devices used by the US House of Representatives, according to an internal memo distributed to staff on Monday.

The memo, issued by the Office of the Chief Administrative Officer, stated that the Office of Cybersecurity had classified WhatsApp as a high-risk application.

The assessment cited concerns about the platform’s data protection practices, lack of transparency regarding user data handling, absence of stored data encryption, and associated security risks.

Staff were advised to use alternative messaging platforms deemed more secure, including Microsoft Teams, Amazon’s Wickr, Signal, and Apple’s iMessage and FaceTime.

Meta responded to the decision, stating it ‘strongly disagreed’ with the assessment and maintained that WhatsApp offers stronger security measures than some of the recommended alternatives.

Earlier this year, WhatsApp disclosed that Israeli spyware company Paragon Solutions had targeted numerous users, including journalists and civil society members.

The US House of Representatives has previously restricted other applications due to security concerns. In 2022, it prohibited the use of TikTok on official devices.
