Patients notified months after Canopy Healthcare cyber incident

Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.

The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.

Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.

Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.

The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.

Cyber Fortress strengthens European cyber resilience

Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.

Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injects (simulated news coverage), creating a more immersive and practical training environment for participants.

This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.

The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.

Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.

UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology Secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the chatbot.

Google removes AI health summaries after safety concerns

Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data, potentially leading patients to falsely believe they were healthy.

Experts have criticised AI Overviews for oversimplifying complex medical topics and ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.

Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.

The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.

India mandates live identity checks for crypto users

India’s Financial Intelligence Unit has tightened crypto compliance, requiring live identity checks, location verification, and stronger Client Due Diligence. The measures aim to prevent money laundering, terrorist financing, and misuse of digital asset services.

Crypto platforms must now collect multiple identifiers from users, including IP addresses, device IDs, wallet addresses, transaction hashes, and timestamps.

Verification also requires users to provide a Permanent Account Number (PAN) and a secondary ID, such as a passport, Aadhaar, or voter ID, alongside one-time password (OTP) confirmation for email and phone numbers.

Bank accounts must be validated via a penny-drop mechanism, in which a small test deposit confirms ownership and operational status.
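
As a purely illustrative sketch, the record-keeping side of these requirements might be modelled as follows. All field names, placeholder values, and the compliance check are assumptions made for clarity; none are drawn from the FIU's actual guidance.

```python
# Hypothetical sketch: models the identifiers and verification steps described
# above. Field names and checks are illustrative assumptions, not taken from
# the FIU-India guidance itself.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class KycProfile:
    pan: str                   # Permanent Account Number
    secondary_id: str          # e.g. passport, Aadhaar, or voter ID reference
    email_otp_verified: bool   # OTP confirmation for the registered email
    phone_otp_verified: bool   # OTP confirmation for the registered phone
    penny_drop_verified: bool  # bank account confirmed via penny-drop check

@dataclass
class TransactionRecord:
    ip_address: str            # per-transaction identifiers platforms collect
    device_id: str
    wallet_address: str
    tx_hash: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def kyc_complete(profile: KycProfile) -> bool:
    """True only if every verification step described above has passed."""
    return all([
        bool(profile.pan),
        bool(profile.secondary_id),
        profile.email_otp_verified,
        profile.phone_otp_verified,
        profile.penny_drop_verified,
    ])

# Example: a fully verified user passes the check.
user = KycProfile(
    pan="ABCDE1234F",          # placeholder value, not a real PAN
    secondary_id="passport:X1234567",
    email_otp_verified=True,
    phone_otp_verified=True,
    penny_drop_verified=True,
)
print(kyc_complete(user))      # True
```

In practice, the exact fields, formats, and retention rules would follow whatever the FIU specifies in its guidelines.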

Enhanced due diligence will apply to high-risk transactions and relationships, particularly those involving users from designated high-risk jurisdictions and tax havens. Platforms must monitor red flags and apply extra scrutiny to comply with the new guidelines.

Industry experts have welcomed the updated rules, describing them as a positive step for India’s crypto ecosystem. The measures are viewed as enhancing transparency, protecting users, and aligning the sector with global anti-money laundering standards.

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Instagram responds to claims of user data exposure

Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.

The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.

Instagram responded within hours, stating that no breach of its internal systems had occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.

The platform said affected accounts remained secure, with no unauthorised access recorded.

Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.

Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.

Tether and UN join to boost digital security in Africa

Tether has partnered with the UN Office on Drugs and Crime (UNODC) to enhance cybersecurity and digital asset education across Africa. The collaboration aims to reduce vulnerabilities to cybercrime and safeguard communities against online scams and fraud.

Africa, emerging as the third-fastest-growing crypto region, faces increasing threats from digital asset fraud. A recent Interpol operation uncovered $260 million in illicit crypto and fiat across Africa, highlighting the urgent need for stronger digital security.

The partnership includes several key initiatives. In Senegal, youth will participate in a multi-phase cybersecurity education programme featuring boot camps, mentorship, and micro-grants to support innovative projects.

Civil society organisations across Africa will receive funding to support human trafficking victims in Nigeria, DRC, Malawi, Ethiopia, and Uganda. In Papua New Guinea, universities will host competitions to promote financial inclusion and prevent digital asset fraud using blockchain solutions.

Tether and UNODC aim to create secure digital ecosystems, boost economic opportunities, and equip communities to prevent organised crime. Coordinated action across sectors is considered vital to creating safer and more inclusive environments for vulnerable populations.

UK outlines approval process for crypto firms

The UK’s Financial Conduct Authority has confirmed that all regulated crypto firms must obtain authorisation under the Financial Services and Markets Act. Both new market entrants and existing operators will be required to comply.

No automatic transition will be available for firms currently registered under anti-money laundering rules. Companies already authorised for other financial services must apply to extend permissions to cover crypto activities and ensure compliance with upcoming regulations.

Pre-application meetings and information sessions will be offered to help firms understand regulatory expectations and enhance the quality of their applications.

An official application window is expected to open in September 2026 and remain active for at least 28 days. Applications submitted during that period are intended to be assessed before the regime formally begins, with further procedural details to be confirmed by the FCA.

Wegmans faces backlash over facial recognition in US stores

Supermarket chain Wegmans Food Markets is facing scrutiny over its use of facial recognition technology. The issue emerged after New York City stores displayed signs warning that biometric data could be collected for security purposes.

New York law requires businesses to disclose biometric data collection, but the wording of the notices alarmed privacy advocates. Wegmans later said it only uses facial recognition, not voice or eye scans, and only in a small number of higher-risk stores.

According to the US company, the system identifies individuals who have been previously flagged for misconduct, such as theft or threatening behaviour. Wegmans says facial recognition is just one investigative tool and that all actions are subject to human review.

Critics argue the signage suggests broader surveillance than the company admits. Wegmans has not explained why the notices mention eyes and voice if that data is not collected, or when the wording might be revised.

Lawmakers in Connecticut have now proposed a ban on retail facial recognition. Supporters say grocery shopping is essential and that biometric monitoring weakens meaningful customer consent.
