Patients notified months after Canopy Healthcare cyber incident

Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.

The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.

Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.

Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.

The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India mandates live identity checks for crypto users

India’s Financial Intelligence Unit has tightened crypto compliance, requiring live identity checks, location verification, and stronger Client Due Diligence. The measures aim to prevent money laundering, terrorist financing, and misuse of digital asset services.

Crypto platforms must now collect multiple identifiers from users, including IP addresses, device IDs, wallet addresses, transaction hashes, and timestamps.

Verification also requires users to provide a Permanent Account Number (PAN) and a secondary ID, such as a passport, Aadhaar, or voter ID, alongside one-time password (OTP) confirmation of email addresses and phone numbers.

Bank accounts must be validated via a penny-drop mechanism, in which a token deposit of a nominal amount confirms that the account is operational and owned by the user.
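To illustrate how a penny-drop check typically works, the sketch below credits a token amount and compares the beneficiary name returned by the bank against the name on file. This is an illustrative outline only: the `bank_api` client and its `token_credit` call are hypothetical placeholders, not any real banking API, and the fuzzy-match threshold is an arbitrary choice.

```python
from difflib import SequenceMatcher


def names_match(kyc_name: str, bank_name: str, threshold: float = 0.8) -> bool:
    """Fuzzy-compare the KYC name with the beneficiary name the bank returns."""
    a, b = kyc_name.strip().lower(), bank_name.strip().lower()
    return SequenceMatcher(None, a, b).ratio() >= threshold


def penny_drop_verify(account_no: str, ifsc: str, kyc_name: str, bank_api) -> bool:
    """Credit a nominal amount (e.g. Re 1) to the account.

    A successful credit shows the account exists and is operational;
    the beneficiary name the bank returns is then checked against the
    KYC name to confirm ownership.
    """
    # `bank_api.token_credit` is a hypothetical placeholder call.
    result = bank_api.token_credit(account=account_no, ifsc=ifsc, amount=1.00)
    if not result.get("success"):
        return False  # account closed, frozen, or details invalid
    return names_match(kyc_name, result.get("beneficiary_name", ""))
```

A production system would add retries, audit logging, and the record-keeping the guidelines require; the name comparison here is deliberately simplified.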

Enhanced due diligence will apply to high-risk transactions and relationships, particularly those involving users from designated high-risk jurisdictions and tax havens. Platforms must monitor red flags and apply extra scrutiny to comply with the new guidelines.

Industry experts have welcomed the updated rules, describing them as a positive step for India’s crypto ecosystem. The measures are viewed as enhancing transparency, protecting users, and aligning the sector with global anti-money laundering standards.

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Instagram responds to claims of user data exposure

Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.

The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.

Within hours, Instagram responded, stating that no breach of its internal systems had occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.

The platform said affected accounts remained secure, with no unauthorised access recorded.

Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.

Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.

Tether and UN join to boost digital security in Africa

Tether has partnered with the UN Office on Drugs and Crime (UNODC) to enhance cybersecurity and digital asset education across Africa. The collaboration aims to reduce vulnerabilities to cybercrime and safeguard communities against online scams and fraud.

Africa, emerging as the third-fastest-growing crypto region, faces increasing threats from digital asset fraud. A recent Interpol operation uncovered $260 million in illicit crypto and fiat across Africa, highlighting the urgent need for stronger digital security.

The partnership includes several key initiatives. In Senegal, youth will participate in a multi-phase cybersecurity education programme featuring boot camps, mentorship, and micro-grants to support innovative projects.

Civil society organisations will receive funding to support human trafficking victims in Nigeria, the DRC, Malawi, Ethiopia, and Uganda. Further afield, universities in Papua New Guinea will host competitions to promote financial inclusion and prevent digital asset fraud using blockchain solutions.

Tether and UNODC aim to create secure digital ecosystems, boost economic opportunities, and equip communities to prevent organised crime. Coordinated action across sectors is considered vital to creating safer and more inclusive environments for vulnerable populations.

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order, issued under the Digital Services Act, follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Crypto crime report 2025 reveals record nation-state activity

Illicit crypto activity surged in 2025 as nation states and professional criminal networks expanded on-chain operations. Government-linked actors used infrastructure built for organised cybercrime, increasing risks for regulators and security teams.

Data shows that illicit crypto addresses received at least $154 billion during the year, representing a 162% increase compared to 2024. Sanctioned entities drove much of the growth, with stablecoins making up 84% of illicit transactions due to their liquidity and ease of cross-border transfer.

North Korea remained the most aggressive state actor, with hackers stealing around $2 billion, including the record-breaking Bybit breach. Russia’s ruble-backed A7A5 token saw over $93 billion in sanction-evasion transactions, while Iran-linked networks continued using crypto for illicit trade and financing.

Chinese money laundering networks also emerged as a central force, offering full-service criminal infrastructure to fraud groups, hackers, and sanctioned entities. Links between crypto and physical crime grew, with trafficking and coercion increasingly tied to digital asset transfers.

Lynx ransomware group claims Regis subsidiary on dark web leak site

Regis Resources, one of Australia’s largest unhedged gold producers, has confirmed it is investigating a cyber incident after its subsidiary was named on a dark web leak site operated by a ransomware group.

The Lynx ransomware group listed McPhillamys Gold on Monday, claiming a cyberattack and publishing the names and roles of senior company executives. The group did not provide technical details or evidence of data theft.

The Australia-based company stated that the intrusion was detected in mid-November 2025 through its routine monitoring systems, prompting temporary restrictions on access to protect internal networks. The company said its cybersecurity controls were designed to isolate threats and maintain business continuity.

A forensic investigation found no evidence of data exfiltration and confirmed that no ransom demand had been received. Authorities were notified, and Regis said the incident had no operational or commercial impact.

Lynx, which first emerged in July 2024, has claimed hundreds of victims worldwide. The group says it avoids targeting critical public services, though it continues to pressure private companies through data leak threats.

Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.
