The EU has agreed to open talks with the US on sharing sensitive traveller data. The discussions aim to preserve visa-free travel for European citizens.
The proposal is called the ‘Enhanced Border Security Partnership’, and it could allow transfers of biometric data and other sensitive personal information. Legal experts warn that unclear limits may widen access beyond travellers alone.
EU governments have authorised the European Commission to negotiate a shared framework. Member states would later settle details through bilateral agreements with Washington.
Academics and privacy advocates are calling for stronger safeguards and transparency. EU officials insist data protection limits will form part of any final agreement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.
The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.
Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.
Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.
Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.
The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.
Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.
Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.
The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.
Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.
Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by simulated media injects, creating a more immersive and practical training environment for participants.
This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.
The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.
Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.
UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.
The discussions focus on shared regulatory approaches rather than immediate bans.
X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.
In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.
Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.
X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.
European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.
Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data, potentially leading patients to falsely believe they were healthy.
Experts have criticised AI Overviews for oversimplifying complex medical topics and ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.
Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.
The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.
India’s Financial Intelligence Unit has tightened crypto compliance, requiring live identity checks, location verification, and stronger Client Due Diligence. The measures aim to prevent money laundering, terrorist financing, and misuse of digital asset services.
Crypto platforms must now collect multiple identifiers from users, including IP addresses, device IDs, wallet addresses, transaction hashes, and timestamps.
Verification also requires users to provide a Permanent Account Number and a secondary ID, such as a passport, Aadhaar, or voter ID, alongside OTP confirmation for email and phone numbers.
Bank accounts must be validated via a penny-drop mechanism to confirm ownership and operational status.
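The onboarding requirements above can be read as a checklist a platform runs before activating an account. Below is a minimal, hypothetical Python sketch of such a check; the `KycRecord` fields, the `REQUIRED_IDENTIFIERS` set, and the helper function are illustrative assumptions for this article, not terms from the FIU guidelines themselves.

```python
from dataclasses import dataclass, field

# Identifiers the article says platforms must now collect per user.
REQUIRED_IDENTIFIERS = {
    "ip_address", "device_id", "wallet_address",
    "transaction_hash", "timestamp",
}

@dataclass
class KycRecord:
    pan: str                 # Permanent Account Number
    secondary_id: str        # e.g. passport, Aadhaar, or voter ID
    email_otp_verified: bool
    phone_otp_verified: bool
    bank_verified: bool      # result of a penny-drop ownership check
    identifiers: dict = field(default_factory=dict)

def passes_basic_checks(record: KycRecord) -> bool:
    """Return True only if every item on the (illustrative)
    onboarding checklist is satisfied."""
    if not (record.pan and record.secondary_id):
        return False
    if not (record.email_otp_verified and record.phone_otp_verified):
        return False
    if not record.bank_verified:
        return False
    # All required identifiers must be present.
    return REQUIRED_IDENTIFIERS <= record.identifiers.keys()
```

A platform would call `passes_basic_checks` after collecting documents and completing the penny-drop validation; a `False` result would block activation pending further due diligence.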
Enhanced due diligence will apply to high-risk transactions and relationships, particularly those involving users from designated high-risk jurisdictions and tax havens. Platforms must monitor red flags and apply extra scrutiny to comply with the new guidelines.
Industry experts have welcomed the updated rules, describing them as a positive step for India’s crypto ecosystem. The measures are viewed as enhancing transparency, protecting users, and aligning the sector with global anti-money laundering standards.
Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.
Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.
Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.
Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.
The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.
Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.
The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.
A few hours later, Instagram responded by stating that no breach of internal systems occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.
The platform said affected accounts remained secure, with no unauthorised access recorded.
Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.
Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.
Supermarket chain Wegmans Food Markets is facing scrutiny over its use of facial recognition technology. The issue emerged after New York City stores displayed signs warning that biometric data could be collected for security purposes.
New York law requires businesses to disclose biometric data collection, but the wording of the notices alarmed privacy advocates. Wegmans later said it only uses facial recognition, not voice or eye scans, and only in a small number of higher-risk stores.
According to the US company, the system identifies individuals who have been previously flagged for misconduct, such as theft or threatening behaviour. Wegmans says facial recognition is just one investigative tool and that all actions are subject to human review.
Critics argue the signage suggests broader surveillance than the company admits. Wegmans has not explained why the notices mention eyes and voice if that data is not collected, or when the wording might be revised.
Lawmakers in Connecticut have now proposed a ban on retail facial recognition. Supporters say grocery shopping is essential and that biometric monitoring weakens meaningful customer consent.