The EU has agreed to open talks with the US on sharing sensitive traveller data. The discussions aim to preserve visa-free travel for European citizens.
The proposal is called ‘Enhanced Border Security Partnership’, and it could allow transfers of biometric data and other sensitive personal information. Legal experts warn that unclear limits may widen access beyond travellers alone.
EU governments have authorised the European Commission to negotiate a shared framework. Member states would later settle details through bilateral agreements with Washington.
Academics and privacy advocates are calling for stronger safeguards and transparency. EU officials insist data protection limits will form part of any final agreement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI is accelerating the creation of digital twins by reducing the time and labour required to build complex models. Consulting firm McKinsey says specialised virtual replicas can take six months or more to develop, but generative AI tools can now automate much of the coding process.
McKinsey analysts say AI can structure inputs and synthesise outputs for these simulations, while the models provide safe testing environments for AI systems. Together, the technologies can reduce costs, shorten development cycles, and accelerate deployment.
Quantum Elements, a startup backed by QNDL Participations and the USC Viterbi School of Engineering, is applying this approach to quantum computing. Its Constellation platform combines AI agents, natural language tools, and simulation software.
The company says quantum systems are hard to model because qubits behave differently across hardware types such as superconducting circuits, trapped ions, and photonics. These variations affect stability, error rates, and performance.
By using digital twins, developers can test algorithms, simulate noise, and evaluate error correction without building physical hardware. Quantum Elements says this can cut testing time from months to minutes.
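The idea of testing against a software model rather than physical hardware can be illustrated with a small, self-contained sketch. The example below is not the Constellation platform; it is a generic toy simulation in Python (NumPy only) that applies a depolarizing-noise channel to a single qubit and estimates how state fidelity degrades over many gates. The two error rates are assumed values chosen purely for illustration.

```python
# Minimal sketch: a "digital twin"-style noise simulation of one qubit with NumPy.
# Illustrative toy model only; error rates below are assumptions, not vendor figures.
import numpy as np

# Pauli matrices and the Hadamard gate
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """Apply a depolarizing channel with error probability p to density matrix rho."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def fidelity_after_gates(p: float, n_gates: int) -> float:
    """State fidelity of |+> after n_gates noisy (identity) gate applications."""
    plus = H @ np.array([[1], [0]], dtype=complex)   # |+> = H|0>
    ideal = plus @ plus.conj().T                     # ideal density matrix |+><+|
    rho = ideal.copy()
    for _ in range(n_gates):
        rho = depolarize(rho, p)                     # each gate adds a little noise
    return float(np.real(np.trace(ideal @ rho)))     # <+| rho |+>

# Compare two hypothetical hardware profiles (assumed per-gate error rates)
for label, p in [("superconducting-like, p=1e-3", 1e-3), ("trapped-ion-like, p=1e-4", 1e-4)]:
    print(f"{label}: fidelity after 1000 gates = {fidelity_after_gates(p, 1000):.4f}")
```

Swapping the error probability is the digital-twin equivalent of changing a hardware assumption: the same code answers a what-if question in seconds that would otherwise require access to a physical device.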
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.
The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.
Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.
Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.
The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.
Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injections, creating a more immersive and practical training environment for participants.
This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.
The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.
Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.
The discussions focus on shared regulatory approaches rather than immediate bans.
X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.
In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology Secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.
Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.
X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.
European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.
Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.
Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.
Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.
The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US administration’s new AI action plan frames global AI development as a race with a single winner. Officials argue AI dominance brings economic, military, and geopolitical advantages. Experts say competition is unfolding across multiple domains.
The United States continues to lead in the development of advanced large language and multimodal models, driven by firms such as OpenAI, Google, and Anthropic. American companies also dominate global computing infrastructure, and control over high-end AI chips and data-centre capacity remains concentrated in US firms.
Chinese companies are narrowing the gap in the practical applications of AI. Models from Alibaba, DeepSeek, and Moonshot AI perform well in tasks such as translation, coding, and customer service. Performance at the cutting edge still lags behind US systems.
Washington’s decision to allow limited exports of Nvidia’s H200 AI chips to China reflects a belief that controlled sales can preserve US leadership. Critics argue the move risks weakening America’s computing advantage. Concerns persist over long-term strategic consequences.
Rather than a decisive victory for either side, analysts foresee an era of asymmetric competition: the United States may dominate advanced AI services, while China is expected to lead in large-scale industrial deployment.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Reports published by cybersecurity researchers indicate that data linked to approximately 17.5 million Instagram accounts has been offered for sale on underground forums.
The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.
A few hours later, Instagram responded by stating that no breach of its internal systems had occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.
The platform said affected accounts remained secure, with no unauthorised access recorded.
Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.
Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
SoftBank Group and OpenAI announced a strategic partnership with SB Energy, involving a combined investment of $1 billion to support the development of large-scale AI data centres and energy infrastructure in the US.
The agreement forms part of the broader Stargate initiative, which aims to expand domestic AI computing capacity.
As part of the arrangement, OpenAI signed a lease for a 1.2 gigawatt data centre project in Milam County, Texas, with SB Energy selected to develop and operate the facility.
The partners stated that the project is designed to support the rising demand for AI computing while minimising water usage and enhancing local energy supply.
SB Energy also secured an additional $800 million in redeemable preferred equity from Ares, strengthening its financial position for further expansion.
The companies stated that the collaboration is expected to generate construction employment, long-term operational roles and investment in grid modernisation, while establishing a scalable model for future AI-focused data centre developments.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta has announced a series of agreements to support nuclear energy projects in the US, aiming to secure up to 6.6 gigawatts of clean and reliable electricity for data centres and AI infrastructure by 2035. The company said the move supports grid stability while reinforcing domestic energy capacity.
The agreements include support for existing nuclear facilities operated by Vistra in Ohio and Pennsylvania, as well as commitments to advanced reactor developers TerraPower and Oklo.
Meta stated that the arrangements are intended to extend the operational life of current plants while accelerating the deployment of next-generation nuclear technologies.
According to Meta, the projects are expected to generate thousands of construction roles and hundreds of long-term operational jobs, while contributing power to regional electricity grids.
The company added that energy costs associated with its data centres are fully covered through corporate agreements, instead of being passed on to US consumers.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!