A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.
The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.
Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.
Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google is expanding shopping features inside its Gemini chatbot through partnerships with Walmart and other retailers. Users will be able to browse and buy products without leaving the chat interface.
An instant checkout function allows purchases through linked accounts and selected payment providers. Walmart customers can receive personalised recommendations based on previous shopping activity.
The move was announced at the latest National Retail Federation convention in New York. Tech groups are racing to turn AI assistants into end-to-end retail tools.
Google said the service will launch first in the US before international expansion. Payments initially rely on Google-linked cards, with PayPal support planned.
AI is accelerating the creation of digital twins by reducing the time and labour required to build complex models. Consulting firm McKinsey says specialised virtual replicas can take six months or more to develop, but generative AI tools can now automate much of the coding process.
McKinsey analysts say AI can structure inputs and synthesise outputs for these simulations, while the models provide safe testing environments for AI systems. Together, the technologies can reduce costs, shorten development cycles, and accelerate deployment.
Quantum Elements, a startup backed by QNDL Participations and the USC Viterbi School of Engineering, is applying this approach to quantum computing. Its Constellation platform combines AI agents, natural language tools, and simulation software.
The company says quantum systems are hard to model because qubits behave differently across hardware types such as superconducting circuits, trapped ions, and photonics. These variations affect stability, error rates, and performance.
By using digital twins, developers can test algorithms, simulate noise, and evaluate error correction without building physical hardware. Quantum Elements says this can cut testing time from months to minutes.
Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.
The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.
Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.
Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.
The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.
Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.
Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injections, creating a more immersive and practical training environment for participants.
This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.
The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.
Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.
UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.
The discussions focus on shared regulatory approaches rather than immediate bans.
X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.
In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.
Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.
X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.
European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.
Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data that could lead patients to falsely believe they were healthy.
Experts have criticised AI Overviews for oversimplifying complex medical topics, ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.
Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.
The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.
India’s Financial Intelligence Unit has tightened crypto compliance, requiring live identity checks, location verification, and stronger Client Due Diligence. The measures aim to prevent money laundering, terrorist financing, and misuse of digital asset services.
Crypto platforms must now collect multiple identifiers from users, including IP addresses, device IDs, wallet addresses, transaction hashes, and timestamps.
Verification also requires users to provide a Permanent Account Number and a secondary ID, such as a passport, Aadhaar, or voter ID, alongside OTP confirmation for email and phone numbers.
Bank accounts must be validated via a penny-drop mechanism to confirm ownership and operational status.
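The identifier set described above can be pictured as a single per-user compliance record. The sketch below is purely illustrative: the field names and the completeness check are assumptions drawn from the identifiers listed in the guidance, not an official FIU schema.

```python
from dataclasses import dataclass, fields

# Illustrative sketch only: field names are assumptions based on the
# identifiers named in the FIU guidance, not an official schema.
@dataclass
class KycRecord:
    pan: str              # Permanent Account Number
    secondary_id: str     # passport, Aadhaar, or voter ID
    ip_address: str
    device_id: str
    wallet_address: str
    tx_hash: str
    timestamp: str
    bank_verified: bool   # outcome of the penny-drop ownership check

def missing_fields(rec: KycRecord) -> list[str]:
    """Return the names of string fields left empty, as a basic completeness check."""
    return [f.name for f in fields(rec)
            if isinstance(getattr(rec, f.name), str) and not getattr(rec, f.name)]
```

A platform-side validator would presumably reject onboarding or flag transactions when `missing_fields` is non-empty or `bank_verified` is false.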
Enhanced due diligence will apply to high-risk transactions and relationships, particularly those involving users from designated high-risk jurisdictions and tax havens. Platforms must monitor red flags and apply extra scrutiny to comply with the new guidelines.
Industry experts have welcomed the updated rules, describing them as a positive step for India’s crypto ecosystem. The measures are viewed as enhancing transparency, protecting users, and aligning the sector with global anti-money laundering standards.
Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.
Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.
Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.
Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.
The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.
The US administration’s new AI action plan frames global development as an AI race with a single winner. Officials argue AI dominance brings economic, military, and geopolitical advantages. Experts say competition is unfolding across multiple domains.
The United States continues to lead in advanced large language and multimodal models, developed by firms such as OpenAI, Google, and Anthropic. American companies also dominate global computing infrastructure, with control over high-end AI chips and data-centre capacity concentrated in US firms.
Chinese companies are narrowing the gap in the practical applications of AI. Models from Alibaba, DeepSeek, and Moonshot AI perform well in tasks such as translation, coding, and customer service. Performance at the cutting edge still lags behind US systems.
Washington’s decision to allow limited exports of Nvidia’s H200 AI chips to China reflects a belief that controlled sales can preserve US leadership. Critics argue the move risks weakening America’s computing advantage. Concerns persist over long-term strategic consequences.
Rather than a decisive victory for either side, analysts foresee an era of asymmetric competition: the United States may dominate advanced AI services, while China is expected to lead in large-scale industrial deployment.