Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Sedgwick breach linked to TridentLocker ransomware attack

Sedgwick has confirmed a data breach at its government-focused subsidiary after the TridentLocker ransomware group claimed responsibility for stealing 3.4 gigabytes of data. The incident underscores growing threats to federal contractors handling sensitive US agency information.

The company said the breach affected only an isolated file transfer system used by Sedgwick Government Solutions, which serves agencies such as DHS, ICE, and CISA. Segmentation reportedly prevented any impact on wider corporate systems or ongoing client operations.

TridentLocker, a ransomware-as-a-service group that appeared in late 2025, listed Sedgwick Government Solutions on its dark web leak site and posted samples of stolen documents. The gang is known for double-extortion tactics, combining data encryption and public exposure threats.

Sedgwick has informed US law enforcement and affected clients while continuing to investigate with external cybersecurity experts. The firm emphasised operational continuity and noted no evidence of intrusion into its claims management servers.

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled AURA, a framework that safeguards proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic but false data.

Rather than relying solely on traditional encryption or watermarking, the approach preserves full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
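The idea can be illustrated with a minimal sketch. This is not AURA's actual algorithm: the node-selection optimisation, the decoy values, and the keyed "antidote" for authorised users below are all simplifications invented for illustration — here we simply corrupt facts attached to the most-connected nodes.

```python
import copy
import random

def inject_adulterants(graph, facts, budget, rng):
    """Corrupt facts attached to the highest-degree nodes.

    Toy stand-in for AURA's node-selection step: target the
    `budget` most-connected nodes, replace their facts with
    plausible-looking decoys, and record an 'antidote' map that
    lets authorised users restore the truth.
    """
    corrupted = copy.deepcopy(facts)
    antidote = {}
    ranked = sorted(graph, key=lambda n: len(graph[n]), reverse=True)
    for node in ranked[:budget]:
        if node in corrupted:
            antidote[node] = corrupted[node]
            corrupted[node] = f"decoy-{rng.randrange(1000)}"
    return corrupted, antidote

def authorised_view(corrupted, antidote):
    """Authorised users apply the antidote to recover true facts."""
    restored = dict(corrupted)
    restored.update(antidote)
    return restored
```

A stolen copy serves the decoys, while an authorised deployment holding the antidote map answers correctly — mirroring the split between unaffected legitimate access and degraded illicit copies described above.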

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of correct answers in stolen data were flipped, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Healthcare systems face mounting risk from CrazyHunter ransomware

CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.

The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.

Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.

Once deployed, CrazyHunter disables security tools to conceal its activity before encrypting files. Analysts note extensive evasion techniques, including memory-based execution and redundant deployment methods that ensure the payload runs.

CrazyHunter employs a hybrid encryption scheme combining ChaCha20 and elliptic curve cryptography, using partial file encryption to accelerate the attack. Encrypted files receive a ‘.Hunter’ extension, and recovery depends on the attackers’ private keys, reinforcing the pressure to pay ransoms.
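For defenders analysing such campaigns, the key mechanism is partial (intermittent) encryption: scrambling only some chunks of each file renders it unusable at a fraction of the I/O cost, which is why these attacks complete so quickly. The sketch below illustrates the concept only — it uses a toy SHA-256 keystream as a stand-in for ChaCha20 (which is not in the Python standard library), and the chunk size and skip pattern are assumptions, not CrazyHunter's actual parameters.

```python
import hashlib
import io

CHUNK = 64 * 1024   # process files in 64 KiB chunks (illustrative value)
ENCRYPT_EVERY = 2   # encrypt every second chunk: "intermittent" encryption

def keystream(key, counter, length):
    """Toy keystream built from SHA-256, standing in for ChaCha20."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")
                              + block.to_bytes(8, "big")).digest()
        block += 1
    return out[:length]

def partial_encrypt(data, key):
    """XOR alternating chunks with the keystream.

    Half the bytes are untouched, yet the file is unusable;
    XOR is symmetric, so applying the function again decrypts.
    """
    out = io.BytesIO()
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        if (i // CHUNK) % ENCRYPT_EVERY == 0:
            ks = keystream(key, i // CHUNK, len(chunk))
            chunk = bytes(a ^ b for a, b in zip(chunk, ks))
        out.write(chunk)
    return out.getvalue()
```

Because only alternating chunks are touched, the scheme halves the read/write volume compared with full-file encryption, which is the speed advantage the campaign exploits.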

New UK cyber strategy focuses on trust in online public services

The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.

Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.

The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.

The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.

Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.

Universal Music Group partners with NVIDIA on AI music strategy

UMG has entered a strategic collaboration with NVIDIA to reshape how billions of fans discover, experience and engage with music by using advanced AI.

The initiative combines NVIDIA’s AI infrastructure with UMG’s extensive global catalogue, aiming to elevate music interaction beyond traditional search and recommendation systems.

The partnership will focus on AI-driven discovery and engagement that interprets music at a deeper cultural and emotional level.

By analysing full-length tracks, the technology is designed to surface music through narrative, mood and context, offering fans richer exploration while helping artists reach audiences more meaningfully.

Artist empowerment sits at the centre of the collaboration, with plans to establish an incubator where musicians and producers help co-design AI tools.

The goal is to enhance originality and creative control instead of producing generic outputs, while ensuring proper attribution and protection of copyrighted works.

Universal Music Group and NVIDIA also emphasise responsible AI development, combining technical safeguards with industry oversight.

By aligning innovation with artist rights and fair compensation, both companies aim to set new standards for how AI supports creativity across the global music ecosystem.

ChatGPT Health offers personalised health support

OpenAI has launched ChatGPT Health, a secure platform linking users’ health information with ChatGPT’s intelligence. The platform supports, rather than replaces, medical care, helping users understand test results, prepare for appointments, and manage their wellness.

ChatGPT Health allows users to safely connect medical records and apps such as Apple Health, Function, and MyFitnessPal. All data is stored in a separate Health space with encryption and enhanced privacy to keep sensitive information secure.

Conversations in Health are not used to train OpenAI’s models.

The platform was developed with input from more than 260 physicians worldwide, ensuring guidance is accurate, clinically relevant, and prioritises safety.

HealthBench, a physician-informed evaluation framework, helps measure quality, clarity, and appropriate escalation in responses, supporting users in making informed decisions about their health.

ChatGPT Health is initially available outside the EEA, Switzerland, and the UK, with wider access expected in the coming weeks. Users can join a waitlist and begin connecting records and wellness apps to receive personalised, context-aware health insights.

Roblox rolls out facial age checks for chat

Online gaming platform Roblox has begun a global rollout requiring facial age checks before users can access chat features, expanding a system first tested in selected regions late last year.

The measure applies wherever chat is available and aims to create age-appropriate communication environments across the platform.

Instead of relying on self-declared ages, Roblox uses facial age estimation to group users and restrict interactions, limiting contact between adults and children under 16. Younger users need parental consent to chat, while verified users aged 13 and over can connect more freely through Trusted Connections.
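The access rules described above can be sketched as a simple policy check. Roblox's real logic is not public, so the thresholds, the consent handling, and the Trusted Connections mechanics below are hypothetical simplifications of the article's description.

```python
def can_chat(age_a, age_b, trusted_pair=False, consents=(True, True)):
    """Toy chat-permission check loosely modelled on the rules above.

    Assumed rules: under-16s need parental consent; contact between
    adults and under-16s is limited to Trusted Connections, which
    require verified users aged 13 or over.
    """
    # Under-16s need parental consent to chat at all
    for age, consent in zip((age_a, age_b), consents):
        if age < 16 and not consent:
            return False
    # Adult/under-16 contact only via a Trusted Connection (13+)
    minor_present = min(age_a, age_b) < 16
    adult_present = max(age_a, age_b) >= 18
    if minor_present and adult_present:
        return trusted_pair and min(age_a, age_b) >= 13
    return True
```

Encoding the policy as a single pure function like this makes each restriction independently testable, which matters when behavioural checks can later downgrade a user's verified status.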

The company says privacy safeguards remain central, with images deleted immediately after secure processing and no image sharing allowed in chat. Appeals, ID verification and parental controls support accuracy, while ongoing behavioural checks may trigger repeat age verification if discrepancies appear.

Roblox plans to extend age checks beyond chat later in 2026, including creator tools and community features, as part of a broader push to strengthen online safety and rebuild trust in youth-focused digital platforms.

Digi Yatra glitch delays identical twins at Mumbai airport

Identical twins were briefly delayed at Mumbai airport after Digi Yatra facial recognition failed to distinguish between them. The incident occurred during automated entry at Chhatrapati Shivaji Maharaj International Airport.

Mumbai airport staff stepped in quickly, carrying out manual identity checks using physical documents. Both passengers were cleared to travel without missing their flight.

Digi Yatra officials stated that such mismatches are rare and can occur in cases of identical twins. Passengers always retain the option of conventional ID-based verification.
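Why identical twins defeat such systems is easy to see in miniature: face matchers compare embedding vectors against a similarity threshold, and twins' embeddings can sit closer together than the threshold allows. The vectors and threshold below are invented for illustration and bear no relation to Digi Yatra's actual pipeline.

```python
import math

MATCH_THRESHOLD = 0.9  # hypothetical similarity cut-off

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings: identical twins yield nearly identical vectors
enrolled = {
    "twin_a": [0.62, 0.31, 0.71],
    "twin_b": [0.63, 0.30, 0.71],
}

def match(probe):
    """Return every enrolled identity whose embedding clears the threshold."""
    return [name for name, emb in enrolled.items()
            if cosine(probe, emb) >= MATCH_THRESHOLD]
```

When `match` returns more than one identity, the system cannot decide on its own — exactly the ambiguity that forced Mumbai airport staff to fall back to manual document checks.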

The episode has renewed debate about biometric reliability and the need for human oversight. Experts stress that technology must support, not replace, human-led passenger checks.
