Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, and three in five US adults report having used it for health questions in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US healthcare professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social Security moves to digital payments

The US Social Security Administration has ended the routine issuance of paper benefit cheques in favour of electronic payments after a 30 September federal deadline. Electronic methods such as direct deposit or prepaid cards are now standard for most beneficiaries.

US officials say the shift speeds up payment delivery and strengthens security since electronic payments are less likely to be lost or stolen than mailed cheques. The move also aims to help reduce federal costs and fraud risks.

A small number of recipients can still receive paper cheques if they qualify for an exemption by showing they lack access to banking services or digital payment systems. Recipients must contact the Treasury to request a waiver.

SSA urges beneficiaries to set up or confirm direct deposit details through their online account or use a prepaid card to avoid delays. Recipients without bank accounts are encouraged to enrol for secure electronic options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-designed sensors open new paths for early cancer detection

MIT and Microsoft researchers have developed AI-designed molecular sensors to detect cancer in its earliest stages. By coating nanoparticles with peptides targeted by cancer-linked enzymes, the sensors produce signals detectable through simple urine tests, potentially even at home.

The AI system, named CleaveNet, generates peptide sequences that are efficiently and selectively cleaved by specific proteases, enzymes overactive in cancer cells. The approach enables faster, more precise detection and can help identify a tumour’s type and location.

Trained on more than 20,000 peptide-protease interactions, CleaveNet has designed novel peptides for enzymes such as MMP13, which cancer cells use to metastasise. The system may cut the number of peptides needed for diagnostics and reveal key biological pathways.
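
CleaveNet's internals are not reproduced here, but the selection objective can be illustrated with a toy calculation: favour peptide sequences that a target protease cleaves strongly while off-target proteases cleave them weakly. In the hypothetical sketch below, the peptide sequences and efficiency values are invented for illustration; only MMP13's role as a target comes from the research.

```python
# Toy illustration of peptide selectivity scoring; all values are invented.
import numpy as np

peptides = ["PLGLAG", "GPQGIA", "RSSSRV"]   # hypothetical candidate substrates
proteases = ["MMP13", "MMP9", "MMP2"]
# Rows: peptides; columns: predicted cleavage efficiency per protease.
eff = np.array([[0.9, 0.7, 0.6],
                [0.8, 0.2, 0.1],
                [0.1, 0.1, 0.9]])

target = proteases.index("MMP13")
# Selectivity: efficiency against the target minus the strongest off-target hit.
off_target = np.delete(eff, target, axis=1).max(axis=1)
selectivity = eff[:, target] - off_target
print("Most MMP13-selective candidate:", peptides[int(np.argmax(selectivity))])
```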

Researchers plan an at-home kit to detect 30 cancers, with peptides also usable for targeted therapies. The work is part of an ARPA-H-funded initiative and highlights the potential of AI to accelerate early cancer detection and treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sedgwick breach linked to TridentLocker ransomware attack

Sedgwick has confirmed a data breach at its government-focused subsidiary after the TridentLocker ransomware group claimed responsibility for stealing 3.4 gigabytes of data. The incident underscores growing threats to federal contractors handling sensitive US agency information.

The company said the breach affected only an isolated file transfer system used by Sedgwick Government Solutions, which serves agencies such as DHS, ICE, and CISA. Segmentation reportedly prevented any impact on wider corporate systems or ongoing client operations.

TridentLocker, a ransomware-as-a-service group that appeared in late 2025, listed Sedgwick Government Solutions on its dark web leak site and posted samples of stolen documents. The gang is known for double-extortion tactics, combining data encryption and public exposure threats.

Sedgwick has informed US law enforcement and affected clients while continuing to investigate with external cybersecurity experts. The firm emphasised operational continuity and noted no evidence of intrusion into its claims management servers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes for open-source commercialisation to reduce tech dependence

The European Commission is preparing a strategy to commercialise European open-source software in an effort to strengthen digital sovereignty and reduce dependence on foreign technology providers.

The plan follows a consultation highlighting that EU funding has delivered innovation, although commercial scale has often emerged outside Europe instead of within it.

Open-source software plays a strategic role by decentralising development and limiting reliance on dominant technology firms.

Commission officials argue that research funding alone cannot deliver competitive alternatives, particularly when public and private contracts continue to favour proprietary systems operated by non-European companies.

The strategy, due alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment and market integration.

Governance reforms and stronger supply chain security are expected to address vulnerabilities that can affect widely used open-source components.

Financial sustainability will also feature prominently, with public sector partnerships encouraged to support long-term viability.

Brussels hopes wider public adoption of open-source tools will replace expensive or data-extractive proprietary software, reinforcing Europe’s technological autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA that aims to safeguard proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Rather than relying solely on traditional encryption or watermarking, the approach preserves full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
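
A minimal sketch of that idea, assuming the knowledge graph is a networkx graph: the centrality heuristic and decoy facts below are illustrative stand-ins, not AURA's published algorithm, and the mechanism by which authorised users retain clean access is not modelled.

```python
# Illustrative adulteration sketch; not AURA's actual node-selection algorithm.
import networkx as nx

def adulterate(graph: nx.Graph, budget: int, decoys: dict) -> nx.Graph:
    """Return a copy with false facts injected at structurally critical nodes."""
    corrupted = graph.copy()
    # Rank nodes by betweenness centrality as a stand-in for 'critical nodes',
    # so a small edit budget disrupts many downstream query paths.
    ranked = sorted(nx.betweenness_centrality(graph).items(),
                    key=lambda kv: kv[1], reverse=True)
    for node, _score in ranked[:budget]:
        if node in decoys:
            # Swap the node's factual attribute for a plausible decoy.
            corrupted.nodes[node]["fact"] = decoys[node]
    return corrupted

# Toy usage: corrupt one hub node in a miniature knowledge graph.
g = nx.Graph()
g.add_node("aspirin", fact="inhibits COX enzymes")
g.add_node("indication", fact="secondary stroke prevention")
g.add_node("dose", fact="75 mg daily")
g.add_edges_from([("aspirin", "indication"), ("aspirin", "dose")])
stolen = adulterate(g, budget=1, decoys={"aspirin": "inhibits DNA gyrase"})
print(stolen.nodes["aspirin"]["fact"])  # prints the decoy, not the real fact
```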

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of correct answers in stolen data were flipped, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare systems face mounting risk from CrazyHunter ransomware

CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.

The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.

Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.

Once deployed, CrazyHunter disables security tools to conceal its activity before encrypting files. Analysts note the use of extensive evasion techniques, including memory-based execution and redundant encryption methods, to ensure the payload is delivered.

CrazyHunter employs a hybrid encryption scheme that combines ChaCha20 and elliptic curve cryptography, using partial file encryption to speed up the attack. Encrypted files receive a ‘.Hunter’ extension, and recovery depends on the attackers’ private keys, reinforcing the pressure to pay ransoms.
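
CrazyHunter's implementation has not been published, but the dependence on the attackers' private keys follows from the standard hybrid pattern analysts describe: each file gets a fresh ChaCha20 key derived from an elliptic-curve exchange, so only the matching private key can rederive it. Below is a generic ECIES-style sketch of that pattern using Python's cryptography library, not the malware's code:

```python
# Generic hybrid-encryption sketch: a per-file ChaCha20 key derived from an
# X25519 exchange can only be rederived with the key holder's private key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared: bytes) -> bytes:
    # HKDF turns the raw exchange output into a 256-bit ChaCha20 key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"file-key").derive(shared)

def hybrid_encrypt(plaintext: bytes, recipient_pub):
    eph_priv = X25519PrivateKey.generate()          # fresh key per file
    key = derive_key(eph_priv.exchange(recipient_pub))
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
    # Only the ephemeral PUBLIC key travels with the ciphertext.
    return eph_priv.public_key(), nonce, ciphertext

holder_priv = X25519PrivateKey.generate()   # held offline by the key owner
eph_pub, nonce, ct = hybrid_encrypt(b"example data", holder_priv.public_key())
# Recovery requires holder_priv, which rederives the same symmetric key.
key = derive_key(holder_priv.exchange(eph_pub))
print(ChaCha20Poly1305(key).decrypt(nonce, ct, None))
```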

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Liberty Financial files to launch national trust bank for USD1

World Liberty Financial’s WLTC Holdings LLC has applied to the Office of the Comptroller of the Currency to establish World Liberty Trust Company, National Association (WLTC), a national trust bank designed for stablecoin operations.

The move aims to centralise issuance, custody, and conversion of USD1, the company’s dollar-backed stablecoin. USD1 has grown rapidly, reaching over $3.3 billion in circulation during its first year.

The trust company will serve institutional clients, providing stablecoin conversion and secure custody for USD1 and other supported stablecoins.

WLTC will operate under federal supervision, offering fee-free USD1 issuance and redemption, custody, and USD conversion at market rates. Operations will comply with the GENIUS Act and follow strict AML, sanctions, and cybersecurity protocols.

The stablecoin is fully backed by US dollars and short-duration Treasury obligations, operating across ten blockchain networks, including Ethereum, Solana, and TRON.
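
Details of WLTC's systems are not public. As a small illustration of what operating on Ethereum typically implies, the sketch below reads a token balance through the standard ERC-20 interface; the RPC endpoint, contract address and wallet address are placeholders, and the assumption that USD1 exposes standard ERC-20 calls is ours.

```python
# Hypothetical ERC-20 balance query; the endpoint and addresses are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.invalid"))  # placeholder RPC
USD1_ADDRESS = "0x0000000000000000000000000000000000000000"      # placeholder contract

# Minimal ERC-20 ABI fragment: only the two view functions we call.
ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

token = w3.eth.contract(address=USD1_ADDRESS, abi=ERC20_ABI)
holder = "0x0000000000000000000000000000000000000001"            # placeholder wallet
raw = token.functions.balanceOf(holder).call()
print(raw / 10 ** token.functions.decimals().call())
```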

By combining regulatory oversight with full-stack stablecoin services, WLTC seeks to provide institutional clients with clarity and efficiency in digital asset operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI assistant and cheaper autonomy headline Ford’s CES 2026 announcements

Ford has unveiled plans for an AI assistant that will launch in its smartphone app in early 2026 before expanding to in-vehicle systems in 2027. The announcement was made at the 2026 Consumer Electronics Show, alongside a preview of a next-generation BlueCruise driver assistance system.

The AI assistant will be hosted on Google Cloud and built using existing large language models, with access to vehicle-specific data. Ford said this will allow users to ask both general questions, such as vehicle capacity, and real-time queries, including oil life and maintenance status.
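
Ford has not published the assistant's architecture. A common way to combine a general-purpose language model with live vehicle data is tool calling; the sketch below uses a hypothetical vehicle-data function and naive keyword routing in place of a real model, purely to keep the example self-contained.

```python
# Hypothetical tool-calling sketch; function names and telemetry are invented.
def get_oil_life(vin: str) -> dict:
    # A real system would query the vehicle's telematics backend here.
    return {"vin": vin, "oil_life_pct": 62, "next_service_miles": 4800}

def answer(query: str, vin: str) -> str:
    # A production assistant would let the LLM choose which tool to invoke;
    # naive keyword routing stands in for that decision here.
    if "oil" in query.lower():
        data = get_oil_life(vin)
        return (f"Oil life is at {data['oil_life_pct']}%; "
                f"next service in about {data['next_service_miles']} miles.")
    return "General questions would go straight to the language model."

print(answer("How is my oil life?", vin="1FAHP0000EXAMPLE0"))
```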

Ford plans to introduce the assistant first through its redesigned mobile app, with native integration into vehicles scheduled for 2027. The company has not yet specified which models will receive the in-car version first, but said the rollout would expand gradually across its lineup.

Alongside the AI assistant, the vehicle manufacturer previewed an updated version of its BlueCruise system, which it claims will be more affordable to produce and more capable. The new system is expected to debut in 2027 on the first electric vehicle built on Ford’s low-cost Universal Electric Vehicle platform.

Ford said the next-generation BlueCruise could support eyes-off driving by 2028 and enable point-to-point autonomous driving under driver supervision. As with similar systems from other automakers, drivers will still be required to remain ready to take control at any time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!