ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, and three in five US adults report having turned to it for health information in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice allegedly caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sedgwick breach linked to TridentLocker ransomware attack

Sedgwick has confirmed a data breach at its government-focused subsidiary after the TridentLocker ransomware group claimed responsibility for stealing 3.4 gigabytes of data. The incident underscores growing threats to federal contractors handling sensitive US agency information.

The company said the breach affected only an isolated file transfer system used by Sedgwick Government Solutions, which serves agencies such as DHS, ICE, and CISA. Segmentation reportedly prevented any impact on wider corporate systems or ongoing client operations.

TridentLocker, a ransomware-as-a-service group that appeared in late 2025, listed Sedgwick Government Solutions on its dark web leak site and posted samples of stolen documents. The gang is known for double-extortion tactics, combining data encryption and public exposure threats.

Sedgwick has informed US law enforcement and affected clients while continuing to investigate with external cybersecurity experts. The firm emphasised operational continuity and noted no evidence of intrusion into its claims management servers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA that safeguards proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Instead of relying solely on traditional encryption or watermarking, the approach is designed to preserve full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
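The researchers’ selection algorithm has not been published in full, so the following is a minimal sketch of the general idea in Python, assuming a NetworkX graph and using betweenness centrality as a stand-in for AURA’s critical-node scoring. The function name, the budget parameter, and the edge-rewiring strategy are all illustrative, not AURA’s actual method.

```python
import random

import networkx as nx


def inject_adulterants(kg: nx.DiGraph, budget: int, seed: int = 0) -> nx.DiGraph:
    """Return a corrupted copy of a knowledge graph in which facts attached
    to the most influential nodes are rewired to plausible but false targets,
    while the rest of the graph is left untouched."""
    rng = random.Random(seed)
    poisoned = kg.copy()
    # Approximate 'critical nodes' with betweenness centrality (an assumption;
    # the paper's scoring is more sophisticated).
    scores = nx.betweenness_centrality(poisoned)
    targets = sorted(scores, key=scores.get, reverse=True)[:budget]
    entities = list(poisoned.nodes)
    for head in targets:
        for _, tail, attrs in list(poisoned.out_edges(head, data=True)):
            decoy = rng.choice(entities)
            if decoy in (head, tail):
                continue
            # Keep the relation label but point it at the wrong entity, so the
            # corrupted fact still looks realistic to an unauthorised user.
            poisoned.remove_edge(head, tail)
            poisoned.add_edge(head, decoy, **attrs)
    return poisoned
```

An authorised deployment would keep answering queries against the pristine graph; only exfiltrated copies would carry the rewired edges.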

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of previously correct answers drawn from stolen copies were flipped to incorrect ones, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare systems face mounting risk from CrazyHunter ransomware

CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.

The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.

Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.

Once deployed, CrazyHunter disables security tools to conceal its activity before encrypting files. Analysts note the use of extensive evasion techniques, including memory-based execution and redundant encryption routines that ensure the payload runs even if one method is blocked.

CrazyHunter employs a hybrid encryption scheme that combines ChaCha20 with elliptic curve cryptography, encrypting only part of each file to speed up the attack. Encrypted files receive a ‘.Hunter’ extension, and recovery depends on the attackers’ private keys, reinforcing the pressure to pay ransoms.
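The appended extension is the most visible on-disk artefact of an attack, so a filesystem sweep for it is a reasonable first triage step. Below is a minimal Python sketch; the scan root is illustrative, and this is no substitute for proper endpoint detection.

```python
from pathlib import Path


def find_hunter_files(root: str) -> list[Path]:
    """Walk a directory tree and collect files carrying the '.Hunter'
    extension that CrazyHunter appends to encrypted files."""
    hits = []
    for path in Path(root).rglob("*"):
        # Compare case-insensitively, since extension casing can vary
        # between reported samples.
        if path.is_file() and path.suffix.lower() == ".hunter":
            hits.append(path)
    return hits


if __name__ == "__main__":
    # Illustrative path; point this at file shares or backup volumes.
    for hit in find_hunter_files("/srv/shares"):
        print(hit)
```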

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New UK cyber strategy focuses on trust in online public services

The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.

Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.

The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.

The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.

Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Roblox rolls out facial age checks for chat

The online gaming platform Roblox has begun a global rollout requiring facial age checks before users can access chat features, expanding a system first tested in selected regions late last year.

The measure applies wherever chat is available and aims to create age-appropriate communication environments across the platform.

Instead of relying on self-declared ages, Roblox uses facial age estimation to group users and restrict interactions, limiting contact between adults and children under 16. Younger users need parental consent to chat, while verified users aged 13 and over can connect more freely through Trusted Connections.
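Roblox has not published its policy engine, but the rules described above can be sketched as a simple gate. The age bands, flag names and logic below are hypothetical and merely encode what the announcement describes.

```python
from enum import Enum


class AgeBand(Enum):
    """Coarse bands a facial age-estimation check might assign."""
    UNDER_13 = "under_13"
    TEEN = "13_to_15"
    ADULT = "16_plus"


def may_chat(a: AgeBand, b: AgeBand, *,
             parental_consent: bool = False,
             trusted_connection: bool = False) -> bool:
    """Hypothetical encoding of the announced rules: adults and under-16s
    are kept apart, younger users need parental consent, and Trusted
    Connections relax limits for verified users aged 13 and over."""
    # Under-13 users need parental consent before chatting at all.
    if AgeBand.UNDER_13 in (a, b) and not parental_consent:
        return False
    # Adults never chat with under-13 users, consent or not.
    if {a, b} == {AgeBand.ADULT, AgeBand.UNDER_13}:
        return False
    # Adult-to-teen contact requires a mutual Trusted Connection.
    if {a, b} == {AgeBand.ADULT, AgeBand.TEEN} and not trusted_connection:
        return False
    return True
```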

The company says privacy safeguards remain central, with images deleted immediately after secure processing and no image sharing allowed in chat. Appeals, ID verification and parental controls support accuracy, while ongoing behavioural checks may trigger repeat age verification if discrepancies appear.

Roblox plans to extend age checks beyond chat later in 2026, including creator tools and community features, as part of a broader push to strengthen online safety and rebuild trust in youth-focused digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung puts AI trust and security at the centre of CES 2026

South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.

During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably and users retain clear control, rather than feeling locked inside opaque technologies.

Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.

Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.

Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.

Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.

Consumer behaviour formed a final point of discussion. Futurist Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.

The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California launches DROP tool to erase data broker records

Residents in California now have a simpler way to force data brokers to delete their personal information.

The state has launched the Delete Requests and Opt-Out Platform, known as DROP, allowing residents to submit one verified deletion request that applies to every registered data broker instead of contacting each company individually.

The system follows the Delete Act, passed in 2023, and is intended to create a single control point for consumer data removal.

Once a resident submits a request, data brokers must begin processing it from August 2026 and will have 90 days to act. Residents whose data is not deleted may need to provide extra identifying details.

First-party data collected directly by companies can still be retained, while data from public records, such as voter rolls, remains exempt. Highly sensitive data may fall under separate legal protections such as HIPAA.

The California Privacy Protection Agency argues that broader data deletion could reduce identity theft, AI-driven impersonation, fraud risk and unwanted marketing contact.

Penalties for non-compliance include daily fines for brokers who fail to register or ignore deletion orders. The state hopes the tool will make data rights meaningful instead of purely theoretical.

The launch comes as regulators worldwide examine how personal data is used, traded and exploited.

California is positioning itself as a leader in consumer privacy enforcement, while questions continue about how effectively DROP will operate when the deadline arrives in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, not Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and that it would cooperate with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been implicated in deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide for how platforms design and police powerful AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit overtakes TikTok in the UK social media race

Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.

Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.

Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.

Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.

Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments, including ministers, are also using Reddit for direct engagement through public Q&A sessions.

Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.

Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!