One-click vulnerability in Telegram bypasses VPN and proxy protection

A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.

The issue stems from Telegram’s automatic proxy-testing process. When a user taps a disguised proxy link, the app opens a direct connection to the server named in the link, sidestepping any active VPN or proxy and revealing the device’s real IP address.

Cybersecurity researcher @0x6rss demonstrated an attack on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.

Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
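
The mechanics are simple to reason about: any server an app connects to directly can read the source address of the incoming socket, and Telegram proxy links typically take the form t.me/proxy?server=HOST&port=PORT&secret=SECRET. The minimal sketch below, assuming a hypothetical listener on port 8443, shows how little an attacker-controlled ‘proxy’ endpoint needs to do to capture that address once the app’s automatic proxy test reaches out to it.

```python
import socket
from datetime import datetime, timezone

# Minimal sketch of an attacker-controlled "proxy" endpoint (illustrative only).
# Telegram's automatic proxy test opens a direct TCP connection to the server
# named in the link, so the listener sees the client's real public IP address
# even if a VPN or SOCKS5 proxy is configured for normal traffic.
HOST, PORT = "0.0.0.0", 8443  # hypothetical values for this sketch

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, (client_ip, client_port) = server.accept()
        # Simply accepting the connection is enough to capture the address.
        print(f"{datetime.now(timezone.utc).isoformat()} "
              f"connection from {client_ip}:{client_port}")
        conn.close()
```

Because the test connection is made by the app itself rather than routed through the configured proxy, blocking it requires network-level rules, such as forcing all outbound traffic through the VPN tunnel, which is why researchers point to firewall controls rather than in-app settings alone.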

Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stranger Things fans question AI use in show finale’s script

The creators of Stranger Things have been accused by some fans of using ChatGPT while writing the show’s fifth and final season, following the release of a behind-the-scenes Netflix documentary.

The series ended on New Year’s Eve with a two-hour finale that saw (SPOILER WARNING) Vecna defeated and Eleven apparently sacrificing herself. The ambiguous ending divided viewers, with some disappointed by the lack of closure.

A documentary titled One Last Adventure: The Making Of Stranger Things 5 was released shortly after the finale. One scene showing Matt and Ross Duffer working on scripts drew attention after a screenshot circulated online.

Some viewers claimed a ChatGPT-style tab was visible on a laptop screen. Others questioned the claim, noting the footage may predate the chatbot’s mainstream use.

Netflix has since confirmed two spin-offs are in development, including a new live-action series and an animated project titled Stranger Things: Tales From ’85.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rokid launches screenless AI smart glasses at CES 2026

Rokid, a global pioneer in AR, unveiled its new Style smart glasses at CES 2026, opting for a screenless, voice-first design instead of the visual displays standard across competing devices.

Weighing just 38.5 grams, the glasses are designed for everyday wear, with an emphasis on comfort and prescription readiness.

Despite lacking a screen, Rokid Style integrates AI through an open ecosystem that supports platforms such as ChatGPT, DeepSeek and Qwen. Global services, including Google Maps and Microsoft AI Translation, facilitate navigation and provide real-time language assistance across various regions.

The device adopts a prescription-first approach, supporting lenses from plano to ±15.00 diopters alongside photochromic, tinted and protective options.

Rokid has also launched a global online prescription service, promising delivery within seven to ten days.

Design features include titanium alloy hinges, silicone nose pads and a built-in camera capable of 4K video recording.

Battery life reaches up to 12 hours of daily use, with global pricing starting at $299, ahead of an online launch scheduled for January 19.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Lynx ransomware group claims Regis subsidiary on dark web leak site

Regis Resources, one of Australia’s largest unhedged gold producers, has confirmed it is investigating a cyber incident after its subsidiary was named on a dark web leak site operated by a ransomware group.

The Lynx ransomware group listed McPhillamys Gold on Monday, claiming a cyberattack and publishing the names and roles of senior company executives. The group did not provide technical details or evidence of data theft.

The Australia-based company stated that the intrusion was detected in mid-November 2025 through its routine monitoring systems, prompting temporary restrictions on access to protect internal networks. The company said its cybersecurity controls were designed to isolate threats and maintain business continuity.

A forensic investigation found no evidence of data exfiltration and confirmed that no ransom demand had been received. Authorities were notified, and Regis said the incident had no operational or commercial impact.

Lynx, which first emerged in July 2024, has claimed hundreds of victims worldwide. The group says it avoids targeting critical public services, though it continues to pressure private companies through data leak threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI earbuds go beyond music

Startups are transforming everyday earbuds into AI assistants that can record meetings, translate languages, or offer cross-platform support, expanding the devices’ role beyond music. Major tech firms, such as Apple and Samsung, laid the groundwork with noise-cancelling and voice features.

At CES, companies such as OSO, Viaim and Timekettle demonstrated professional and educational use cases. Schools utilise translation earbuds to assist non-English-speaking students in following lessons, while professionals can retrieve meeting highlights on demand.

Experts note that earbuds are more accessible than smart glasses, but remain limited by voice-only interaction and reliance on smartphones. Neural earbuds with sensitive sensors could enable hands-free control or internet access for individuals with disabilities.

Although most headphones today still focus on listening, AI earbuds hint at a shift in personal technology, blending convenience, intelligence and accessibility into devices people already wear every day.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare systems face mounting risk from CrazyHunter ransomware

CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.

The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.

Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.

Once deployed, CrazyHunter disables security tools to conceal its activity before encrypting files. Analysts note extensive evasion techniques, including memory-based execution, as well as redundant mechanisms intended to ensure the payload is delivered and executed.

CrazyHunter employs a hybrid encryption scheme that combines ChaCha20 and elliptic curve cryptography, encrypting only part of each file to speed up the attack. Encrypted files receive a ‘.Hunter’ extension, and recovery depends on the attackers’ private keys, reinforcing the pressure to pay ransoms.
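
As a conceptual illustration of why that design leaves victims dependent on the attackers, the sketch below shows a generic hybrid scheme of the kind described: an ephemeral X25519 key agreement derives a symmetric key that encrypts only the first portion of the data. It is a minimal sketch using Python’s cryptography package and the ChaCha20-Poly1305 AEAD construction, with illustrative sizes and labels, not CrazyHunter’s actual code.

```python
# Minimal sketch of a generic ChaCha20 + elliptic-curve hybrid scheme with
# partial encryption (illustrative only, not CrazyHunter's implementation).
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

PARTIAL_BYTES = 1_048_576  # hypothetical: encrypt only the first 1 MiB


def hybrid_encrypt(data: bytes, recipient_public: X25519PublicKey):
    """Encrypt the head of `data` so only the recipient's private key can recover it."""
    ephemeral = X25519PrivateKey.generate()        # fresh key pair per file
    shared = ephemeral.exchange(recipient_public)  # ECDH shared secret
    key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"hybrid-demo").derive(shared)
    nonce = os.urandom(12)
    head, tail = data[:PARTIAL_BYTES], data[PARTIAL_BYTES:]
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, head, None)
    ephemeral_pub = ephemeral.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    # The ephemeral public key ships alongside the ciphertext; re-deriving
    # `key` (and thus decrypting) requires the matching static private key,
    # which only the attacker holds.
    return ephemeral_pub, nonce, ciphertext + tail
```

Decryption mirrors the exchange: the holder of the static private key combines it with the stored ephemeral public key to re-derive the same symmetric key, which is precisely the leverage a ransom demand relies on.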

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI assistant and cheaper autonomy headline Ford’s CES 2026 announcements

Ford has unveiled plans for an AI assistant that will launch in its smartphone app in early 2026 before expanding to in-vehicle systems in 2027. The announcement was made at the 2026 Consumer Electronics Show, alongside a preview of a next-generation BlueCruise driver assistance system.

The AI assistant will be hosted on Google Cloud and built using existing large language models, with access to vehicle-specific data. Ford said this will allow users to ask both general questions, such as vehicle capacity, and real-time queries, including oil life and maintenance status.
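
A minimal sketch helps illustrate the split between those two query types. Everything below is hypothetical: the data, the read_telemetry helper and the keyword routing are stand-ins for the model-driven tool calling a production assistant would use, not Ford’s implementation.

```python
# Hypothetical sketch of an LLM-grounded vehicle assistant answering static
# spec questions and real-time telemetry queries (all values invented).
VEHICLE_SPECS = {                       # static, model-level facts
    "seating_capacity": "5 passengers",
    "towing_capacity": "4,000 lb",
}

def read_telemetry(signal: str) -> str:
    """Stand-in for a live vehicle-data lookup (hypothetical values)."""
    live = {"oil_life": "62% remaining", "tyre_pressure": "35 psi, all corners"}
    return live.get(signal, "signal unavailable")

def answer(question: str) -> str:
    # A real assistant would let the language model choose the data source
    # (tool / function calling); the keyword routing here is only a placeholder.
    q = question.lower()
    if "oil" in q:
        return f"Oil life: {read_telemetry('oil_life')}"
    if "seat" in q or "capacity" in q:
        return f"Seating capacity: {VEHICLE_SPECS['seating_capacity']}"
    return "General question: forward to the language model with vehicle context."

print(answer("How many seats does my car have?"))   # static spec
print(answer("What's my oil life?"))                # real-time telemetry
```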

Ford plans to introduce the assistant first through its redesigned mobile app, with native integration into vehicles scheduled for 2027. The company has not yet specified which models will receive the in-car version first, but said the rollout would expand gradually across its lineup.

Alongside the AI assistant, the vehicle manufacturer previewed an updated version of its BlueCruise system, which it claims will be more affordable to produce and more capable. The new system is expected to debut in 2027 on the first electric vehicle built on Ford’s low-cost Universal Electric Vehicle platform.

Ford said the next-generation BlueCruise could support eyes-off driving by 2028 and enable point-to-point autonomous driving under driver supervision. As with similar systems from other automakers, drivers will still be required to remain ready to take control at any time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New UK cyber strategy focuses on trust in online public services

The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.

Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.

The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.

The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.

Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!