EU warns X over Grok AI image abuse

The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.

EU Tech Sovereignty Commissioner Henna Virkkunen has stated that investigators have already taken action under the Digital Services Act. Failure to comply could trigger enforcement measures against the platform.

X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.

UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

One-click vulnerability in Telegram bypasses VPN and proxy protection

A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.

The issue comes from Telegram’s automatic proxy testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses all privacy protections and reveals the device’s real IP address.

Cybersecurity researcher @0x6rss demonstrated the attack in a post on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.

Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
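Telegram’s actual proxy handshake is more involved, but the core of the leak can be sketched with a plain TCP listener: whatever host the app contacts for its “proxy test” sees the source address of the connecting socket, tunnel or no tunnel. A minimal illustration using only Python’s standard library (the helper name `run_logging_server` is invented for this sketch):

```python
import socket
import threading

def run_logging_server(host="127.0.0.1"):
    """Minimal server that records the source IP of any incoming
    connection, mimicking what an attacker-controlled 'proxy'
    endpoint would observe."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0 = pick a free ephemeral port
    srv.listen(1)
    logged = []

    def accept_once():
        conn, addr = srv.accept()
        logged.append(addr[0])   # addr is (ip, port) of the client
        conn.close()

    worker = threading.Thread(target=accept_once, daemon=True)
    worker.start()
    return srv, srv.getsockname()[1], logged, worker

server, port, logged_ips, worker = run_logging_server()

# The "victim" app tests the proxy by connecting to it directly --
# a direct connection like this is exactly what leaks the real IP.
client = socket.create_connection(("127.0.0.1", port))
client.close()

worker.join(timeout=2)
server.close()
print(logged_ips)
```

The point of the sketch is that no data needs to be exchanged: the mere act of opening the connection hands the server the client’s address, which is why a single click on a disguised proxy link is sufficient.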

Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.

Malta plans tougher laws against deepfake abuse

Malta’s government is preparing new legal measures to curb the abusive use of deepfake technology, with existing laws now under review. The planned reforms aim to introduce penalties for the misuse of AI in cases of harassment, blackmail, and bullying.

The move mirrors earlier cyberbullying and cyberstalking laws, extending similar protections to AI-generated content. Authorities are promoting AI while stressing the need for strong public safety and legal safeguards.

AI and youth participation were the main themes of the National Youth Parliament meeting, where Prime Minister Robert Abela highlighted the role of young people in shaping Malta’s long-term development strategy, Vision Malta 2050.

The strategy focuses on the next 25 years and directly affects those entering the workforce or starting families.

Young people were described as key drivers of national policy in areas such as fertility, environmental protection, and work-life balance. Senior officials and members of the Youth Advisory Forum attended the meeting.

Betterment confirms data breach after social engineering attack

Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.

The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.

The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.

No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.

Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.

Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and open a formal investigation with external cybersecurity support.

Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.

AI reshapes Europe’s labour market outlook

European labour markets are showing clear signs of cooling after a brief period of employee leverage during the pandemic.

Slower industrial growth, easing wage momentum and increased adoption of AI are encouraging firms to limit hiring rather than expand headcount, while workers are becoming more cautious about changing jobs.

Economic indicators suggest employment growth across the EU will slow over the coming years, with fewer vacancies and stabilising migration flows reducing labour market dynamism.

Germany, France, the UK and several central and eastern European economies are already reporting higher unemployment expectations, particularly in manufacturing sectors facing high energy costs and weaker global demand.

Despite broader caution, labour shortages persist in specific areas such as healthcare, logistics, engineering and specialised technical roles.

Southern European countries benefiting from tourism and services growth continue to generate jobs, highlighting uneven recovery patterns instead of a uniform downturn across the continent.

Concerns about automation are further shaping behaviour, as surveys indicate growing anxiety over AI reshaping roles rather than eliminating work.

Analysts expect AI to transform job structures and skill requirements, prompting workers and employers alike to prioritise adaptability instead of rapid expansion.

Multiply Labs targets automation in cell therapy manufacturing

Robotics firm Multiply Labs is introducing automation into cell therapy manufacturing to cut costs by more than 70% and increase output. The startup applies industrial robotics to clean-room environments, replacing slow and contamination-prone manual processes.

Founded in 2016, the San Francisco-based company collaborates with leading cell therapy developers, including Kyverna Therapeutics and Legend Biotech. Its robotic systems perform sterile, precision tasks involved in producing gene-modified cell therapies at scale.

Multiply Labs uses NVIDIA Omniverse to create digital twins of laboratory environments and Isaac Sim to train robots for specialised workflows. Humanoid robots built on NVIDIA’s Isaac GR00T model are also being developed to assist with material handling while maintaining hygiene standards.

Cell therapies involve modifying patient or donor cells to treat various conditions, including cancers, autoimmune diseases, and genetic disorders. The highly customised nature of these treatments makes production costly and sensitive to human error, increasing the risk of failed batches.

By automating thousands of delicate steps, robotics improves consistency, reduces contamination, and preserves expert knowledge. Multiply Labs says automation could enable mass production of life-saving therapies at lower cost and with greater availability.

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Concerns grow over planned EU-US biometrics deal

The EU has agreed to open talks with the US on sharing sensitive traveller data. The discussions aim to preserve visa-free travel for European citizens.

The proposal is called the ‘Enhanced Border Security Partnership’, and it could allow transfers of biometric data and other sensitive personal information. Legal experts warn that unclear limits may widen access beyond travellers alone.

EU governments have authorised the European Commission to negotiate a shared framework. Member states would later settle details through bilateral agreements with Washington.

Academics and privacy advocates are calling for stronger safeguards and transparency. EU officials insist data protection limits will form part of any final agreement.

Teen victim turns deepfake experience into education

A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.

The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.

Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.

Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.

Patients notified months after Canopy Healthcare cyber incident

Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.

The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.

Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.

Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.

The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.
