The Irish government plans to fast-track laws allowing heavy fines for AI abuse. The move follows controversy involving misuse of image generation tools.
Ministers will transpose the EU's AI Act into Irish law. The framework defines eight harmful uses that breach rights and public decency.
Penalties could reach €35 million or seven percent of global annual turnover. AI systems would be graded by risk under the enforcement regime.
A dedicated AI office is expected to launch by August to oversee compliance. Irish and UK leaders have pressed platforms to curb harmful AI features.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A cyberattack on Kensington and Chelsea Council in the UK has exposed the growing vulnerability of public sector organisations to data breaches. The council said that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted shared IT infrastructure.
Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.
Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.
Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.
Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future incidents and protect critical operations.
The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.
EU Tech Sovereignty Commissioner Henna Virkkunen has stated that investigators have already taken action under the Digital Services Act. Failure to comply could result in enforcement measures being taken against the platform.
X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.
UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.
A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.
The issue comes from Telegram’s automatic proxy testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses all privacy protections and reveals the device’s real IP address.
Cybersecurity researcher @0x6rss demonstrated the attack in a post on X, showing that a single click is enough to log a victim's real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.
‘ONE-CLICK TELEGRAM IP ADDRESS LEAK! In this issue, the secret key is irrelevant. Just like NTLM hash leaks on Windows, Telegram automatically attempts to test the proxy; the secret key does not matter and the IP address is exposed.’
Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
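The mechanics are easy to see in a short Python sketch. Telegram supports MTProto proxy deep links of the form `t.me/proxy?server=…&port=…&secret=…`; the `server` and `port` fields, which the attacker controls, determine where the client's automatic proxy test connects. The link and values below are hypothetical illustrations, not taken from the researcher's demonstration:

```python
from urllib.parse import urlparse, parse_qs

def parse_proxy_link(link: str) -> dict:
    """Extract the attacker-controlled fields from a Telegram MTProto
    proxy deep link (t.me/proxy or tg://proxy). A malicious link points
    'server' and 'port' at a logging server, so the client's automatic
    proxy test reveals the device's real IP address to the attacker."""
    params = parse_qs(urlparse(link).query)
    return {
        "server": params.get("server", [None])[0],  # attacker's host
        "port": params.get("port", [None])[0],      # attacker's port
        "secret": params.get("secret", [None])[0],  # irrelevant to the leak
    }

# A link that reads like an ordinary t.me URL but triggers a proxy test
# (203.0.113.10 is a documentation-reserved example address)
link = "https://t.me/proxy?server=203.0.113.10&port=443&secret=dd00"
print(parse_proxy_link(link))
```

Because the link looks like any other t.me URL, a victim has no visual cue that clicking it will open a direct connection to the host named in `server`.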
Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.
Spain’s government has approved draft legislation that would tighten consent rules for AI-generated content, aiming to curb deepfakes and strengthen protections for the use of people’s images and voices. The proposal responds to growing concerns in Europe about AI being used to create harmful material, especially sexual content produced without the subject’s permission.
Under the draft, the minimum age to consent to the use of one’s own image would be set at 16, and stricter limits would apply to reusing images found online or reproducing a person’s voice or likeness through AI without authorisation. Spain’s Justice Minister Félix Bolaños warned that sharing personal photos on social media should not be treated as blanket approval for others to reuse them in different contexts.
The reform explicitly targets commercial misuse by classifying the use of AI-generated images or voices for advertising or other business purposes without consent as illegitimate. At the same time, it would still allow creative, satirical, or fictional uses involving public figures, so long as the material is clearly labelled as AI-generated.
Spain’s move aligns with broader EU efforts, as the bloc is working toward rules that would require member states to criminalise non-consensual sexual deepfakes by 2027. The push comes amid rising scrutiny of AI tools and real-world cases that have intensified calls for more precise legal boundaries, including a recent request by the Spanish government for prosecutors to review whether specific AI-generated material could fall under child pornography laws.
The bill is not yet final. It must go through a public consultation process before returning to the government for final approval and then heading to parliament.
Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.
The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.
The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.
No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.
Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.
Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and begin a formal investigation with external cybersecurity support.
Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.
The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.
Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.
The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.
X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.
eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.
Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.
Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.
Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.
A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.
Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.
Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.
Some manufacturers are positioning AI toys as educational tools, for example, language-learning companions with time-limited, guided chat interactions; others have built in flags to alert parents when inappropriate content arises.
Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.
Rising living costs and economic instability are the biggest worries for young people worldwide. A World Economic Forum survey shows inflation dominates personal and global concerns.
Many young people fear that AI-driven automation will shrink entry-level job opportunities. Two-thirds expect fewer early-career roles despite growing engagement with AI tools.
Nearly 60 per cent already use AI to build skills and improve employability. Side hustles and freelance work are increasingly common responses to economic pressure.
Youth respondents call for quality jobs, better education access and affordable housing. Climate change also ranks among the most serious long-term global risks.
The EU has agreed to open talks with the US on sharing sensitive traveller data. The discussions aim to preserve visa-free travel for European citizens.
The proposal is called ‘Enhanced Border Security Partnership’, and it could allow transfers of biometric data and other sensitive personal information. Legal experts warn that unclear limits may widen access beyond travellers alone.
EU governments have authorised the European Commission to negotiate a shared framework. Member states would later settle details through bilateral agreements with Washington.
Academics and privacy advocates are calling for stronger safeguards and transparency. EU officials insist data protection limits will form part of any final agreement.