Grok controversy fuels political backlash in Northern Ireland

A Northern Ireland politician, Cara Hunter of the Social Democratic and Labour Party (SDLP), has quit X after renewed concerns over Grok AI misuse. She cited failures to protect women and children online.

The decision follows criticism of Grok AI features that enabled the creation of non-consensual sexualised images. UK regulators have launched investigations under online safety laws.

UK ministers plan to criminalise creating intimate deepfakes and supplying related tools. Ofcom is examining whether X breached its legal duties.

Political leaders and rights groups say enforcement must go further. X says it removes illegal content and has restricted Grok's image functions on the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools influence modern personal finance practices

Personal finance assistants powered by AI tools are increasingly helping users manage budgets, analyse spending, and organise financial documents. Popular platforms such as ChatGPT, Google Gemini, Microsoft Copilot, and Claude now offer features designed to support everyday financial tasks.

Rather than focusing on conversational style, users should consider how financial data is accessed and how each assistant integrates with existing systems. Connections to spreadsheets, cloud storage, and secure platforms often determine how effective AI tools are for managing financial workflows.
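
As a rough illustration of that kind of integration, the sketch below reads transactions exported from a spreadsheet as CSV and hands them to an assistant for categorisation. The `ask_assistant` helper, the file name, and the column names are hypothetical stand-ins, not any specific platform's API.

```python
import csv

def ask_assistant(prompt: str) -> str:
    # Hypothetical stand-in for a call to an AI assistant's API;
    # replace with the SDK of whichever provider is in use.
    return "[assistant reply would appear here]"

# Read transactions exported from a budgeting spreadsheet.
with open("transactions.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # assumes Date, Payee, Amount columns

# Build a single prompt asking the assistant to categorise spending.
listing = "\n".join(f"{r['Date']} {r['Payee']} {r['Amount']}" for r in rows)
prompt = (
    "Categorise each transaction as groceries, transport, housing or other, "
    "then total each category:\n" + listing
)
print(ask_assistant(prompt))
```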

ChatGPT is commonly used for drafting financial summaries, analysing expenses, and creating custom tools through plugins. Google Gemini is closely integrated with Google Docs and Sheets, making it suitable for users who rely on Google’s productivity ecosystem.

Microsoft Copilot provides strong automation for Excel and Microsoft 365 users, with administrative controls that appeal to organisations. Claude focuses on safety and large context windows, allowing it to process lengthy financial documents with more conservative output.

Choosing the most suitable AI tools for personal finance depends on workflow needs, data governance preferences, and privacy considerations. No single platform dominates every use case; each offers strengths across different financial management tasks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland moves to fast-track AI abuse fines

The Irish government plans to fast-track laws allowing heavy fines for AI abuse. The move follows controversy involving misuse of image generation tools.

Ministers will transpose the EU AI Act into Irish law. The framework defines eight prohibited AI practices that breach fundamental rights and public decency.

Penalties could reach €35 million or seven percent of global annual turnover. AI systems would be graded by risk under the enforcement regime.
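
For a sense of scale, the sketch below works through the threshold arithmetic, assuming the EU AI Act's "whichever is higher" rule carries over into the Irish regime:

```python
def max_fine(turnover_eur: float) -> float:
    """Upper bound on a fine for a prohibited practice, assuming the
    EU AI Act rule: EUR 35 million or 7% of global annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * turnover_eur)

print(max_fine(100_000_000))    # smaller firm: cap stays at EUR 35,000,000
print(max_fine(2_000_000_000))  # large firm: the 7% rule gives EUR 140,000,000
```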

A dedicated AI office is expected to launch by August to oversee compliance. Irish and UK leaders have pressed platforms to curb harmful AI features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Government IT vulnerabilities revealed by UK public sector cyberattack

A UK public sector cyberattack on Kensington and Chelsea Council has exposed the growing vulnerability of government organisations to data breaches. The council stated that personal details linked to hundreds of thousands of residents may have been compromised after attackers targeted its shared IT infrastructure.

Security experts warn that interconnected systems, while cost-efficient, create systemic risks. Dray Agha, senior manager of security operations at Huntress, said a single breach can quickly spread across partner organisations, disrupting essential services and exposing sensitive information.

Public sector bodies remain attractive targets due to ageing infrastructure and the volume of personal data they hold. Records such as names, addresses, national ID numbers, health information, and login credentials can be exploited for fraud, identity theft, and large-scale scams.

Gregg Hardie, public sector regional vice president at SailPoint, noted that attackers often employ simple, high-volume tactics rather than sophisticated techniques. Compromised credentials allow criminals to blend into regular activity and remain undetected for long periods before launching disruptive attacks.

Hardie said stronger identity security and continuous monitoring are essential to prevent minor intrusions from escalating. Investing in resilient, segmented systems could help reduce the impact of future cyberattacks on the UK public sector and protect critical operations.
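
As a toy illustration of the continuous-monitoring idea (not any product's implementation), the sketch below flags logins that fall outside an account's usual hours; the baseline, event format, and threshold are assumptions:

```python
from collections import defaultdict

# Each account gets a baseline of expected login hours; anything outside
# that window is surfaced for review rather than silently accepted.
usual_hours = defaultdict(lambda: range(8, 18))  # assumed 08:00-18:00 baseline

def looks_anomalous(event: dict) -> bool:
    """Return True if the login falls outside this account's usual hours."""
    return event["hour"] not in usual_hours[event["user"]]

events = [
    {"user": "clerk01", "hour": 10},
    {"user": "clerk01", "hour": 3},   # off-hours login: worth a second look
]
for e in events:
    if looks_anomalous(e):
        print(f"ALERT: unusual login for {e['user']} at {e['hour']}:00")
```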

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU warns X over Grok AI image abuse

The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.

EU Tech Sovereignty Commissioner Henna Virkkunen said investigators have already taken action under the Digital Services Act, and that failure to comply could trigger enforcement measures against the platform.

X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.

UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

One-click vulnerability in Telegram bypasses VPN and proxy protection

A newly identified vulnerability in Telegram’s mobile apps allows attackers to reveal users’ real IP addresses with a single click. The flaw, known as a ‘one-click IP leak’, can expose location and network details even when VPNs or proxies are enabled.

The issue comes from Telegram’s automatic proxy testing process. When a user clicks a disguised proxy link, the app initiates a direct connection request that bypasses all privacy protections and reveals the device’s real IP address.

Cybersecurity researcher @0x6rss demonstrated the flaw in a post on X, showing that a single click is enough to log a victim’s real IP address. The request behaves similarly to known Windows NTLM leaks, where background authentication attempts expose identifying information without explicit user consent.

Attackers can embed malicious proxy links in chats or channels, masking them as standard usernames. Once clicked, Telegram silently runs the proxy test, bypasses VPN or SOCKS5 protections, and sends the device’s real IP address to the attacker’s server, enabling tracking, surveillance, or doxxing.
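
The core mechanic is simple to illustrate. The sketch below is a hypothetical attacker-side listener, not Telegram's code: whatever host the client contacts for its proxy test sees the source IP of the incoming connection, and if that test bypasses the VPN tunnel, the address it sees is the device's real one. The port and behaviour here are assumptions for illustration.

```python
import socket

# Hypothetical attacker-controlled "proxy" endpoint: it performs no proxying,
# it only records the source IP of whoever connects for the proxy test.
HOST, PORT = "0.0.0.0", 1080  # 1080 is the conventional SOCKS5 port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        print(f"Proxy test received from real IP: {addr[0]}")
        conn.close()
```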

Both Android and iOS versions are affected, putting millions of privacy-focused users at risk. Researchers recommend avoiding unknown links, turning off automatic proxy detection where possible, and using firewall tools to block outbound proxy tests. Telegram has not publicly confirmed a fix.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Spanish bill targets AI misuse of images and voices

Spain’s government has approved draft legislation that would tighten consent rules for AI-generated content, aiming to curb deepfakes and strengthen protections for the use of people’s images and voices. The proposal responds to growing concerns in Europe about AI being used to create harmful material, especially sexual content produced without the subject’s permission.

Under the draft, the minimum age to consent to the use of one’s own image would be set at 16, and stricter limits would apply to reusing images found online or reproducing a person’s voice or likeness through AI without authorisation. Spain’s Justice Minister Félix Bolaños warned that sharing personal photos on social media should not be treated as blanket approval for others to reuse them in different contexts.

The reform explicitly targets commercial misuse by classifying the use of AI-generated images or voices for advertising or other business purposes without consent as illegitimate. At the same time, it would still allow creative, satirical, or fictional uses involving public figures, so long as the material is clearly labelled as AI-generated.

Spain’s move aligns with broader EU efforts, as the bloc works toward rules that would require member states to criminalise non-consensual sexual deepfakes by 2027. The push comes amid rising scrutiny of AI tools and real-world cases that have intensified calls for clearer legal boundaries, including a recent request by the Spanish government for prosecutors to review whether specific AI-generated material could fall under child pornography laws.

The bill is not yet final. It must go through a public consultation process before returning to the government for final approval and then heading to parliament.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Betterment confirms data breach after social engineering attack

Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.

The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.

The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.

No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.

Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.

Customers were advised to ignore the messages, while Betterment moved quickly to revoke the unauthorised access and begin a formal investigation with external cybersecurity support.

Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, such as language-learning companions with time-limited, guided chat interactions, while others have built in flags to alert parents when inappropriate content arises.
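
As an illustration of what such a parental flag might look like in its simplest form, the sketch below checks each generated reply against a blocklist before it is spoken and queues an alert for a parent app when a match occurs; the topics, fallback line, and queue are illustrative assumptions, not any vendor's implementation.

```python
BLOCKED_TOPICS = {"weapon", "drugs", "home address"}  # illustrative only

def vet_reply(reply: str, parent_alerts: list[str]) -> str:
    """Pass the reply through if clean; otherwise flag it for the
    parent app and substitute a safe fallback line."""
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        parent_alerts.append(f"Flagged toy reply: {reply!r}")
        return "Hmm, let's talk about something else!"
    return reply

alerts: list[str] = []
print(vet_reply("Dinosaurs lived millions of years ago!", alerts))
print(vet_reply("I can tell you where to find a weapon.", alerts))
print(alerts)  # the parent app would read from this queue
```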

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!