EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

The EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for the EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may breach the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

A debate that reflects wider international pressure on governments to address the misuse of generative AI technologies and reinforce safeguards across digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK watchdogs warned over AI risks in financial services

UK regulators and the Treasury face MP criticism over their approach to AI, amid warnings of risks to consumers and financial stability. A new Treasury Select Committee report says authorities have been overly cautious as AI use rapidly expands across financial services.

More than 75% of UK financial firms are already using AI, according to evidence reviewed by the committee, with insurers and international banks leading uptake.

Applications range from automating back-office tasks to core functions such as credit assessments and insurance claims, increasing AI’s systemic importance within the sector.

MPs acknowledge AI’s benefits but warn that readiness for large-scale failures remains insufficient. The committee urges the Bank of England and the FCA to introduce AI-specific stress tests to gauge resilience to AI-driven market shocks.

Further recommendations include more explicit regulatory guidance on AI accountability and faster use of the Critical Third Parties Regime. No AI or cloud providers have been designated as critical, prompting calls for stronger oversight to limit operational and systemic risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

iOS security warnings intensify for older devices

Apple has issued a renewed warning to iPhone users, urging them to install the latest version of iOS to avoid exposure to emerging spyware threats targeting older versions.

Devices running iOS 26 are no longer fully protected by remaining on version 18, even after updating to the latest patch. Apple has indicated that recent attacks exploit vulnerabilities that only the newest operating system can address.

Security agencies in France and the United States recommend regularly powering down smartphones to disrupt certain forms of non-persistent spyware that operate in memory.

A complete shutdown using physical buttons, rather than on-screen controls, is advised as part of a basic security routine, particularly for users who delay major software upgrades.

While restarting alone cannot replace software updates, experts stress that keeping iOS up to date remains the most effective defence against zero-click exploits delivered through everyday apps such as iMessage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gemini flaw exposed Google Calendar data through hidden prompts

A vulnerability in Google Calendar allowed attackers to bypass privacy controls by embedding hidden instructions in standard calendar invitations. The issue exploited how Gemini interprets natural language when analysing user schedules.

Researchers at Miggo found that malicious prompts could be placed inside event descriptions. When Gemini scanned calendar data to answer routine queries, it unknowingly processed the embedded instructions.

The exploit used indirect prompt injection, a technique in which harmful commands are hidden within legitimate content. The AI model treated the text as trusted context rather than a potential threat.

In the proof-of-concept attack, Gemini was instructed to summarise a user’s private meetings and store the information in a new calendar event. The attacker could then access the data without alerting the victim.

Google confirmed the findings and deployed a fix after responsible disclosure. The case highlights growing security risks linked to how AI systems interpret natural language inputs.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour MPs press Starmer to consider UK under-16s social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.

The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, suggesting that early evidence could guide UK policy development rather than prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California moves to halt X AI deepfakes

California has ordered Elon Musk’s AI company xAI to stop creating and sharing non-consensual sexual deepfakes immediately. The move follows a surge in explicit AI-generated images circulating on X.

Attorney General Rob Bonta said xAI’s Grok tool enabled the manipulation of images of women and children without consent. Authorities argue that such activity breaches state decency laws and a new deepfake pornography ban.

The Californian investigation began after researchers found Grok users shared more non-consensual sexual imagery than users of other platforms. xAI introduced partial restrictions, though regulators said the real-world impact remains unclear.

Lawmakers say the case highlights growing risks linked to AI image tools. California officials warned companies could face significant penalties if deepfake creation and distribution continue unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Finnish data breach exposed thousands of patients

A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.

Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.

The breach became the largest criminal investigation in Finland, prompting emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.

Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

French regulator fines Free and Free Mobile €42 million

France’s data protection regulator CNIL has fined telecom operators Free Mobile and Free a combined €42 million over a major customer data breach. The sanctions follow an October 2024 cyberattack that exposed personal data linked to 24 million subscriber contracts.

Investigators found security safeguards were inadequate, allowing attackers to access sensitive personal data, including bank account details. Weak VPN authentication and poor detection of abnormal system activity were highlighted as key failures under the GDPR.

The French regulator also ruled that affected customers were not adequately informed about the risks they faced. Notification emails lacked sufficient detail to explain potential consequences or protective steps, thereby breaching obligations to clearly communicate data breach impacts.

Free Mobile faced an additional penalty for retaining former customer data longer than permitted. Authorities ordered both companies to complete security upgrades and data clean-up measures within strict deadlines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

WordPress AI team outlines SEO shifts

Industry expectations around SEO are shifting as AI agents increasingly rely on existing search infrastructure, according to James LePage, co-lead of the WordPress AI team at Automattic.

Search discovery for AI systems continues to depend on classic signals such as links, authority and indexed content, suggesting no structural break from traditional search engines.

Publishers are therefore being encouraged to focus on semantic markup, schema and internal linking, with AI optimisation closely aligned to established long-tail search strategies.

Future-facing content strategies prioritise clear summaries, ranked information and progressive detail, enabling AI agents to reuse and interpret material independently of traditional websites.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!