The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.
EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules already allow regulators to respond to risks linked to AI-driven nudification tools.
Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.
While no formal investigation has yet been launched, the Commission is examining whether X may have breached the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.
Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.
The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.
The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and to reinforce safeguards across digital platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
India’s first AI-generated travel influencer, Radhika Subramaniam, has begun attracting sustained audience engagement since her launch in mid-2025, signalling growing acceptance of virtual creators in travel content.
Developed by Collective Artists Network, a talent management company based in India, Radhika initially drew attention through curiosity, but followers increasingly interacted with her posts in ways similar to those of human influencers, according to the company’s leadership.
Industry observers say AI travel influencers offer brands greater efficiency, lower production costs, and more control over storytelling, as virtual creators can be deployed without logistical constraints.
Some creators remain sceptical about whether artificial personas can replicate the emotional authenticity and sensory experiences that shape real-world travel storytelling.
Marketing specialists expect AI and human influencers to coexist, with virtual avatars serving as consistent brand voices while human creators retain value through spontaneity, trust, and personal perspective.
UK regulators and the Treasury face MP criticism over their approach to AI, amid warnings of risks to consumers and financial stability. A new Treasury Select Committee report says authorities have been overly cautious as AI use rapidly expands across financial services.
More than 75% of UK financial firms are already using AI, according to evidence reviewed by the committee, with insurers and international banks leading uptake.
Applications range from automating back-office tasks to core functions such as credit assessments and insurance claims, increasing AI’s systemic importance within the sector.
MPs acknowledge AI’s benefits but warn that readiness for large-scale failures remains insufficient. The committee urges the Bank of England and the FCA to introduce AI-specific stress tests to gauge resilience to AI-driven market shocks.
Further recommendations include more explicit regulatory guidance on AI accountability and faster use of the Critical Third Parties Regime. No AI or cloud providers have yet been designated as critical, prompting calls for stronger oversight to limit operational and systemic risk.
Apple has issued a renewed warning to iPhone users, urging them to install the latest version of iOS to avoid exposure to emerging spyware threats targeting older versions.
Devices that remain on iOS 18 are no longer fully protected, even after installing the latest patch for that version. Apple has indicated that recent attacks exploit vulnerabilities that only iOS 26, the newest operating system, can address.
Security agencies in France and the United States recommend regularly powering down smartphones to disrupt certain forms of non-persistent spyware that operate in memory.
A complete shutdown using physical buttons, rather than on-screen controls, is advised as part of a basic security routine, particularly for users who delay major software upgrades.
While restarting alone cannot replace software updates, experts stress that keeping iOS up to date remains the most effective defence against zero-click exploits delivered through everyday apps such as iMessage.
Adobe says generative AI is rapidly reshaping India’s creator economy, with 97% of surveyed creators reporting a positive impact. The findings come from the company’s inaugural Creators’ Toolkit Report, covering more than 16,000 creators worldwide.
Adoption levels in India are among the highest globally, with almost all creators reporting that AI tools are embedded in their daily workflows. AI is commonly used for editing, content enhancement, asset generation and idea development across video, image and social media formats.
Despite enthusiasm, concerns remain around trust and transparency. Many creators fear their work may be used to train AI models without consent, while cost, unclear training methods and inconsistent outputs also limit wider confidence.
Interest in agentic AI is also growing, with most Indian creators expressing optimism about systems that automate tasks and adapt to personal creative styles. Mobile devices continue to gain importance, with creators expecting phone output to increase further.
Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.
The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.
US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.
Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.
Millions of browser users installed popular extensions that later became spyware as part of a long-running malware operation. Researchers linked over 100 Chrome, Edge and Firefox extensions to the DarkSpectre hacker group.
Attackers kept the extensions legitimate for years before quietly activating malicious behaviour. Hidden code embedded in image files helped the extensions bypass security reviews in official browser stores.
The campaign enabled large-scale surveillance by collecting real-time browsing activity and corporate meeting data. Analysts warn that such information supports phishing, impersonation and corporate espionage.
Experts urge users to remove unused extensions and question excessive permission requests. Regular browser updates and cautious extension management remain essential cyber defences.
Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.
Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment instead of safety and accountability.
Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.
Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.
Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.
Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.
The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.
Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.
MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.
The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.
If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.
Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.
Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.
Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.
While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.
Formal enforcement of the rules will not begin until later this year, extending a grace period for companies that released models after August 2025.
The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.
The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.
Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.