EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The inquiry centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy becomes test case for WhatsApp AI chatbot monetisation

Meta has announced a new pricing model for third-party AI chatbots operating on WhatsApp in markets where regulators require the company to permit them, starting with Italy.

From 16 February 2026, developers will be charged about $0.0691 (€0.0572/£0.0498) per AI-generated response that is not a predefined template.

This move follows Italy’s competition authority intervening to force Meta to suspend its ban on third-party AI bots on the WhatsApp Business API, which had taken effect in January and led many providers (like OpenAI, Perplexity and Microsoft) to discontinue their chatbots on the platform.

Meta says the fee applies only where legally required to open chatbot access, and this pricing may set a precedent if other markets compel similar access.

WhatsApp already charges businesses for ‘template’ API messages (e.g. notifications, authentication), but this is the first instance of explicit charges tied to AI responses, potentially leading to high costs for high-volume chatbot usage.
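To illustrate why per-response pricing can add up quickly for high-volume chatbots, here is a minimal cost sketch. It assumes the reported rate of $0.0691 per non-template AI response; the daily volumes are hypothetical examples, not Meta figures.

```python
# Illustrative cost sketch for WhatsApp's reported per-response AI fee.
# The rate comes from the article; the volume figures are made-up examples.

FEE_PER_RESPONSE_USD = 0.0691  # reported charge per non-template AI reply


def monthly_cost(responses_per_day: int, days: int = 30) -> float:
    """Estimated monthly charge for a bot sending non-template AI replies."""
    return responses_per_day * days * FEE_PER_RESPONSE_USD


for volume in (1_000, 10_000, 100_000):
    print(f"{volume:>7} responses/day -> ${monthly_cost(volume):,.2f}/month")
```

Even a modest bot handling 1,000 AI replies a day would face a bill in the low thousands of dollars per month, which is why high-volume providers are watching the Italian precedent closely.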

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Historic digital assets regulation bill approved by US Senate committee for the first time

The US Senate Agriculture Committee has voted along party lines to advance legislation on the cryptocurrency market structure, marking the first time such a bill has cleared a Senate committee.

The Digital Commodity Intermediaries Act passed with 12 Republicans voting in favour and 11 Democrats opposing, representing a significant development for digital asset regulation in the United States.

The legislation would grant the Commodity Futures Trading Commission new regulatory authority over digital commodities and establish consumer protections, including safeguards against conflicts of interest.

Chairman John Boozman proceeded with the bill after losing bipartisan support when Senator Cory Booker withdrew backing for the version presented. The Senate Banking Committee must approve the measure before the two versions can be combined and advanced to the Senate floor.

Democrats raised concerns about the legislation, particularly regarding President Donald Trump’s cryptocurrency ventures. Senator Booker stated the bill departed from bipartisan principles established in November, noting Republicans ‘walked away’ from previous agreements.

Democrats offered amendments to ban public officials from engaging in the crypto industry and to address foreign-adversary involvement in digital commodities, but all were rejected as outside the committee’s jurisdiction.

Senator Gillibrand expressed optimism about the bill’s advancement, whilst Boozman called the vote ‘a critical step towards creating clear rules’. The Senate Banking Committee’s consideration was postponed following opposition from the crypto industry, with no new hearing date set.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands faces rising digital sovereignty threat, data authority warns

The Dutch data protection authority has urged the government to act swiftly to protect the country’s digital sovereignty, warning that dependence on overseas technology firms could expose vital public services to significant risk.

Concern has intensified after DigiD, the national digital identity system, appeared set for acquisition by a US company, raising questions about long-term control of key infrastructure.

The watchdog argues that the Netherlands relies heavily on a small group of non-European cloud and IT providers, and stresses that public bodies lack clear exit strategies if foreign ownership suddenly shifts.

Additionally, the watchdog criticises the government for treating digital autonomy as an academic exercise rather than recognising its immediate implications for communication between the state and citizens.

In a letter to the economy minister, the authority calls for a unified national approach rather than fragmented decisions by individual public bodies.

It proposes sovereignty criteria for all government contracts and suggests termination clauses that enable the state to withdraw immediately if a provider is sold abroad. It also notes the importance of designing public services to allow smooth provider changes when required.

The watchdog urges the government to strengthen European capacity by investing in scalable domestic alternatives, including a Dutch-controlled government cloud. The economy ministry has declined to comment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK expands free AI training to reach 10 million workers by 2030

The government has expanded a joint industry programme offering free AI training to every UK adult, with the ambition of upskilling 10 million workers by 2030.

Newly benchmarked courses are available through the AI Skills Hub, giving people practical workplace skills while supporting Britain’s aim to become the fastest AI adopter in the G7.

The programme includes short online courses that teach workers in the UK how to use basic AI tools for everyday tasks such as drafting text, managing content and reducing administrative workloads.

Participants who complete approved training receive a government-backed virtual AI foundations badge, setting a national standard for AI capability across sectors.

Public sector staff, including NHS and local government employees, are among the groups targeted as the initiative expands.

Ministers also announced £27 million in funding to support local tech jobs, graduate traineeships and professional practice courses, alongside the launch of a new cross-government unit to monitor AI’s impact on jobs and labour markets.

Officials argue that widening access to AI skills will boost productivity, support economic growth and help workers adapt to technological change. The programme builds on existing digital skills initiatives and brings together government, industry and trade unions to shape a fair and resilient future of work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pornhub to block new UK users over tougher age-check rules

Pornhub will begin blocking access for new UK users from 2 February 2026, allowing entry only to people who had already created an account and completed age checks before that date. The company framed the move as a protest against how the UK’s Online Safety Act is being enforced.

The UK regime, overseen by Ofcom, requires porn services accessible in Britain to deploy ‘highly effective’ age assurance measures, not simple click-through age gates. Ofcom says traffic to pornography sites has fallen by about a third since the age-check deadline of 25 July 2025, and it has pursued investigations into dozens of services as enforcement ramps up.

Aylo, Pornhub’s parent company, argues the current approach is backfiring: it says users, both adults and minors, are shifting toward non-compliant sites, and it is campaigning for device-based age verification, handled at the operating-system or app-store level rather than through site-by-site checks. In parallel, UK VPN downloads surged after age checks began, underscoring how quickly users can route around country-based controls.

Privacy and security concerns become sharper when adult platforms are turned into identity checkpoints. In December 2025, reporting linked a large leak of Pornhub premium-user analytics data, including emails and viewing/search histories, to a breach involving a third-party analytics provider, underscoring how sensitive such datasets can be when they are collected or retained.

Government and regulator messaging emphasises child protection and the Online Safety Act’s enforcement teeth, including significant penalties and, in extreme cases, access restrictions, while companies like Aylo argue that inconsistent enforcement simply pushes demand to riskier corners of the internet and fuels workarounds like VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Class-action claims challenge WhatsApp end-to-end encryption practices

WhatsApp has dismissed a class-action lawsuit accusing Meta of accessing encrypted messages, calling the claims false. The company reaffirmed that chats remain protected by device-based Signal protocol encryption.

Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.

WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.

The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.

The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s Cyber Centre flags rising ransomware risks for 2025 to 2027

The national cyber authority of Canada has warned that ransomware will remain one of the country’s most serious cyber threats through 2027, as attacks become faster, cheaper and harder to detect.

The Canadian Centre for Cyber Security, part of Communications Security Establishment Canada, says ransomware now operates as a highly interconnected criminal ecosystem driven by financial motives and opportunistic targeting.

According to the outlook, threat actors are increasingly using AI and cryptocurrency while expanding extortion techniques beyond simple data encryption.

Businesses, public institutions and critical infrastructure in Canada remain at risk, with attackers continuously adapting their tactics, techniques and procedures to maximise financial returns.

The Cyber Centre stresses that basic cyber hygiene still provides strong protection. Regular software updates, multi-factor authentication and vigilance against phishing attempts significantly reduce exposure, even as attack methods evolve.

The report also highlights the importance of cooperation between government bodies, law enforcement, private organisations and the public.

Officials conclude that while ransomware threats will intensify over the next two years, early warnings, shared intelligence and preventive measures can limit damage.

Canada’s cyber authorities say continued investment in partnerships and guidance remains central to building national digital resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!