TikTok accused of breaching EU digital safety rules

The European Commission has preliminarily found that TikTok’s design breaches the Digital Services Act by encouraging compulsive use and failing to protect users, particularly children and teenagers.

Preliminary findings say the platform relies heavily on features such as infinite scroll, which automatically delivers new videos and makes disengagement difficult.

Regulators argue that such mechanisms push users into habitual patterns of repeated viewing rather than supporting conscious choice. EU officials found that safeguards introduced by TikTok do not adequately reduce the risks linked to excessive screen time.

Daily screen time limits were described as ineffective because alerts are easy to dismiss, even for younger users who receive automatic restrictions. Parental control tools were also criticised for requiring significant effort, technical knowledge and ongoing involvement from parents.

Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy, said addictive social media design can harm the development of young people. European law, she said, makes platforms responsible for the effects their services have on users.

Regulators concluded that compliance with the Digital Services Act would require TikTok to alter core elements of its product, including its infinite scroll, recommendation systems and screen break features.

TikTok rejected the findings, calling them inaccurate and saying the company would challenge the assessment. The platform argues that it already offers a range of tools, including sleep reminders and wellbeing features, to help users manage their time.

The investigation remains ongoing and no penalties have yet been imposed. A final decision could still result in enforcement measures, including fines of up to six per cent of TikTok’s global annual turnover.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU split widens over ban on AI nudification apps

European lawmakers remain divided over whether AI tools that generate non-consensual sexual images should face an explicit ban in EU legislation.

The split emerged as debate over the AI simplification package intensified, with the file now moving through the Parliament and the Council rather than being settled in earlier negotiations.

Concerns escalated after Grok was used to create images that digitally undressed women and children.

EU regulators responded by launching an investigation under the Digital Services Act, and the Commission described the behaviour as illegal under existing European rules. Several lawmakers argue that the AI Act should name nudification apps directly instead of relying on broader legal provisions.

Lead MEPs did not include a ban in their initial draft of the Parliament’s position, prompting other groups to consider adding amendments. Negotiations continue as parties explore how such a restriction could be framed without creating inconsistencies within the broader AI framework.

The Commission appears open to strengthening the law and has hinted that the AI omnibus could be an appropriate moment to act. Lawmakers now have a limited time to decide whether an explicit prohibition can secure political agreement before the amendment deadline passes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Germany voices unease over tech sovereignty cooperation with France

A senior German official has voiced frustration over joint tech sovereignty efforts with France, describing the experience as disillusioning. The remarks followed a high-profile digital summit hosted by Germany and France in Berlin.

The comments came from German official Luise Hölscher, who said the two countries’ approaches to buying European technology differ sharply: Germany tends to accept solutions from across Europe, while France often favours domestic providers.

Despite the tensions, Hölscher said the disagreement has not damaged the wider Franco-German partnership. Germany is now exploring closer cooperation with other European countries.

The debate unfolds as the EU considers new rules on cloud services and AI procurement. European institutions are weighing how far public bodies should prioritise European suppliers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU tests Matrix protocol as sovereign alternative for internal communication

The European Commission is testing a European open source system for its internal communications as worries grow in Brussels over deep dependence on US software.

A spokesperson said the administration is preparing a solution built on the Matrix protocol instead of relying solely on Microsoft Teams.

Matrix is already used by several European institutions, including the French government, German healthcare bodies and armed forces across the continent.

The Commission aims to deploy it as a complement and backup to Teams rather than a full replacement. Officials noted that Signal currently fills that role but lacks the flexibility needed for an organisation of the Commission’s size.

The initiative forms part of a wider push for digital sovereignty within the EU. A Matrix-based tool could eventually link the Commission with other Union bodies that currently lack a unified secure communication platform.

Officials said there is already an operational connection with the European Parliament.
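
To illustrate what “built on the Matrix protocol” means in practice, here is a minimal sketch (in Python, against the public Matrix client-server API v3) of how any client authenticates with a homeserver and posts a message event into a room; federation between homeservers then relays that event to participants hosted elsewhere. The homeserver URL, username, password and room ID are illustrative placeholders, not details of the Commission’s actual deployment.

```python
# Minimal sketch of the Matrix client-server API (v3).
# All identifiers below are hypothetical placeholders.
import uuid
import requests

HOMESERVER = "https://matrix.example.eu"   # hypothetical homeserver
ROOM_ID = "!abcdef:matrix.example.eu"      # hypothetical room identifier

# 1. Authenticate with username and password to obtain an access token.
login = requests.post(
    f"{HOMESERVER}/_matrix/client/v3/login",
    json={
        "type": "m.login.password",
        "identifier": {"type": "m.id.user", "user": "alice"},
        "password": "example-password",
    },
)
login.raise_for_status()
token = login.json()["access_token"]

# 2. Send a plain-text message event into the room. The transaction ID
#    makes the request idempotent if it has to be retried.
txn_id = uuid.uuid4().hex
resp = requests.put(
    f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}/send/m.room.message/{txn_id}",
    headers={"Authorization": f"Bearer {token}"},
    json={"msgtype": "m.text", "body": "Hello from a federated Matrix client"},
)
resp.raise_for_status()
print("Event ID:", resp.json()["event_id"])
```

Because the same open API is implemented by many interoperable servers and clients, an institution can change providers or self-host without changing how its tools talk to the network, which is central to the digital sovereignty argument.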

The trial reflects growing sensitivity about Europe’s strategic dependence on non-European digital services.

By developing home-grown communication infrastructure instead of leaning on a single foreign supplier, the Commission hopes to build a more resilient and sovereign technological foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU in October, far above the 45-million threshold at which lighter oversight gives way to the law’s most onerous obligations.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and major search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France challenges EU privacy overhaul

The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.

Paris objects strongly to proposed changes to the definition of personal data within the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several smaller adjustments included in the broader effort to modernise digital legislation.

These proposals form part of the Digital Omnibus package, a set of updates intended to streamline EU data rules. France argues that altering the GDPR’s definitions could change the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.

The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.

The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.

Several member states want greater clarity in an era shaped by AI and cross-border data flows. In contrast, others fear that opening the GDPR could lead to inconsistent application across Europe.

Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.

France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU plans a secure military data space by 2030

Institutions in the EU have begun designing a new framework to help European armies share defence information securely, rather than relying on US technology.

A plan centred on creating a military-grade data platform, the European Defence Artificial Intelligence Data Space, is intended to support sensitive exchanges among defence authorities.

Ultimately, the approach aims to replace the current patchwork of foreign infrastructure that many member states rely on to store and transfer national security data.

The European Defence Agency is leading the effort and expects the platform to be fully operational by 2030. The concept includes two complementary elements: a sovereign military cloud for data storage and a federated system that allows countries to exchange information on a trusted basis.

Officials argue that this will improve interoperability, speed up joint decision-making, and enhance operational readiness across the bloc.

The project aligns with broader concerns about strategic autonomy, as EU leaders increasingly question long-standing dependencies on American providers.

Several European companies have been contracted to develop the early technical foundations. The next step is persuading governments to coordinate future purchases so their systems remain compatible with the emerging framework.

Planning documents suggest that by 2029, member states should begin integrating the data space into routine military operations, including training missions and coordinated exercises. EU authorities maintain that stronger control of defence data will be essential as military AI expands across European forces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Roblox faces new Dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces that encourage prolonged play.

The concerns intensify pressure on EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for very large online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US cloud dominance sparks debate about Europe’s digital sovereignty

European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.

At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.

He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.

Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.

He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered into force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.
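
As a quick plausibility check on those figures, a back-of-the-envelope sketch using only the numbers quoted above (and assuming a 365-day reporting year, which the study does not state) implies roughly 162,000 reports over 2025 and a prior-year average of about 363 reports per day, consistent with 2025 being the first year the daily average exceeded 400.

```python
# Back-of-the-envelope check of the DLA Piper figures quoted above.
# Assumes a 365-day reporting year (an assumption, not taken from the study).
daily_2025 = 443            # reported GDPR violation reports per day in 2025
yoy_increase = 0.22         # 22% rise compared with the previous year

annual_2025 = daily_2025 * 365                 # implied 2025 total: 161,695
daily_2024 = daily_2025 / (1 + yoy_increase)   # implied 2024 daily rate: ~363

print(f"Implied 2025 total: ~{annual_2025:,} reports")
print(f"Implied 2024 daily average: ~{daily_2024:.0f} reports per day")
```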

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!