EU AI Act enforcement begins, reshaping startup compliance landscape

The first enforcement provisions of the EU AI Act entered into force on 2 February 2025, marking a turning point for Europe’s AI startup ecosystem. The initial phase targets ‘unacceptable risk’ systems, including social scoring, real-time biometric surveillance in public spaces, and manipulative AI practices.

Under the regulation, penalties can reach €35 million or 7% of global annual turnover, whichever is higher. Although the current enforcement covers only prohibited practices, the move signals that Europe’s AI rulebook is now operational rather than theoretical.
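As a back-of-the-envelope illustration, the 'whichever is higher' rule can be expressed as a simple calculation (the function name and figures below merely restate the numbers above, not the regulation's full penalty provisions):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for prohibited-practice penalties:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion turnover, the 7% figure dominates:
print(max_fine_eur(1_000_000_000))   # 70000000.0
# For a EUR 100 million company, the EUR 35 million floor applies instead:
print(max_fine_eur(100_000_000))     # 35000000.0
```

The asymmetry is the point: the flat floor catches smaller firms, while the turnover percentage scales the exposure for large multinationals.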

Broader obligations for high-risk AI systems, such as hiring tools, credit scoring, and medical diagnostics, will apply from August 2026. Separate rules for general-purpose AI models are scheduled to take effect in August 2025.

Surveys from European SME groups indicate that many smaller technology companies feel unprepared. A significant share of respondents have not conducted formal risk classification of their AI systems, despite this being a foundational requirement under the EU AI Act’s tiered framework.

While some founders warn that compliance costs could slow innovation, others point to long-term benefits from clearer governance standards. For startups, the coming months will focus on aligning products with AI Act risk tiers and strengthening documentation and oversight before stricter rules apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI slop’s meteoric rise and the impact of synthetic content in 2026

In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word of the Year, reflecting a widespread reaction to AI-generated content online, often referred to as ‘AI slop.’ By choosing ‘slop’, a word typically associated with unappetising animal feed, they captured unease about the digital clutter created by AI tools.

As LLMs and AI tools became accessible to more people, many saw them as opportunities for profit through the creation of artificial content for marketing or entertainment, or through the manipulation of social media algorithms. However, despite advances in video and image generation, there is a growing gap between perceived quality and actual detectability: many overestimate how easily AI content escapes notice, fuelling scepticism about its online value.

As generative AI systems expand, the debate goes beyond digital clutter to deeper concerns about trust, market incentives, and regulatory resilience. How will societies manage the social, economic, and governance impacts of an information ecosystem increasingly shaped by automated abundance? In simplified terms, is AI slop more than a simple digital nuisance, or do we needlessly worry about a transient vogue that will eventually fade away?

The social aspect of AI slop’s influence

The most visible effects of AI slop emerge on large social media platforms such as YouTube, TikTok, and Instagram. Users frequently encounter AI-generated images and videos that appropriate celebrity likenesses without consent, depict fabricated events, or present sensational and misleading scenarios. Comment sections often become informal verification spaces, where some users identify visual inconsistencies and warn others, while many remain uncertain about the content’s authenticity.

However, no platform has suffered the AI slop effect as much as Facebook, and a glance at its demographics shows why. According to multiple studies, Facebook’s largest cohort is adults aged 25–34, but users over the age of 55 make up nearly 24 percent of the user base. While seniors do not constitute the majority (yet), younger generations have been steadily migrating to platforms such as TikTok, Instagram, and X, leaving the most popular platform to the whims of the older generation.

Due to factors such as cognitive decline, positivity bias, or digital (il)literacy, older social media users are more likely to fall for scams and fraud. Such conditions make Facebook an ideal place for spreading low-quality AI slop and false information. Scammers use AI tools to create fake images and videos about made-up crises to raise money for causes that are not real.

The lack of regulation on Meta’s side is the most glaring sore spot, evidenced by the company pushing back against the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), viewing them as ‘overreaching’ and stifling innovation. The math is simple: content generates engagement, resulting in more revenue for Facebook and other platforms owned by Meta. Whether that content is authentic and high-quality or low-effort AI slop, the numbers don’t care.

The economics behind AI slop

At its core, AI content is not just a social media phenomenon, but an economic one as well. GenAI tools drastically reduce the cost and time required to produce all types of content, and when production approaches zero marginal cost, the incentive to churn out AI slop seems too good to ignore. Even minimal engagement can generate positive returns through advertising, affiliate marketing, or platform monetisation schemes.

AI content production goes beyond exploiting social media algorithms and monetisation policies. SEO can now be automated at scale, thus generating thousands of keyword-optimised articles within hours. Affiliate link farming allows creators to monetise their products or product recommendations with minimal editorial input.

On video platforms like TikTok and YouTube, synthetic voice-overs and AI-generated visuals are on full display, latching onto trending topics and using AI-generated thumbnails to garner views. Thanks to AI tools, content creators can post relevant AI-generated content within minutes, letting them jump on the hottest topics and drive clicks faster than any authentic creation method.

To rub salt in the wound, YouTube content creators share the sentiment that they are victims of the platform’s double standards in enforcing its strict community guidelines. Even the largest YouTube channels are often flagged for a plethora of breaches, including copyright claims, depictions of dangerous or illegal activities, and harmful speech. On the other hand, AI slop videos seem to fly under YouTube’s radar, fuelling resentment towards AI-generated content.

Businesses that rely on generative AI tools to market their services online are also finding AI to be the way to go, as most users are still not adept at distinguishing authentic content, nor do they attach much importance to it. Instead of paying voice-over artists and illustrators, it is far cheaper to create the desired post in a few minutes, adding fuel to an already raging fire. Some might call it AI slop, but again, the numbers are what truly matter.

The regulatory challenge of AI slop

AI slop is not only a social and economic issue, but also a regulatory one. The problem is not a single AI-generated post that promotes harmful behaviour or misleading information, but the sheer scale of synthetic content entering digital platforms. When large volumes of low-value or deceptive material circulate on the web, they can distort information ecosystems and make moderation a tough challenge. Such a predicament shifts the focus from individual violations to broader systemic effects.

In the EU, the DSA requires very large online platforms to assess and mitigate the systemic risks linked to their services. While the DSA does not specifically target AI slop, its provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The challenge lies in defining when content volume prevails over quality control, becoming a systemic issue rather than isolated misuse.

Debates around labelling AI slop and transparency also play a large role. Policymakers and platforms have explored ways to flag AI-generated content through disclosures or watermarking. For example, OpenAI’s Sora generates videos with a faint Sora watermark, although it is hardly visible to an uninitiated user. Nevertheless, labelling alone may not address deeper concerns if recommendation systems continue to prioritise engagement above all else: the issue is not only whether users know the content is AI-generated, but how such content is ranked, amplified, and monetised.

More broadly, AI slop highlights the limits of traditional content moderation. As generative tools make production faster and cheaper, enforcement systems may struggle to keep pace. Regulation, therefore, faces a structural question: can existing digital governance frameworks preserve information quality in an environment where automated content production continues to grow?

Building resilience in the era of AI slop

Humans are considered the most adaptable species on Earth, and for good reason. While AI slop has exposed weaknesses in platform design, monetisation models, and moderation systems, it may also serve as a catalyst for adaptation. Unless regulatory bodies unite under one banner and agree to ban AI content for good, it is safe to say that synthetic content is here to stay. However, sooner or later, systemic regulations will evolve to address this new AI craze and mitigate its negative effects.

The AI slop bubble is bound to burst at some point, as online users come to favour meticulously crafted content – whether authentic or artificial – over low-quality output. Consequently, incentives may also evolve along with content saturation, leading to a greater focus on quality rather than quantity. Advertisers and brands often prioritise credibility and brand safety, which could encourage platforms to refine their ranking systems to reward originality, reliability, and verified creators.

Transparency requirements, systemic risk assessments, and discussions around provenance disclosure mechanisms imply that governance is responding to the realities of generative AI. Instead of marking the deterioration of digital spaces, AI slop may represent a transitional phase in which platforms, policymakers, and users are challenged to adjust their expectations and norms accordingly.

Finally, the long-term outcome will depend entirely on whether innovation, market incentives, and governance structures can converge around information quality and resilience. In that sense, AI slop may ultimately function less as a permanent state of affairs and more as a stress test to separate the wheat from the chaff. In the upcoming struggle between user experience and generative AI tools, the former will have the final say, which is an encouraging thought.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Commission delays high-risk AI guidance

The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline.

According to Euractiv, the document is intended to clarify which AI systems fall into the high-risk category and therefore face stricter obligations. Officials said more time is needed to incorporate significant stakeholder feedback.

The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. Several member states have yet to designate national enforcement bodies, complicating oversight preparations.

Brussels is also considering postponing the application of high-risk rules through a digital simplification package. Parliament and Council appear supportive of moving the August deadline back by more than a year, easing pressure on companies awaiting guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfake content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU–US draft data pact allows automated decisions on travellers

A draft data-sharing agreement between the EU and the US Department of Homeland Security would allow automated decisions about European travellers to continue under certain conditions, despite attempts to tighten protections.

The text permits such decisions when authorised under domestic law and relies on safeguards that let individuals request human intervention instead of leaving outcomes entirely to algorithms.

A deal designed to preserve visa-free travel would require national authorities to grant access to biometric databases containing fingerprints and facial scans.

Negotiators are attempting to reconcile the framework with the General Data Protection Regulation, even though the draft states that the new rules would supplement and supersede earlier bilateral arrangements.

Sensitive information, including political views, trade union membership and biometric identifiers, could be transferred as long as protective conditions are applied.

EU countries face a deadline at the end of 2026 to conclude individual agreements, and failure to do so could result in suspension from the US Visa Waiver Program.

A separate clause keeps disputes firmly outside judicial scrutiny by requiring disagreements to be resolved through a Joint Committee instead of national or international courts.

The draft also restricts onward sharing, obliging US authorities to seek explicit consent before passing European-supplied data to third parties.

Further negotiations are expected, with the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs preparing to hold a closed-door review of the talks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU drops revised GDPR personal data definition amid regulatory pressure

Governments across the EU have withdrawn the revised definition of personal data from the GDPR omnibus package, softening earlier proposals that had prompted strong resistance from regulators and civil society.

The decision signals a preference for maintaining the original scope of the General Data Protection Regulation instead of reopening sensitive debates that risked weakening long-standing protections.

Greater attention is now placed on the forthcoming pseudonymisation guidelines prepared by the European Data Protection Board. These guidelines are expected to shape how organisations interpret key safeguards, offering practical direction instead of altering the legal definition of personal data.

The prominence now given to the guidance reflects a broader trend within the Council towards regulatory clarity rather than legislative redesign.

The compromise text also maintains links with the wider review of the ePrivacy Directive, keeping future updates aligned with existing digital-rights rules.

Member states appear increasingly cautious about reopening foundational privacy concepts, opting to strengthen enforcement through guidance and implementation rather than altering core definitions in law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital addiction in Italy sparks debate over social media bans

Italy has warned that digital addiction among teenagers is rising sharply, as health authorities link excessive social media and gaming use to family and educational challenges. Officials say bans alone will not resolve the issue.

According to Italy’s National Institute of Health, about 100,000 young people aged 15 to 18 are at risk of social media addiction. A further 500,000 are estimated to suffer from gaming disorder, recognised by the World Health Organisation as a medical condition.

A survey by digital ethics group Social Warning found that 77 percent of Italian teenagers consider themselves addicted to their devices. However, many say they lack the tools or support to change their behaviour.

Research by ‘Con i Bambini’, which funds projects tackling educational poverty in Italy, links digital dependency to isolation and strained parental relationships. The organisation says legislative measures can protect minors but cannot replace structured education and family support.

The debate extends across the EU. The European Parliament has called for a minimum age of 16 for social media platforms, while France, Italy, and Spain are considering national restrictions. Experts argue that prevention and digital literacy must complement regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU turns to AI tools to strengthen defences against disinformation

Institutions, researchers, and media organisations in the EU are intensifying efforts to use AI to counter disinformation, even as concerns grow about the wider impact on media freedom and public trust.

Confidence in journalism has fallen sharply across the EU, a trend made more severe by the rapid deployment of AI systems that reshape how information circulates online.

Brussels is attempting to respond with a mix of regulation and strategic investment. The EU’s AI Act is entering its implementation phase, supported by the AI Continent Action Plan and the Apply AI Strategy, both introduced in 2025 to improve competitiveness while protecting rights.

Yet manipulation campaigns continue to spread false narratives across platforms in multiple languages, placing pressure on journalists, fact-checkers and regulators to act with greater speed and precision.

Within such an environment, AI4TRUST has emerged as a prominent Horizon Europe initiative. The consortium is developing an integrated platform that detects disinformation signals, verifies content, and maps information flows for professionals who need real-time insight.

Partners stress the need for tools that strengthen human judgment instead of replacing it, particularly as synthetic media accelerates and shared realities become more fragile.

Experts speaking in Brussels warned that traditional fact-checking cannot absorb the scale of modern manipulation. They highlighted the geopolitical risks created by automated messaging and deepfakes, and argued for transparent, accountable systems tailored to user needs.

European officials emphasised that multiple tools will be required, supported by collaboration across institutions and sustained regulatory frameworks that defend democratic resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The European marathon towards digital sovereignty

Derived from the Latin word ‘superanus’, through the French word ‘souveraineté’, sovereignty can be understood as ‘the ultimate overseer, or authority, in the decision-making process of the state and in the maintenance of order’ (Britannica). Digital sovereignty, specifically European digital sovereignty, refers to ‘Europe’s ability to act independently in the digital world’.

In 2020, the European Parliament had already identified the consequences of reliance on non-EU technologies: from the economic and social influence of non-EU technology companies, which can undermine user control over personal data, to the slow growth of EU technology companies and limits on the enforcement of European laws.

Today, these concerns persist: from Romanian election interference on TikTok and Microsoft’s interference with the ICC, to the Dutch government’s authentication platform being acquired by a US firm and booming American and Chinese LLMs outpacing their European counterparts. The EU is at a crossroads between international reliance and homegrown adoption.

The issue of the EU digital sovereignty has gained momentum in the context of recent and significant shifts in US foreign policy toward its allies. In this environment, the pursuit of the EU digital sovereignty appears as a justified and proportionate response, one that might previously have been perceived as unnecessarily confrontational.

In light of this, this analysis will discuss the rationale behind EU digital sovereignty (including dependency, innovation and effective compliance), recent European-centric technological and platform shifts, the steps the EU is taking to become digitally sovereign and, finally, examples of European alternatives.

Rationale behind the move

The reasons for digital sovereignty can be summed up in three main areas: (i) less dependency on non-EU tech, (ii) leading and innovating technological solutions, and (iii) ensuring better enforcement of, and subsequent adherence to, data protection laws and fundamental rights.

(i) Less dependency: Global geopolitical tensions between the US, China and Russia push Europe towards developing its own digital capabilities and securing its supply chains. An insecure supply chain makes Europe vulnerable, down to its energy grids.

More recently, US giant Microsoft threatened the international legal order by revoking the software access of US-sanctioned International Criminal Court Chief Prosecutor Karim Khan, preventing him from carrying out his duties at the ICC. In light of such scenarios, Europeans are turning to developing more European-based solutions to reduce upstream dependencies.

(ii) Leaders & innovators: A common argument is that Americans innovate, the Chinese copy, and the Europeans regulate. If the EU aims to be a digital geopolitical player, it must position itself as a regulator that promotes innovation. It can achieve this by upskilling its workforce from non-digital trades into digital ones, building more EU digital infrastructure (data centres, cloud storage and management software), further increasing innovation spending, and creating laws that genuinely enable the uptake of EU technological development instead of reliance on cheaper non-EU alternatives.

(iii) Effective compliance: Fines are more difficult to enforce against non-EU companies than EU ones (e.g. Clearview AI), so EU-based technology organisations would allow corrective measures, warnings, and fines to be enforced more effectively, enabling greater adherence to the EU’s digital agenda and respect for fundamental rights.

Can the EU achieve Digital Sovereignty?

The main speed bumps towards EU digital sovereignty are: i) a lack of digital infrastructure (cloud storage and data centres), ii) (critical) raw material dependency and iii) legislative initiatives needed to facilitate the path towards digital sovereignty (innovation procurement and a fragmented compliance regime).

i) Lack of digital infrastructure: In order for the EU to become digitally sovereign, it must have its own sovereign digital infrastructure.

In practice, the EU relies heavily on American data centre providers (i.e. Equinix, Microsoft Azure, Amazon Web Services) hosted in the EU. In this case, even though the data is European and hosted in the EU, the company that hosts it is non-European. This poses reliance and legislative challenges, such as ensuring adequate technical and organisational measures to protect personal data when it is in transit to the US. Given the EU-US DPF, there is a legal basis for transferring EU personal data to the US.

However, if the DPF were to be struck down – perhaps due to the US Cloud Act – as its predecessors were twice before (Schrems I and Schrems II), and potentially again in a future Schrems III, there would no longer be a legal basis for transferring EU personal data to a US data centre.

Previously, the EU’s 2022 Directive on critical entities resilience required EU countries to identify critical infrastructure and subsequently ensure that the necessary technical, security and organisational measures are taken to ensure its resilience. Part of this Directive covers digital infrastructure, including providers of cloud computing services and data centres. Building on this, the EU has recently developed guidelines for member states to identify critical entities. However, these guidelines do not prescribe how to achieve resilience, leaving that responsibility with member states.

Currently, the EU is revising legislation to strengthen its control over critical digital infrastructure. Reports indicate that revisions of existing legislation (the Chips Act and the Quantum Act), as well as new legislation (the Digital Networks Act and the Cloud and AI Development Act), are underway.

ii) Raw material dependency: The EU cannot be digitally sovereign until it reduces some of its dependencies on other countries’ raw materials to build the hardware necessary to be technologically sovereign. In 2025, the EU’s goals were to create a new roadmap towards critical raw material (CRM) sovereignty to rely on its own energy sources and build infrastructure.

Thus, the RESourceEU Action Plan was born in December 2025. The plan rests on six pillars: securing supply through knowledge; accelerating and promoting projects; using the circular economy and fostering innovation (recycling products that contain CRMs); increasing European demand for European projects (stockpiling CRMs); protecting the single market; and partnering with third countries for long-lasting diversification. Practically speaking, part of this plan is to match European and/or global raw material supply with European demand for European projects.

iii) Legislative initiatives to facilitate the path towards digital sovereignty:

Tackling difficult innovation procurement: the aim is to facilitate the uptake of innovation procurement across the EU. In 2026, the EU is set to reform its public procurement framework for innovation. The Innovation Procurement Update (IPU) team, with representatives from over 33 countries (predominantly through law firms, Bird & Bird being the most represented), recommends that innovation procurement reach 20% of all public procurement.

Another recommendation would help costlier innovative solutions win procurement projects that in the past went to cheaper bids. In practice, the lowest-priced bid is preferred, and if it meets the remaining procurement conditions, it wins; de-prioritising price as the decisive criterion would enable companies with costlier innovative solutions to win public procurement bids.
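A hypothetical sketch of how re-weighting award criteria changes outcomes (the weights, scoring formula and figures below are illustrative assumptions, not the actual procurement rules):

```python
def bid_score(price_eur: float, quality: float,
              lowest_bid_eur: float, price_weight: float = 0.3) -> float:
    """Toy 'most economically advantageous tender' score: the cheapest
    bid earns full price marks, and quality (on a 0-1 scale) carries the
    remaining weight. A low price_weight de-prioritises price as the
    decisive criterion."""
    price_score = lowest_bid_eur / price_eur  # 1.0 for the cheapest bid
    return price_weight * price_score + (1 - price_weight) * quality

# With quality weighted at 70%, a costlier but more innovative bid wins:
cheap = bid_score(100_000, quality=0.5, lowest_bid_eur=100_000)       # ~0.65
innovative = bid_score(150_000, quality=0.9, lowest_bid_eur=100_000)  # ~0.83
```

Under a pure lowest-price rule (price_weight = 1.0) the cheap bid always wins; shifting weight onto quality is what opens the door to innovative but pricier offers.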

Alleviating compliance challenges: lowering other compliance burdens whilst maintaining the digital acquis. Recently announced at the World Economic Forum by Commission President Ursula von der Leyen, EU.inc would help cross-border business operations scale up by alleviating company, corporate, insolvency, labour and taxation law compliance burdens. By harmonising these into a single framework, businesses can more easily grow and deploy cross-border solutions that would otherwise face hurdles.

Power through data: another legislative measure to help facilitate the path towards EU digital sovereignty is unlocking the potential behind European data. In order to research innovative solutions, data is required, whether personal or non-personal. The EU’s GDPR regulates personal data and is currently undergoing amendments. If the proposed changes to the GDPR are approved, i.e. a narrowing of the definition of personal data, data that used to be considered personal (and thus required GDPR compliance) could be deemed non-personal and used more freely for research purposes. The Data Act regulates the reuse and re-sharing of non-personal data, aiming to simplify and bolster its fair reuse. Overall, both personal and non-personal data can give important insight that research can benefit from in developing European innovative sovereign solutions.
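As a minimal sketch of what pseudonymisation can look like in practice (the key handling here is deliberately simplified, and the EDPB guidelines may prescribe different techniques), a direct identifier can be replaced with a keyed hash:

```python
import hashlib
import hmac
import secrets

# The re-identification key must be stored separately from the dataset;
# as long as it exists, the data generally remains personal under the GDPR.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so records can still
# be linked for research without exposing the underlying identity.
record = {"user": pseudonymise("jane.doe@example.eu"), "country": "BE"}
```

The design choice matters legally as much as technically: delete the key and the mapping becomes practically irreversible, which is precisely the boundary between personal and non-personal data that the proposed amendments turn on.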

European alternatives

European companies have already built a network of European platforms, services and apps with European values at heart:

| Category | Currently used | EU alternative | Comments |
| --- | --- | --- | --- |
| Social media | TikTok, X, Instagram | Monnet (Luxembourg), ‘W’ (Sweden) | Monnet is a social media app that prioritises connections and non-addictive scrolling. The recently announced ‘W’ positions itself as a replacement for X and is gaining traction with a non-advertising model at its heart. |
| Email | Microsoft Outlook and Google Gmail | Tuta (mail/calendar, Germany), Proton (Switzerland), Mailbox (Germany), Mailfence (Belgium) | Replace email and calendar apps with a privacy-focused business model. |
| Search engine | Google Search and DuckDuckGo | Qwant (France) and Ecosia (Germany) | Qwant has focused on privacy since its launch in 2013. Ecosia runs an eco-friendly business model that helps plant trees when users search. |
| Video conferencing | Microsoft Teams and Slack | Visio (France), Wire (Switzerland), Mattermost (US but self-hosted), Stackfield (Germany), Nextcloud Talk (Germany) and Threema (Switzerland) | These alternatives are end-to-end encrypted. Visio is used by the French government. |
| Writing tools | Microsoft Word & Excel, Google Sheets, Notion | LibreOffice (Germany), OnlyOffice (Latvia), Collabora (UK), Nextcloud Office (Germany) and CryptPad (France) | LibreOffice is compatible with, and provides a free alternative to, Microsoft’s office suite. |
| Cloud storage & file sharing | OneDrive, SharePoint and Google Drive | Pydio Cells (France), Tresorit (Switzerland), pCloud (Switzerland), Nextcloud (Germany) | Most of these options provide cloud storage, and Nextcloud is a recurring alternative across categories. |
| Finance | Visa and Mastercard | Wero (EU) | Not only will it provide an EU-wide digital wallet option, but it will also replace existing national options, allowing for fast adoption. |
| LLM | OpenAI, Gemini, DeepSeek | Mistral AI (France) and DeepL (Germany) | DeepL is already widely used, and Mistral is more transparent with its partially open-source models and ease of reuse for developers. |
| Hardware | – | Semiconductors: ASML (Netherlands); Data centres: GAIA-X (Belgium) | ASML is a chip powerhouse for the EU, and GAIA-X sets an example for EU-based data centres with its open-source federated data infrastructure. |

A dedicated website called ‘European Alternatives’ provides exactly what its name promises: a list of European alternatives spanning more than 50 categories and over 100 services.

Conclusion

In recent years, the Union’s policy goals have shifted towards overt digital sovereignty solutions through diversification of materials and increased innovation spending, combined with a restructuring of the legislative framework to create the necessary path towards European digital infrastructure.

Whilst this analysis does not cover every speed bump or avenue on the road to EU digital sovereignty, it sheds light on the EU’s most recent major policy developments. Key questions remain regarding data reuse, its impact on the fundamental right to data protection, and whether this reshaping of the framework will yield the intended results.

Therefore, how will the EU tread whilst it becomes a more coherent sovereign geopolitical player?

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Shein faces formal proceedings under EU Digital Services Act

The European Commission has opened formal proceedings against Shein under the Digital Services Act over addictive design and illegal product risks. The move follows preliminary reviews of company reports and responses to information requests. Officials said the decision does not prejudge the outcome.

Investigators will review safeguards to prevent illegal products being sold in the European Union, including items that could amount to child sexual abuse material, such as child-like sex dolls. Authorities will also assess how the platform detects and removes unlawful goods offered by third-party sellers.

The Commission will examine risks linked to platform design, including engagement-based rewards that may encourage excessive use. Officials will assess whether adequate measures are in place to limit potential harm to users’ well-being and ensure effective consumer protection online.

Transparency obligations under the DSA are another focal point. Platforms must clearly disclose the main parameters of their recommender systems and provide at least one easily accessible option that is not based on profiling. The Commission will assess whether Shein meets these requirements.

Coimisiún na Meán, the Digital Services Coordinator of Ireland, will assist the investigation as Ireland is Shein’s EU base. The Commission may seek more information or adopt interim measures if needed. Proceedings run alongside consumer protection action and product safety enforcement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!