The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.
Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.
At stake is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).
The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.
The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.
With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.
The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has unveiled a new counterterrorism agenda under the ProtectEU initiative, outlining measures to strengthen the EU’s response to evolving security threats. Officials say the strategy aims to improve preparedness, reinforce cooperation and protect citizens and businesses from emerging forms of terrorism and violent extremism.
Authorities warn that technological change is reshaping the threat landscape. Terrorist groups increasingly exploit digital tools such as social media, AI and encrypted platforms for recruitment, propaganda and fundraising.
New risks also include the potential misuse of drones, crypto-assets and 3D-printed weapons, while radicalisation of minors online has become a growing concern across Europe.
The agenda proposes stronger capabilities for anticipating threats through expanded intelligence analysis and enhanced support for Europol, including greater use of open-source intelligence. Additional research funding will explore the security implications of emerging technologies, while new initiatives aim to strengthen early prevention efforts and community engagement to counter radicalisation, particularly among young people.
Online safety forms another key priority. The Commission plans to intensify cooperation with digital platforms to remove extremist content more quickly and to strengthen enforcement of the Digital Services Act. A new EU Online Crisis Response Framework is also proposed to improve coordination between authorities and technology companies during security incidents.
Measures targeting the physical environment will focus on protecting public spaces and critical infrastructure, including investments in security projects and stronger monitoring of individuals suspected of terrorism.
The strategy also seeks to improve the tracking of terrorist financing, including through cryptocurrencies, and to expand cooperation with international partners, such as countries in the Western Balkans and the Mediterranean region.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.
Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.
The reports indicate that personal data from EU users was sent to Sama, a third-party contractor, in Kenya for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.
They added that Meta’s attempts to blur faces and apply other safeguards often failed, exposing identifiable material rather than ensuring proper anonymisation.
EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.
Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.
The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.
Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.
Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
European governments are renewing pressure to scale back industrial AI rules rather than expand regulatory demands.
Ten countries, including Germany, France, Italy, Spain and Poland, have urged the EU to clarify how the AI Act overlaps with machinery law and to adopt more realistic implementation deadlines. Their position is somewhat surprising, given that the legislation already outlines its relationship with existing industrial frameworks.
Parliament’s centre and centre-right groups are pushing for deeper cuts. The European People’s Party wants all industrial sectors to move to a lighter regime, while Renew is advocating broad exemptions for industrial and business-to-business AI.
The European Conservatives and Reformers are also seeking reductions for non-safety-related systems. Together, the three groups edge close to a parliamentary majority, signalling momentum for a broader deregulation push.
No sweeping changes have been added to the AI omnibus so far, yet policymakers expect more adjustments ahead. The package must be finalised by August, so legislators are focused on meeting the deadline rather than reopening fundamental debates.
Broader revisions to industrial AI rules are likely to reappear in the Commission’s forthcoming Digital Fitness Check, which will reassess how multiple EU tech laws interact.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Italian data protection authority has ordered Amazon Italia Logistics to halt processing of sensitive employee data after investigators found that the company gathered details ranging from health conditions to union involvement.
Information about workers’ private lives and family members had also been collected and often retained for a decade through internal tracking systems, well beyond what Italian labour rules allow.
Regulators discovered that some data originated from cameras positioned near restrooms and staff break areas, a practice that breached EU privacy standards.
The watchdog concluded that the company’s monitoring went far beyond what employers are permitted to compile when assessing staff performance or workplace needs.
Amazon responded by stressing that protecting employee information remains a priority and said that internal rules and training programmes are designed to ensure compliance. The company added that it would review its procedures in light of any findings from the Italian authority rather than dismiss them.
The order arrives as Amazon attempts to regain its lobby badges at the European Parliament.
Access was suspended in 2024 after senior representatives declined to attend hearings on warehouse working conditions, and opposition from MEPs continues to place pressure on Parliament President Roberta Metsola to reject reinstatement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.
The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.
The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.
Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.
The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.
Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.
Stronger consumer safeguards are becoming central to the digital agenda of the EU.
The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has proposed opening negotiations to bring Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia, and Serbia into the EU’s ‘Roam Like at Home’ regime. The move would allow citizens and businesses to use their mobile phones across borders without incurring additional roaming charges, once the necessary agreements are finalised and the rules are aligned.
If implemented, travellers between the EU and the Western Balkans would be able to make calls, send text messages, and use mobile data at domestic rates. This would apply both to Western Balkan visitors in the EU and to EU citizens travelling in the region, ensuring seamless connectivity without unexpected costs.
The change would make travel for study, work, and tourism more affordable and practical. By removing roaming surcharges, the initiative aims to simplify cross-border communication and strengthen economic and social ties between the two regions.
To move forward, the European Commission has adopted proposals for negotiating mandates and is now seeking authorisation from the Council to begin formal talks. Once approved, the Commission will negotiate bilateral agreements with each Western Balkan partner. After successful alignment with the EU roaming rules, the countries would join the EU’s roaming area.
The proposal builds on existing voluntary arrangements between some EU and Western Balkan mobile operators, which already offer reduced roaming charges. It also complements the regional roaming agreement within the Western Balkans, where lower tariffs are already in place.
More broadly, the initiative reflects the EU’s gradual integration strategy outlined in the 2023 Growth Plan for the Western Balkans. By progressively extending elements of the EU Single Market to candidate countries, the plan aims to deliver practical benefits to citizens and businesses before full EU membership, while keeping the enlargement process on track.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The first enforcement provisions of the EU AI Act entered into force on 2 February 2025, marking a turning point for Europe’s AI startup ecosystem. The initial phase targets ‘unacceptable risk’ systems, including social scoring, real-time biometric surveillance in public spaces, and manipulative AI practices.
Under the regulation, penalties can reach €35 million or 7% of global annual turnover, whichever is higher. Although the current enforcement covers only prohibited practices, the move signals that Europe’s AI rulebook is now operational rather than theoretical.
Broader obligations for high-risk AI systems, such as hiring tools, credit scoring, and medical diagnostics, will apply from August 2026. Separate rules for general-purpose AI models are scheduled to take effect in August 2025.
Surveys from European SME groups indicate that many smaller technology companies feel unprepared. A significant share of respondents report that they have not conducted formal risk classification of their AI systems, despite this being a foundational requirement under the EU AI Act’s tiered framework.
While some founders warn that compliance costs could slow innovation, others point to long-term benefits from clearer governance standards. For startups, the coming months will focus on aligning products with AI Act risk tiers and strengthening documentation and oversight before stricter rules apply.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word of the Year, reflecting a widespread reaction to AI-generated content online, often referred to as ‘AI slop.’ By choosing ‘slop’, typically associated with unappetising animal feed, they captured unease about the digital clutter created by AI tools.
As LLMs and AI tools became accessible to more people, many saw them as opportunities for profit through the creation of artificial content for marketing or entertainment, or through the manipulation of social media algorithms. However, despite advances in video and image generation, there is a growing gap between how convincing creators believe their output to be and how readily audiences notice it: many overestimate how easily AI content evades detection, fuelling scepticism about its value online.
As generative AI systems expand, the debate goes beyond digital clutter to deeper concerns about trust, market incentives, and regulatory resilience. How will societies manage the social, economic, and governance impacts of an information ecosystem increasingly shaped by automated abundance? Put simply, is AI slop more than a passing digital nuisance, or are we needlessly worrying about a transient vogue that will eventually fade away?
The social aspect of AI slop’s influence
The most visible effects of AI slop emerge on large social media platforms such as YouTube, TikTok, and Instagram. Users frequently encounter AI-generated images and videos that appropriate celebrity likenesses without consent, depict fabricated events, or present sensational and misleading scenarios. Comment sections often become informal verification spaces, where some users identify visual inconsistencies and warn others, while many remain uncertain about the content’s authenticity.
However, no platform has suffered the AI slop effect as much as Facebook, and once you take a glance at its demographics, the pieces start to come together. According to multiple studies, Facebook’s largest age bracket is adults aged 25–34, but users over the age of 55 make up nearly 24 percent of the total. While seniors do not constitute the majority (yet), younger generations have been steadily migrating to platforms such as TikTok, Instagram, and X, leaving the most popular platform to the whims of the older generation.
Due to factors such as cognitive decline, positivity bias, or digital (il)literacy, older social media users are more likely to fall for scams and fraud. Such conditions make Facebook an ideal place for spreading low-quality AI slop and false information. Scammers use AI tools to create fake images and videos about made-up crises to raise money for causes that are not real.
The lack of self-regulation on Meta’s side is the most glaring sore spot, evidenced by the company’s pushback against the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), which it views as ‘overreaching’ and stifling innovation. The math is simple: content generates engagement, resulting in more revenue for Facebook and other platforms owned by Meta. Whether that content is authentic and high-quality or low-effort AI slop, the numbers don’t care.
The economics behind AI slop
At its core, AI content is not just a social media phenomenon, but an economic one as well. GenAI tools drastically reduce the cost and time required to produce all types of content, and when production approaches zero marginal cost, the incentive to churn out AI slop seems too good to ignore. Even minimal engagement can generate positive returns through advertising, affiliate marketing, or platform monetisation schemes.
AI content production goes beyond exploiting social media algorithms and monetisation policies. SEO can now be automated at scale, thus generating thousands of keyword-optimised articles within hours. Affiliate link farming allows creators to monetise their products or product recommendations with minimal editorial input.
On video platforms like TikTok and YouTube, synthetic voice-overs and AI-generated visuals are on full display, banking on trending topics and AI-generated thumbnails to attract views. Thanks to AI tools, content creators can publish relevant AI-generated content within minutes, enabling them to jump on the hottest topics and drive clicks faster than any authentic content creation method allows.
To rub salt into the wound, YouTube content creators share the sentiment that they are victims of the platform’s double standards in enforcing its strict community guidelines. Even the largest YouTube channels are often flagged for a plethora of breaches, including copyright claims, depictions of dangerous or illegal activities, and harmful speech. AI slop videos, on the other hand, seem to fly under YouTube’s radar, leading to more resentment towards AI-generated content.
Businesses that rely on generative AI tools to market their services online also find AI to be the way to go, as most users make little effort to distinguish authentic from synthetic content and attach little importance to the difference. Instead of paying voice-over artists and illustrators, it is far cheaper to generate the desired post in a matter of minutes, adding fuel to an already raging fire. Some might call it AI slop, but again, the numbers are what truly matter.
The regulatory challenge of AI slop
AI slop is not only a social and economic issue, but also a regulatory one. The problem is not a single AI-generated post that promotes harmful behaviour or misleading information, but the sheer scale of synthetic content entering digital platforms. When large volumes of low-value or deceptive material circulate on the web, they can distort information ecosystems and make moderation a tough challenge. Such a predicament shifts the focus from individual violations to broader systemic effects.
In the EU, the DSA requires very large online platforms to assess and mitigate the systemic risks linked to their services. While the DSA does not specifically target AI slop, its provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The challenge lies in defining when content volume prevails over quality control, becoming a systemic issue rather than isolated misuse.
Debates around labelling AI slop and transparency also play a large role. Policymakers and platforms have explored ways to flag AI-generated content through disclosures or watermarking. For example, OpenAI’s Sora generates videos with a faint Sora watermark, although it is hardly visible to an uninitiated user. Nevertheless, labelling alone may not address deeper concerns if recommendation systems continue to prioritise engagement above all else: the issue is not only whether users know that content is AI-generated, but how such content is ranked, amplified, and monetised.
More broadly, AI slop highlights the limits of traditional content moderation. As generative tools make production faster and cheaper, enforcement systems may struggle to keep pace. Regulation, therefore, faces a structural question: can existing digital governance frameworks preserve information quality in an environment where automated content production continues to grow?
Building resilience in the era of AI slop
Humans are considered the most adaptable species on Earth, and for good reason. While AI slop has exposed weaknesses in platform design, monetisation models, and moderation systems, it may also serve as a catalyst for adaptation. Unless regulatory bodies unite under one banner and agree to ban AI content for good, it is safe to say that synthetic content is here to stay. However, sooner or later, systemic regulations will evolve to address this new AI craze and mitigate its negative effects.
The AI slop bubble is bound to burst at some point, as online users come to favour meticulously crafted content, whether authentic or artificial, over low-quality material. Consequently, incentives may also evolve along with content saturation, leading to a greater focus on quality rather than quantity. Advertisers and brands often prioritise credibility and brand safety, which could encourage platforms to refine their ranking systems to reward originality, reliability, and verified creators.
Transparency requirements, systemic risk assessments, and discussions around provenance disclosure mechanisms imply that governance is responding to the realities of generative AI. Instead of marking the deterioration of digital spaces, AI slop may represent a transitional phase in which platforms, policymakers, and users are challenged to adjust their expectations and norms accordingly.
Finally, the long-term outcome will depend entirely on whether innovation, market incentives, and governance structures can converge around information quality and resilience. In that sense, AI slop may ultimately function less as a permanent state of affairs and more as a stress test to separate the wheat from the chaff. In the upcoming struggle between user experience and generative AI tools, the former will have the final say, which is an encouraging thought.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline.
According to Euractiv, the document is intended to clarify which AI systems fall into the high-risk category and therefore face stricter obligations. Officials said more time is needed to incorporate significant stakeholder feedback.
The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. Several member states have yet to designate national enforcement bodies, complicating oversight preparations.
Brussels is also considering postponing the application of high-risk rules through a digital simplification package. Parliament and Council appear supportive of moving the August deadline back by more than a year, easing pressure on companies awaiting guidance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!