The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.
Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.
The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.
Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.
Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Meta has committed to purchasing $60bn worth of AI chips from Advanced Micro Devices over five years, signalling one of the largest infrastructure bets in the sector despite ongoing concerns about an AI investment bubble.
The agreement includes a 10% stake in the chipmaker and large-scale deployment of next-generation hardware beginning later this year.
Analysts say the move is aimed at securing compute capacity and cutting reliance on Nvidia amid supply constraints. Talks with Google and ongoing in-house chip work point to a multi-vendor strategy to support expanding data centre operations.
Executives say the investment reflects a shift towards hosting AI workloads and infrastructure services. Custom processors built for performance and efficiency will complement AMD GPUs, supporting capacity expansion as enterprise demand rises.
Enterprise AI competition intensifies as Anthropic and OpenAI expand integrations and tools. Significant platform investments are reshaping semiconductors and signalling strong long-term confidence in AI computing demand.
In December 2025, the Macquarie Dictionary, Merriam-Webster, and the American Dialect Society named ‘slop’ as the Word of the Year, reflecting a widespread reaction to AI-generated content online, often referred to as ‘AI slop’. By choosing ‘slop’, a term typically associated with unappetising animal feed, they captured unease about the digital clutter created by AI tools.
As LLMs and AI tools became accessible to more people, many saw them as opportunities for profit through the creation of artificial content for marketing or entertainment, or through the manipulation of social media algorithms. However, despite advances in video and image generation, a gap persists between perceived and actual quality: many producers overestimate how easily AI content escapes notice, fuelling scepticism about its value online.
As generative AI systems expand, the debate goes beyond digital clutter to deeper concerns about trust, market incentives, and regulatory resilience. How will societies manage the social, economic, and governance impacts of an information ecosystem increasingly shaped by automated abundance? In simplified terms, is AI slop more than a simple digital nuisance, or do we needlessly worry about a transient vogue that will eventually fade away?
The social aspect of AI slop’s influence
The most visible effects of AI slop emerge on large social media platforms such as YouTube, TikTok, and Instagram. Users frequently encounter AI-generated images and videos that appropriate celebrity likenesses without consent, depict fabricated events, or present sensational and misleading scenarios. Comment sections often become informal verification spaces, where some users identify visual inconsistencies and warn others, while many remain uncertain about the content’s authenticity.
However, no platform has suffered the AI slop effect as much as Facebook, and once you take a glance at its demographics, the pieces start to come together. According to multiple studies, Facebook’s user base is mostly populated by adults aged 25-34, but users over the age of 55 make up nearly 24 percent of all users. While seniors do not constitute the majority (yet), younger generations have been steadily migrating to social platforms such as TikTok, Instagram, and X, leaving the most popular platform to the whims of the older generation.
Due to factors such as cognitive decline, positivity bias, or digital (il)literacy, older social media users are more likely to fall for scams and fraud. Such conditions make Facebook an ideal breeding ground for low-quality AI slop and false information: scammers use AI tools to fabricate images and videos of made-up crises and solicit donations for causes that do not exist.
Meta’s light-touch approach is the most glaring sore spot, evidenced by the company pushing back against the EU’s Digital Services Act (DSA) and Digital Markets Act (DMA), which it views as ‘overreaching’ and stifling innovation. The math is simple: content generates engagement, which generates revenue for Facebook and the other platforms Meta owns. Whether that content is authentic and high-quality or low-effort AI slop, the numbers don’t care.
The economics behind AI slop
At its core, AI content is not just a social media phenomenon, but an economic one as well. GenAI tools drastically reduce the cost and time required to produce all types of content, and when production approaches zero marginal cost, the incentive to churn out AI slop seems too good to ignore. Even minimal engagement can generate positive returns through advertising, affiliate marketing, or platform monetisation schemes.
AI content production goes beyond exploiting social media algorithms and monetisation policies. SEO can now be automated at scale, generating thousands of keyword-optimised articles within hours, while affiliate link farming lets creators monetise product recommendations with minimal editorial input.
On video platforms like TikTok and YouTube, synthetic voice-overs and AI-generated visuals are on full display, riding trending topics and using AI-generated thumbnails to chase views. Thanks to AI tools, creators can publish topical content within minutes, jumping on the hottest trends and driving clicks faster than any authentic production method allows.
To rub salt in the wound, YouTube creators widely feel they are victims of the platform’s double standards in enforcing its strict community guidelines. Even the largest channels are routinely flagged for breaches such as copyright claims, depictions of dangerous or illegal activities, and harmful speech. AI slop videos, on the other hand, seem to fly under YouTube’s radar, breeding further resentment towards AI-generated content.
Businesses that market their services online are also finding generative AI the cheaper way to go, since most users make little effort to distinguish authentic content and attach little importance to the difference. Instead of paying voice-over artists and illustrators, a desired post can be created in minutes, adding fuel to an already raging fire. Some might call it AI slop, but again, the numbers are what truly matter.
The regulatory challenge of AI slop
AI slop is not only a social and economic issue, but also a regulatory one. The problem is not a single AI-generated post that promotes harmful behaviour or misleading information, but the sheer scale of synthetic content entering digital platforms. When large volumes of low-value or deceptive material circulate on the web, they can distort information ecosystems and make moderation a tough challenge. Such a predicament shifts the focus from individual violations to broader systemic effects.
In the EU, the DSA requires very large online platforms to assess and mitigate the systemic risks linked to their services. While the DSA does not specifically target AI slop, its provisions on transparency, content recommendation algorithms, and risk mitigation could apply if AI content significantly affects public discourse or enables fraud. The challenge lies in defining the point at which sheer content volume becomes a systemic issue rather than isolated misuse.
Debates around labelling and transparency also play a large role. Policymakers and platforms have explored ways to flag AI-generated content through disclosures or watermarking. For example, OpenAI’s Sora stamps its videos with a faint Sora watermark, although it is barely visible to the uninitiated. Nevertheless, labelling alone may not address deeper concerns if recommendation systems continue to prioritise engagement above all else: the issue is not only whether users know content is AI-generated, but how such content is ranked, amplified, and monetised.
More broadly, AI slop highlights the limits of traditional content moderation. As generative tools make production faster and cheaper, enforcement systems may struggle to keep pace. Regulation, therefore, faces a structural question: can existing digital governance frameworks preserve information quality in an environment where automated content production continues to grow?
Building resilience in the era of AI slop
Humans are considered the most adaptable species on Earth, and for good reason. While AI slop has exposed weaknesses in platform design, monetisation models, and moderation systems, it may also serve as a catalyst for adaptation. Unless regulatory bodies unite under one banner and ban AI-generated content outright, synthetic content is here to stay; sooner or later, though, systemic regulation will evolve to address the craze and mitigate its negative effects.
The AI slop bubble is bound to burst at some point, as online users come to favour meticulously crafted content, whether authentic or artificial, over low-quality output. Consequently, incentives may evolve alongside content saturation, shifting the focus from quantity to quality. Advertisers and brands often prioritise credibility and brand safety, which could encourage platforms to refine their ranking systems to reward originality, reliability, and verified creators.
Transparency requirements, systemic risk assessments, and discussions around provenance disclosure mechanisms imply that governance is responding to the realities of generative AI. Instead of marking the deterioration of digital spaces, AI slop may represent a transitional phase in which platforms, policymakers, and users are challenged to adjust their expectations and norms accordingly.
Finally, the long-term outcome will depend entirely on whether innovation, market incentives, and governance structures can converge around information quality and resilience. In that sense, AI slop may ultimately function less as a permanent state of affairs and more as a stress test to separate the wheat from the chaff. In the upcoming struggle between user experience and generative AI tools, the former will have the final say, which is an encouraging thought.
Across organisations, AI tools are moving beyond IT teams and into everyday business functions. CIOs now face the challenge of widening access while protecting data, security and trust.
Earlier waves of low-code platforms and citizen data science showed that empowerment can boost innovation but also create shadow IT and technical debt. AI agents and generative systems raise the stakes, with risks ranging from data leaks to flawed automated decisions.
Pressure from boards and business leaders means AI cannot be restricted to a small pilot group. Transparent governance, approved toolkits, and updated data policies are essential to prevent misuse while still enabling experimentation.
Long-term success depends on culture as much as technology. Leaders must define a focused AI vision, invest in literacy and adapt change management so employees use AI to improve decisions rather than accelerate flawed processes.
Microsoft has exceeded its 2025 internet access target, reaching over 299 million people globally, including more than 124 million in Africa. The milestone reflects years of partnerships to connect communities lacking reliable digital access.
Efforts are shifting from simple coverage to holistic digital participation, combining connectivity with energy, devices, digital skills, and AI tools.
Microsoft aims to enable meaningful adoption, ensuring communities can fully engage in the growing AI economy. Partnerships focus on scalable, community-based models aligned with national development priorities.
Collaboration with Starlink expands Microsoft’s toolkit to reach rural and hard-to-reach regions. Projects in Kenya pair satellite connectivity with local hubs and training to boost productivity, market access, and AI adoption.
As adoption accelerates, Microsoft plans to expand its approach by integrating financing, energy access, and community-first AI solutions. The initiative highlights the need for long-term, locally led strategies for fair participation in the digital and AI economy.
A 2025 survey by Statistics Netherlands (CBS) shows that 41% of employees think AI could perform part of their job, while 4% fear full replacement. Higher-educated workers and young adults are most likely to believe their tasks could be automated.
Among those using AI at work, 56% expect it could partly or fully do their jobs, compared with 37% of non-users. Almost half of the workers who see AI as a potential replacement expressed concern, with women slightly more worried than men.
Most adults anticipate that AI will lead to job losses (75%), a decline in workforce skills (64%), and less interesting work (48%). Despite these concerns, 57% believe AI could boost productivity by speeding up tasks.
Fewer respondents think AI will solve labour shortages (46%) or replace unsafe jobs (41%). The findings highlight both the opportunities and anxieties surrounding AI adoption in the workplace.
Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as ‘the year of the evasive adversary’, marked by subtle and rapid intrusions.
The average breakout time for financially motivated intrusions fell to 29 minutes, with the fastest recorded at just 27 seconds. CrowdStrike also observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.
Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.
China-linked activity rose by 38 percent across sectors, while North Korea-linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.
Researchers at Linköping University have developed an AI-powered electronic nose capable of detecting early signs of ovarian cancer in blood plasma samples. The pilot study, published in Advanced Intelligent Systems, reports 97 per cent accuracy using machine-learning models trained on biobank data.
Ovarian cancer is often diagnosed late because symptoms resemble those of more common conditions. In 2022, around 325,000 new cases and more than 200,000 deaths were recorded globally. Earlier detection could significantly improve survival rates and access to timely treatment.
The prototype device contains 32 commercially available sensors that detect volatile substances emitted by blood samples. Rather than targeting a single biomarker, the system analyses complex chemical patterns, with machine learning identifying signatures linked to ovarian cancer.
Unlike conventional blood tests, which can be slow and rely on specific biomarkers, the electronic nose evaluates a broad spectrum of compounds. Researchers say the approach offers greater precision and could reduce screening costs while improving accessibility.
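The study’s actual model is not described in the article; as a toy illustration of how a classifier can use patterns across all 32 sensor channels rather than a single biomarker, a minimal nearest-centroid sketch (all data simulated, sensor responses and class labels hypothetical) might look like:

```python
import random

N_SENSORS = 32  # matches the prototype's sensor count

def centroid(samples):
    """Mean response across samples for each sensor channel."""
    return [sum(s[i] for s in samples) / len(samples) for i in range(N_SENSORS)]

def classify(reading, centroids):
    """Assign a reading to the class whose centroid is nearest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(reading, centroids[label]))

# Simulated training data: each class has a characteristic multi-channel pattern.
random.seed(0)
def simulate(base, n):
    return [[b + random.gauss(0, 0.05) for b in base] for _ in range(n)]

healthy_base = [0.2] * N_SENSORS
cancer_base = [0.2 + (0.3 if i % 4 == 0 else 0.0) for i in range(N_SENSORS)]

centroids = {
    "healthy": centroid(simulate(healthy_base, 20)),
    "ovarian_cancer": centroid(simulate(cancer_base, 20)),
}

test_reading = [b + random.gauss(0, 0.05) for b in cancer_base]
print(classify(test_reading, centroids))
```

The point of the sketch is that no single channel is decisive: the label comes from the joint pattern across all 32 readings, which is what lets the system work without a predefined biomarker.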
Developers estimate the test takes around 10 minutes and could become part of cancer screening programmes within three years. Although currently focused on ovarian cancer, the team suggests the method could eventually be adapted to detect multiple cancer types.
Researchers at Endor Labs identified six high- to critical-severity vulnerabilities in the open-source AI agent framework OpenClaw, using an AI-powered static application security testing (SAST) engine to trace untrusted data flows. The flaws included server-side request forgery (SSRF), authentication bypass, and path traversal.
The bugs affected multiple components of the agentic system, which integrates large language models with external tools and web services. Several SSRF issues were found in the gateway and authentication modules, potentially exposing internal services or cloud metadata depending on the deployment context.
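The vulnerable code itself is not shown in the report; as a generic sketch of the defence such SSRF findings usually call for, a gateway that fetches user-supplied URLs can first refuse loopback, private, and link-local targets (the cloud-metadata hostname below is just one well-known example, not taken from OpenClaw):

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that could reach internal services or cloud metadata."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP; a fuller check would resolve it
        # (and re-check after redirects) to prevent DNS-rebinding bypasses.
        return host != "metadata.google.internal"
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # link-local metadata IP
print(is_safe_url("https://example.com/page"))
```

This is a sketch, not a complete allowlist; production deployments typically also pin the resolved address for the actual request, since validating the hostname alone leaves a time-of-check gap.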
Access control failures were also found in OpenClaw. A webhook handler lacked proper verification, enabling forged requests, while another flaw allowed unauthenticated access to protected functionality. Researchers confirmed exploitability with proof-of-concept demonstrations.
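The article does not say how the webhook flaw was patched; the standard remedy for an unverified handler, sketched here with a hypothetical shared secret, is to require an HMAC signature and compare it in constant time:

```python
import hashlib
import hmac

SECRET = b"shared-webhook-secret"  # hypothetical; in practice loaded from config

def sign(payload: bytes) -> str:
    """Signature the legitimate sender attaches to each request body."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_webhook(payload: bytes, signature: str) -> bool:
    """Constant-time comparison prevents timing side channels."""
    return hmac.compare_digest(sign(payload), signature)

body = b'{"event": "run_agent"}'
sig = sign(body)
print(verify_webhook(body, sig))                 # genuine request -> True
print(verify_webhook(b'{"event": "x"}', sig))    # forged request -> False
```

Without such a check, anyone who can reach the endpoint can forge requests, which is exactly the failure mode the researchers describe.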
The team said that traditional static analysis tools struggle with modern AI software stacks, where inputs undergo multiple transformations before reaching sensitive operations. Their AI-based SAST engine preserved context across layers, tracing untrusted data from entry points to critical functions.
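The idea of preserving context across transformation layers can be shown with a toy taint flag (all names here are illustrative, not the Endor Labs engine or OpenClaw APIs): untrusted input stays marked through each transformation, so the sink can still recognise it.

```python
# Toy taint tracking: values carry a 'tainted' flag from source to sink.

def source(user_input: str) -> dict:
    """Entry point: anything from the user starts out tainted."""
    return {"value": user_input, "tainted": True}

def normalise(v: dict) -> dict:
    """A transformation layer; the taint flag must survive it."""
    return {"value": v["value"].strip().lower(), "tainted": v["tainted"]}

def fetch_url(v: dict) -> str:
    """Sensitive sink: refuses data that is still tainted."""
    if v["tainted"]:
        raise ValueError("untrusted data reached a sensitive sink")
    return f"fetched {v['value']}"

request = normalise(source("  HTTP://10.0.0.5/admin "))
try:
    fetch_url(request)
except ValueError as err:
    print(err)
```

A traditional static analyser that loses track of the flag at the `normalise` step would miss the flow; keeping it attached through every layer is the property the researchers say their engine preserves.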
OpenClaw maintainers were notified through responsible disclosure and have since issued patches and advisories. Researchers argue that as AI agent frameworks expand into enterprise environments, security analysis must adapt to address both conventional vulnerabilities and AI-specific attack surfaces.
Sony Group has developed technology designed to identify the original sources of music generated by AI. The move comes amid growing concern over the unauthorised use of copyrighted works in AI training.
According to Sony Group, the system can extract data from an underlying AI model and compare generated tracks with original compositions. The process aims to quantify how much specific works contributed to the output.
Composers, songwriters and publishers could use the technology to seek compensation from AI developers if their material was used without permission. Sony said the goal is to help ensure creators are properly rewarded.
Efforts to safeguard intellectual property have intensified across the music industry. Sony Music Entertainment in the US previously filed a copyright infringement lawsuit in 2024 over AI-generated music, underscoring wider tensions around AI and creative rights.