Polish authorities have urged the European Commission to investigate TikTok over AI-generated content advocating Poland’s exit from the European Union. Officials say the videos pose risks to democratic processes and public order.
Deputy Minister for Digitalisation Dariusz Standerski highlighted that the narratives, distribution patterns, and synthetic audiovisual material suggest TikTok may not be fulfilling its obligations under the EU Digital Services Act for Very Large Online Platforms.
The associated TikTok account has since disappeared from the platform.
The Digital Services Act requires platforms to address systemic risks, including disinformation, and allows fines of up to 6% of a company’s global annual turnover for non-compliance. Neither TikTok nor the Commission has commented on the request.
Authorities emphasised that the investigation could set an important precedent for how EU countries address AI-driven disinformation on major social media platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Rapidly produced AI "slop" videos are designed to grab attention, but they make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.
India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.
The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but the rise of automated videos raises concerns over quality and brand safety.
Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US State Department has imposed a visa ban on former EU Commissioner Thierry Breton and four other individuals, citing their alleged roles in promoting European regulation of social media platforms. The move reflects growing tensions between Washington and Brussels over digital governance and free expression.
US officials said the visa ban targets figures linked to organisations involved in content moderation and disinformation research. Those named include representatives from HateAid, the Center for Countering Digital Hate, and the Global Disinformation Index, alongside Breton.
Secretary of State Marco Rubio accused the individuals of pressuring US-based platforms to restrict certain viewpoints. A senior State Department official described Breton as a central figure behind the EU’s Digital Services Act, a law that sets obligations for large online platforms operating in Europe.
Breton rejected the US visa ban, calling it a witch hunt and denying allegations of censorship. European organisations affected by the decision criticised the move as unlawful and authoritarian, while the European Commission said it had sought clarification from US authorities.
France and the European Commission condemned the visa ban and warned of a possible response. EU officials said European digital rules are applied uniformly and are intended to support a safe, competitive online environment.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Japan Fair Trade Commission (JFTC) announced it will investigate AI-based online search services over concerns that using news articles without permission could violate antitrust laws.
Authorities said such practices may amount to an abuse of a dominant bargaining position under Japan’s antimonopoly regulations.
The inquiry is expected to examine services from global tech firms, including Google, Microsoft, and OpenAI’s ChatGPT, as well as US startup Perplexity AI and Japanese company LY Corp. AI search tools summarise online content, including news articles, raising concerns about their effect on media revenue.
The Japan Newspaper Publishers and Editors Association warned AI summaries may reduce website traffic and media revenue. JFTC Secretary General Hiroo Iwanari said generative AI is evolving quickly, requiring careful review to keep up with technological change.
The investigation reflects growing global scrutiny of AI services and their interaction with content providers, with regulators increasingly assessing the balance between innovation and fair competition in digital markets.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.
The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.
Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.
Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.
Experts also highlight the lack of strong safeguards in South Korea against malicious litigation compared with the US, where plaintiffs must prove fault by journalists.
The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes rely on relaying statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Japan’s competition authority will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.
The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.
Legal action by Japanese news organisations alleges unauthorised use of articles by AI services. Regulators are assessing whether such practices constitute abuse of market dominance.
The inquiry builds on a 2023 review of news distribution contracts that warned against the use of unfair terms for publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is transforming how news is produced and consumed, moving faster than audiences and policies can adapt. Journalists increasingly use AI for research, transcription and content optimisation, creating new trust challenges.
Ethical concerns are rising when AI misrepresents events or uses content without consent. Media organisations have introduced guidelines, but experts warn that rules alone cannot cover every scenario.
Audience scepticism remains, even as journalists adopt AI tools in daily practice. Transparency, clear human oversight and ethical adoption are key to maintaining credibility and legitimacy.
Europe faces pressure to strengthen its trust infrastructure and regulate the use of AI in newsrooms. Experts argue that democratic stability depends on informed audiences and resilient journalism to counter disinformation.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Generative AI is increasingly being weaponised to harass women in public roles, according to a new report commissioned by UN Women. Journalists, activists, and human rights defenders face AI-assisted abuse that endangers personal safety and democratic freedoms.
The study surveyed 641 women from 119 countries and found that nearly one in four of those experiencing online violence reported AI-generated or amplified abuse.
Writers, communicators, and influencers reported the highest exposure, with human rights defenders and journalists also at significant risk. Rapidly developing AI tools, including deepfakes, facilitate the creation of harmful content that spreads quickly on social media.
Online attacks often escalate into offline harm, with 41% of women linking online abuse to physical harassment, stalking, or intimidation. Female journalists are particularly affected, with offline attacks more than doubling over five years.
Experts warn that such violence threatens freedom of expression and democratic processes, particularly in authoritarian contexts.
Researchers call for urgent legal frameworks, platform accountability, and technological safeguards to prevent AI-assisted attacks on women. They advocate for human rights-focused AI design and stronger support systems to protect women in public life.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.
Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.
The research, supported by the EU and Council of Europe’s PRO-FREX initiative and in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.
It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on AI and Human Rights, and guidance on responsible AI in journalism.
AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact, protecting media diversity, journalistic standards, and public trust online.
The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!