UNESCO, Lebanon and Télé Liban launch campaign to promote media literacy

Lebanon’s Ministry of Information, UNESCO and Télé Liban have launched a nationwide media and information literacy campaign aimed at raising public awareness of misinformation and encouraging more responsible information sharing.

Funded by UNIFIL, the initiative, titled ‘Share Responsibly: Be Part of the Truth, Not Misinformation’, uses short episodes inspired by daily life in Lebanon to show how misleading information can spread and shape public perception.

The campaign features Yara Bou Monsef in scenarios set in taxis, shops, elevators and other public spaces, illustrating how people encounter and respond to misinformation in everyday situations. Through these examples, the organisers aim to encourage audiences to verify information before sharing it online or offline.

The initiative forms part of broader efforts to strengthen media and information literacy, promote critical thinking and support more resilient and informed communities.

Why does it matter?

Misinformation campaigns are often discussed in relation to elections, conflict or online platforms, but public resilience also depends on everyday information habits. By using familiar public spaces and locally recognisable scenarios, the campaign frames media literacy as a civic skill rather than only a technical or platform-governance issue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ireland and the EU intensify DSA pressure on Meta

Coimisiún na Meán, Ireland's media regulator, has launched two formal investigations under the Digital Services Act into the design of Meta's recommender systems on Facebook and Instagram. The investigations focus on whether users are prevented from choosing recommendation feeds that are not based on profiling of their personal data.

Coimisiún na Meán said concerns emerged following platform supervision reviews and complaints linked to potential ‘dark patterns’ and deceptive interface designs. Regulators are examining whether users can easily access and modify non-profiled recommendation feeds as required under Article 27 of the DSA, alongside whether interface designs may improperly influence user choices under Article 25.

John Evans, Digital Services Commissioner at Coimisiún na Meán, said recommender systems can repeatedly push harmful material into user feeds, particularly affecting children and younger users. The regulator also warned that Very Large Online Platforms (VLOPs) must ensure users can exercise their rights under the DSA without manipulation or unnecessary barriers.

EU investigates Meta over under-13 access on Instagram and Facebook

At the same time, the European Commission has issued preliminary findings that Meta may be in breach of the DSA over failures to adequately prevent children under 13 from accessing Instagram and Facebook. Regulators said Meta's age verification and reporting systems may be ineffective, while the company's risk assessments allegedly failed to properly address harms faced by underage users.

Why does it matter?

These investigations are critical because they could shape how the DSA is enforced across Europe, particularly in cases involving children and algorithmic recommendation systems. If regulators conclude that Meta failed to properly protect minors or used manipulative interface designs that discouraged users from choosing non-profiled feeds, the case may set a wider precedent for how large online platforms handle age assurance, user consent, privacy protections, and recommender system transparency under EU law.


Online Safety Act brings progress, but UK children still face harm online

A new report from Internet Matters suggests the UK’s Online Safety Act has introduced more visible safety measures for children, but has not yet delivered the step change needed to make their online lives meaningfully safer. Drawing on surveys and focus groups with children and parents, the report presents an early view of how the law is affecting families in practice.

The findings point to some clear signs of progress. Parents and children report seeing more safety features, including improved reporting tools, content filters, restrictions on certain functions, and stronger parental controls. Many children also say the content they encounter online is becoming more age-appropriate.

At the same time, the report argues that important weaknesses remain. Children continue to encounter harmful content at high rates, while age verification is widely seen as easy to bypass. Internet Matters also says that some of the issues families care most about, including excessive screen time and the risks linked to AI-generated content, are still not adequately addressed under the current framework.

The report concludes that parents are still carrying too much of the burden of keeping children safe online. It calls for stronger enforcement, more effective age assurance, tighter limits on harmful features, and a broader safety-by-design approach to digital services used by children in the UK.


China closes consultation on digital virtual human services

The Cyberspace Administration of China has closed its public consultation on the draft Administrative Measures for Digital Virtual Human Information Services, which set out proposed rules for digital virtual human services provided to the public in China.

The notice states that the consultation opened in April 2026 and that comments were accepted until 6 May 2026. According to the draft, the measures would apply to internet information services delivered to the public within China through digital virtual humans.

The draft says providers and users must process data for lawful purposes and within a lawful scope, use data from legal sources, and fulfil their data security responsibilities. It also requires technical and other necessary measures to protect data storage and transmission and to prevent leaks or improper use.

The text further requires digital virtual human service providers and users to establish mechanisms for security risk monitoring, early warning, emergency response and anti-addiction, to strengthen management of content direction, and to retain logs. Providers whose services have public opinion attributes or social mobilisation capacity would also be required to complete algorithm filing procedures and security assessments in line with existing national rules.

Beyond cybersecurity and data protection, the draft includes provisions on personal information, personality rights, intellectual property, content controls, labelling requirements, and protections for minors. It defines digital virtual humans as virtual figures in the non-physical world that simulate human appearance and may have voice, behaviour, interaction abilities, or personality traits, using graphics, digital image processing, or AI technologies.


Major publishers sue Meta over Llama AI training

Meta and Mark Zuckerberg are facing a new copyright lawsuit from five major publishers, Hachette, Macmillan, McGraw-Hill, Elsevier, and Cengage, along with author Scott Turow. The plaintiffs accuse the company of using millions of copyrighted books, journal articles, textbooks, and scholarly works to train its Llama AI models without permission. Filed in the US District Court for the Southern District of New York (Manhattan federal court), the complaint seeks monetary compensation, an injunction, and the destruction of allegedly infringing copies held by Meta.

The complaint argues that Meta’s AI strategy relied on protected works from trade, education, and academic publishing, including content allegedly taken from pirate libraries such as LibGen and Anna’s Archive, as well as broad web scrapes containing subscription-only material. The publishers also claim Zuckerberg personally directed or authorised the conduct, a charge Meta is expected to contest vigorously.

At the centre of the lawsuit is a policy question now shaping AI governance worldwide: whether large-scale copying for model training can be justified as fair use, or instead requires permission, transparency, and compensation. Meta and other AI developers argue that training enables transformative innovation, while rights holders say commercial models are being built from creative and scholarly labour without licensing. A previous Meta win in a case brought by authors showed that courts may accept fair-use arguments, but only where plaintiffs fail to prove clear market harm.

Either way, the publishers are trying to make that market-harm argument harder to dismiss. Their filing describes Llama as an ‘infinite substitution machine’, capable of generating long-form books, educational materials, and scholarly-style outputs that may compete with human-authored works. The case also points to the alleged erosion of licensing markets, arguing that harm occurs not only when AI outputs imitate books, but also when copyrighted works are copied into commercial training pipelines without consent.

The US Copyright Office’s 2025 report said that fair use in generative AI training requires case-by-case analysis, with market effects and the source of the training material playing central roles. In the EU, the AI Act has shifted the debate toward transparency by requiring general-purpose AI providers to publish summaries of their training data and to comply with the EU copyright rules, including rights reservations for text and data mining.

Why does it matter?

The Meta case is the manifestation of a global shift in digital governance: AI copyright disputes are no longer isolated lawsuits, but part of a broader effort to define lawful data supply chains. Anthropic’s $1.5 billion settlement over pirated books, the EU’s training-data transparency regulation, and continuing legal disputes in the US all point in the same direction: courts and regulators are asking whether AI innovation can remain competitive while respecting the rights, labour, and markets that make high-quality knowledge possible.


New Meta age assurance system aims to prevent underage access

Meta has expanded its use of AI to strengthen age assurance and improve enforcement of underage account policies across its platforms. The systems are designed to detect users under 13 for removal and to place suspected teens into protected Teen Account settings on Instagram and Facebook in regions including the EU, Brazil, and the US.

The technology analyses a range of signals, including profile information, user activity, and other contextual indicators, to estimate age more accurately. Automated systems are also being used to support faster and more consistent review of reports related to underage use.

Visual analysis has also become part of Meta’s broader detection approach, with the company saying its systems look for general age-related indicators rather than attempting to identify specific individuals. Reporting tools have been simplified, and AI-assisted moderation is being used to improve the speed and reliability of enforcement decisions.

Alongside these enforcement measures, Meta is increasing parental engagement through notifications and guidance to encourage more accurate age reporting and safer online behaviour. The wider effort reflects growing pressure on platforms to move beyond self-declared age checks and to build stronger systems to protect younger users.

Why does it matter?

The significance of the move lies in the fact that age assurance is becoming a core platform governance issue rather than a secondary moderation tool. Meta is trying to show that large social platforms can use AI not only to recommend or personalise content, but also to enforce minimum age rules at scale. That matters because regulators are increasingly questioning whether self-declared age data is enough to protect minors online. It also points to a broader shift in which platforms are expected to combine safety obligations, automated detection, and parental tools into a more active system of child protection.


UNESCO supports Western Balkans regulators on EU digital rules implementation

UNESCO organised a study visit for media regulators from the Western Balkans under an EU-funded project on journalism as a public good. The initiative aimed to support preparation for European rules affecting the information ecosystem.

Participants from Albania, Bosnia and Herzegovina, Montenegro, North Macedonia, and Serbia examined implementation of the Digital Services Act (DSA) and the European Media Freedom Act (EMFA). The visit included exchanges with institutions in France and the Netherlands on regulatory approaches.

The Netherlands presented a model based on a risk-based regulatory culture, with separate roles for a Digital Services Coordinator and a media authority. France presented a more integrated structure within a central media regulator, supported by specialised bodies and legislation.

Meetings involved stakeholders, including the House of Representatives of the Netherlands, TikTok, Reporters Without Borders, and UNESCO. Discussions covered platform engagement, regulatory cooperation, and institutional practice.

Participants identified institutional cooperation, technical expertise, and engagement with platforms as key elements of effective implementation. Discussions with Mariya Gabriel also addressed public-interest journalism, platform governance, and regional cooperation to tackle digital risks while safeguarding freedom of expression.


EU and Japan sign cooperation arrangement on digital platform regulation

The European Commission has signed a cooperation arrangement with Japan’s Ministry of Internal Affairs and Communications to strengthen the enforcement of digital platform regulation.

The agreement was concluded during the fourth EU–Japan Digital Partnership Council meeting in Brussels and focuses on cooperation to improve oversight of online platforms under respective regulatory frameworks.

The arrangement supports the implementation of the EU’s Digital Services Act and Japan’s Information Distribution Platform Act. Both sides will collaborate on key areas, including transparency requirements and notice-and-action systems for illegal or harmful online content.

Cooperation will be delivered through technical expert exchanges, joint training sessions, shared research initiatives, and coordinated studies. Officials said closer international alignment is important for maintaining a secure and trusted online environment as digital platforms operate across borders.

The agreement follows similar partnerships with regulators in the United Kingdom and Australia, including joint work on age assurance standards. The Commission said the agreement forms part of broader efforts to develop coordinated approaches to platform governance and digital safety regulation.

Why does it matter? 

The agreement highlights the need for coordinated international oversight of digital platforms operating across borders. As online services expand globally, closer alignment between jurisdictions like the EU and Japan helps close regulatory gaps and strengthen consistent standards for transparency and accountability.


Meta taps blockchain networks for faster creator payments

Meta has introduced USDC payouts for selected Facebook creators in Colombia and the Philippines, marking another step towards using blockchain-based payment rails for creator earnings. The programme allows eligible users to receive funds directly into crypto wallets using Polygon or Solana as settlement networks.

Creators receiving USDC on Polygon can move funds through supported wallets or exchanges and convert them into local currency where off-ramp services are available. The model reduces reliance on traditional cross-border payment channels and is intended to give creators faster and more flexible access to dollar-denominated earnings.

Polygon has been included alongside Solana as part of the payout infrastructure, with Polygon arguing that its network already handles a large share of global USDC transfer activity. Low transaction costs and broad wallet and exchange support are presented as key reasons stablecoin rails are becoming more attractive for recurring digital payouts.

Why does it matter?

The significance of the move lies less in crypto branding than in payment infrastructure. Meta is testing whether stablecoin rails can make creator payouts faster, more flexible, and less dependent on the frictions of traditional cross-border transfers. If this model scales, it would suggest that blockchain networks are becoming useful not only for trading or speculation, but for mainstream platform payments where speed, settlement, and access to dollar-denominated value matter.


The Academy introduces rules excluding AI-generated work from Oscar eligibility

The Academy’s Board of Governors has introduced new rules excluding AI-generated performances and screenplays from eligibility for the Oscars. The updated rules require that recognised work be created and performed by humans.

Under the updated framework, only performances credited in a film's legal billing and demonstrably carried out by individuals with their consent will qualify for an Oscar. Screenplays must also be authored by humans, with the Academy reserving the right to request further disclosure on the use of AI in production.

The update comes as AI technologies are increasingly used in filmmaking, including digital recreations of actors and synthetic performers. Industry tensions around AI have grown in recent years, including during the 2023 writers’ and actors’ strikes.

The move is described as part of efforts within the creative sector to preserve human authorship and artistic control as generative AI tools expand across media production.
