EMFA guidance sets expectations for Big Tech media protections

The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.

Article 18 has been in effect for six months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.

The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.

The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.
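
To give a rough sense of what such a standardised submission could look like in practice, here is a minimal sketch in Python. The field names, the `SelfDeclaration` record, and the `acknowledge` helper are invented for illustration; the guidelines do not prescribe any particular schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration of an EMFA Article 18 self-declaration record.
# Field names are invented for this sketch; the guidelines do not prescribe a schema.

@dataclass
class SelfDeclaration:
    media_outlet: str                 # name of the media service provider
    accounts: list[str]               # several platform accounts may be covered by one submission
    editorial_contact: str            # contact point for follow-up questions
    language: str = "en"              # the mechanism should be available in all EU languages
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn: bool = False           # declarations can be updated or withdrawn later

def acknowledge(declaration: SelfDeclaration) -> str:
    """Return the kind of automated acknowledgement the guidance recommends sending on receipt."""
    return (
        f"Declaration for {declaration.media_outlet} covering "
        f"{len(declaration.accounts)} account(s) received on "
        f"{declaration.submitted_at:%Y-%m-%d}. Contact: {declaration.editorial_contact}."
    )

# Example: one submission covering several accounts of the same outlet.
print(acknowledge(SelfDeclaration(
    media_outlet="Example Broadcaster",
    accounts=["@example_news", "@example_sport"],
    editorial_contact="press@example.org",
)))
```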

The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT begins limited ads test in the US

OpenAI has begun testing advertisements inside ChatGPT for some adult users in the US, marking a major shift for the widely used AI service.

The ads appear only on the Free and Go tiers in the US, while paid plans remain ad-free. OpenAI says responses are unaffected, though critics warn that commercial messaging could blur boundaries over time.

Ads are selected based on conversation topics and prior interactions, prompting concern among privacy advocates. OpenAI says advertisers receive only aggregated data and cannot view conversations.
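
To illustrate the distinction between conversation-level access and aggregate-only reporting in general terms, the sketch below shows topic counts being shared rather than individual conversations. It is purely illustrative: the ad inventory, topic labels, and functions are invented, and none of this describes OpenAI's actual systems.

```python
from collections import Counter

# Illustrative only: a generic pattern for topic-based ad selection with
# aggregate-level advertiser reporting. It does not reflect OpenAI's systems.

AD_INVENTORY = {"travel": "Airline ad", "fitness": "Gym ad", "cooking": "Cookware ad"}

def select_ad(conversation_topics: list[str]) -> str | None:
    """Pick an ad matching the most frequent conversation topic, if any inventory matches."""
    for topic, _count in Counter(conversation_topics).most_common():
        if topic in AD_INVENTORY:
            return AD_INVENTORY[topic]
    return None

def advertiser_report(all_users_topics: list[list[str]]) -> dict[str, int]:
    """Advertisers see only aggregate topic counts, never individual conversations."""
    totals: Counter[str] = Counter()
    for topics in all_users_topics:
        totals.update(set(topics))  # count each topic at most once per user
    return dict(totals)

print(select_ad(["travel", "travel", "cooking"]))              # -> "Airline ad"
print(advertiser_report([["travel"], ["travel", "fitness"]]))  # -> {'travel': 2, 'fitness': 1}
```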

Industry analysts say the move reflects growing pressure to monetise costly AI infrastructure. Regulators and researchers continue to debate whether advertising can coexist with trust in AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawsuits target social media platforms for deliberate child engagement designs

A landmark trial has begun in Los Angeles in which Meta and Google’s YouTube are accused of deliberately designing their platforms to addict children.

The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.

The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.

The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.

Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, instead of social media alone.

Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.

Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while a court in Oakland will hear cases brought by school districts.

More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance takes focus at UN security dialogue

The UN will mark the fourth International Day for the Prevention of Violent Extremism Conducive to Terrorism on 12 February 2026 with a high-level dialogue focused on AI. The event will examine how emerging technologies are reshaping both prevention strategies and extremist threats.

Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.

A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.

Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU telecom simplification at risk as Digital Networks Act adds extra admin

The EU’s ambition to streamline telecom rules faces fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.

The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.

Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.

Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.

The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI redefines criminal justice decision making

AI is increasingly being considered for use in criminal justice systems, raising significant governance and accountability questions. Experts warn that, despite growing adoption, there are currently no clear statutory rules governing the deployment of AI in criminal proceedings, underscoring the need for safeguards, transparency, and human accountability in high-stakes decisions.

Within this context, AI is being framed primarily as a support tool rather than a decision maker. Government advisers argue that AI could assist judges, police, and justice officials by structuring data, drafting reports, and supporting risk assessments, while final decisions on sentencing and release remain firmly in human hands.

However, concerns persist about the reliability of AI systems in legal settings. The risk of inaccuracies, or so-called hallucinations, in which systems generate incorrect or fabricated information, is particularly problematic when AI outputs could influence judicial outcomes or public safety.

The debate is closely linked to wider sentencing reforms aimed at reducing prison populations. Proposals include phasing out short custodial sentences, expanding alternatives such as community service and electronic monitoring, and increasing the relevance of AI-supported risk assessments.

At the same time, AI tools are already being used in parts of the justice system for predictive analytics, case management, and legal research, often with limited oversight. This gap between practice and regulation has intensified calls for clearer standards and disclosure requirements.

Proponents also highlight potential efficiency gains. AI could help ease administrative burdens on courts and police by automating routine tasks and analysing large volumes of data, freeing professionals to focus on judgment and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writing as thinking in the age of AI

In his article, Richard Gunderman argues that writing is not merely a way to present ideas but a core human activity through which people think, reflect and form meaning.

He contends that when AI systems generate text on behalf of users, they risk replacing this cognitive process with automated output, weakening the connection between thought and expression.

According to the piece, writing serves as a tool for reasoning, emotional processing and moral judgment. Offloading it to AI can diminish originality, flatten individual voice and encourage passive consumption of machine-produced ideas.

Gunderman warns that this shift could lead to intellectual dependency, where people rely on AI to structure arguments and articulate positions rather than developing those skills themselves.

The article also raises ethical concerns about authenticity and responsibility. If AI produces large portions of written work, it becomes unclear who is accountable for the ideas expressed. Gunderman suggests that overreliance on AI writing tools may undermine trust in communication and blur the line between human and machine authorship.

Overall, the piece calls for a balanced approach: AI may assist with editing or idea generation, but the act of writing itself should remain fundamentally human, as it is central to critical thinking, identity and social responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and speaking on community stages will be restricted to confirmed adults rather than open to teens.
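
As a rough illustration of the teen-by-default logic described above, the sketch below applies stricter settings unless a user’s age group has been confirmed as adult. The age groups and setting names are invented for this example and are not Discord’s actual configuration keys.

```python
from enum import Enum

# Hypothetical sketch of teen-by-default settings. The keys below are invented
# for illustration and are not Discord's real configuration fields.

class AgeGroup(Enum):
    UNKNOWN = "unknown"          # no age assurance completed yet
    TEEN = "teen"
    ADULT = "adult"              # confirmed via estimation or ID verification

def default_settings(age_group: AgeGroup) -> dict[str, bool]:
    """Everyone gets teen-appropriate defaults unless confirmed as an adult."""
    is_adult = age_group is AgeGroup.ADULT
    return {
        "blur_sensitive_media": not is_adult,
        "age_gated_servers_allowed": is_adult,
        "separate_unknown_message_requests": not is_adult,
        "can_speak_on_stages": is_adult,
    }

print(default_settings(AgeGroup.UNKNOWN)["blur_sensitive_media"])  # True: teen defaults apply
print(default_settings(AgeGroup.ADULT)["can_speak_on_stages"])     # True: adults only
```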

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in education reveals a critical evidence gap

Universities are increasingly reorganising around AI, treating AI-based instruction as a proven solution for delivering education more efficiently. This shift reflects a broader belief that AI can reliably replace or reduce human-led teaching, despite growing uncertainty about its actual impact on learning.

Recent research challenges this assumption by re-examining the evidence used to justify AI-driven reforms. A comprehensive re-analysis of AI and learning studies reveals severe publication bias, with positive results published far more frequently than negative or null findings. Once corrected, reported learning gains from AI shrink substantially and may be negligible.
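
To make the publication-bias point concrete, here is a small, purely illustrative simulation; it is not the re-analysis’s actual method or data. When null and negative results are less likely to be published, the average published effect overstates the true effect, which is why correcting for the bias shrinks reported gains.

```python
import random
from statistics import mean

# Toy simulation of publication bias, for illustration only; it does not
# reproduce the re-analysis described above or use its data.

random.seed(0)
TRUE_EFFECT = 0.05          # assume a tiny true learning gain
studies = [random.gauss(TRUE_EFFECT, 0.30) for _ in range(2000)]

# Suppose positive results are always published, but null or negative ones
# only make it into print 20% of the time.
published = [effect for effect in studies if effect > 0 or random.random() < 0.20]

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"mean of all studies:    {mean(studies):.2f}")     # close to the true effect
print(f"mean of published only: {mean(published):.2f}")   # inflated by selective publication
```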

More critically, the research exposes deep inconsistency across studies. Outcomes vary so widely that the evidence cannot predict whether AI will help or harm learning in a given context, and no educational level, discipline, or AI application shows consistent benefits.

By contrast, human-mediated teaching remains a well-established foundation of learning. Decades of research demonstrate that understanding develops through interaction, adaptation, and shared meaning-making, leading the article to conclude that AI in education remains an open question, while human instruction remains the known constant.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!