EMFA guidance sets expectations for Big Tech media protections

The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.

Article 18 has been in effect for six months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.

The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.

The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.

The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dutch MPs renew push to move data off US clouds

Dutch MPs have renewed calls for companies and public services in the Netherlands to reduce reliance on US-based cloud servers. The move reflects growing concern over data security and foreign access to sensitive Dutch data.

Research by NOS found that two-thirds of essential service providers in the Netherlands rely on at least one US cloud server. Local councils, health insurers and hospitals remain heavily exposed.

Concerns intensified following a proposed sale of Solvinity, which manages the DigiD system used across the Netherlands. A sale to a US firm could place Dutch data under the US Cloud Act.

Parties including D66, VVD and CDA say critical infrastructure data in the Netherlands should be prioritised for protection. Dutch cloud providers say Europe could handle most services if procurement rules changed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawsuits target social media platforms for deliberate child engagement designs

A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.

The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.

The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.

The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.

Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, instead of social media alone.

Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.

Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while a court in Oakland will hear cases brought by school districts.

More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance takes focus at UN security dialogue

The UN will mark the fourth International Day for the Prevention of Violent Extremism Conducive to Terrorism on 12 February 2026 with a high-level dialogue focused on AI. The event will examine how emerging technologies are reshaping both prevention strategies and extremist threats.

Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.

A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.

Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU telecom simplification at risk as Digital Networks Act adds extra admin

The EU’s ambition to streamline telecom rules faces fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.

The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.

Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.

Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.

The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coal reserves could help Nigeria enter $650 billion AI economy

Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of the projected $650 billion AI investment by strengthening its energy supply capacity.

AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.

Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed industrial growth.

The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI redefines criminal justice decision making

AI is increasingly being considered for use in criminal justice systems, raising significant governance and accountability questions. Experts warn that, despite growing adoption, there are currently no clear statutory rules governing the deployment of AI in criminal proceedings, underscoring the need for safeguards, transparency, and human accountability in high-stakes decisions.

Within this context, AI is being framed primarily as a support tool rather than a decision maker. Government advisers argue that AI could assist judges, police, and justice officials by structuring data, drafting reports, and supporting risk assessments, while final decisions on sentencing and release remain firmly in human hands.

However, concerns persist about the reliability of AI systems in legal settings. The risk of inaccuracies, or so-called hallucinations, in which systems generate incorrect or fabricated information, is particularly problematic when AI outputs could influence judicial outcomes or public safety.

The debate is closely linked to wider sentencing reforms aimed at reducing prison populations. Proposals include phasing out short custodial sentences, expanding alternatives such as community service and electronic monitoring, and increasing the relevance of AI-supported risk assessments.

At the same time, AI tools are already being used in parts of the justice system for predictive analytics, case management, and legal research, often with limited oversight. This gap between practice and regulation has intensified calls for clearer standards and disclosure requirements.

Proponents also highlight potential efficiency gains. AI could help ease administrative burdens on courts and police by automating routine tasks and analysing large volumes of data, freeing professionals to focus on judgment and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Learnovate launches community of practice on AI for learning

The Learnovate Centre, a global innovation hub for the future of work and learning based at Trinity College Dublin, is spearheading a community of practice on responsible AI in learning. The group brings together educators, policymakers, institutional leaders and sector specialists to discuss safe, effective and compliant uses of AI in educational settings.

This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.

One of the community’s early activities includes virtual meetings designed to build consensus around AI norms in teaching, compliance strategies and knowledge exchange on real-world implementation.

Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.

Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.

It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and speaking on community stages will be restricted to users confirmed as adults rather than shared with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech firms push longer working hours to compete in AI race

Tech companies competing in AI are increasingly expecting employees to work longer weeks to keep pace with rapid innovation. Some start-ups openly promote 70-hour schedules, presenting intense effort as necessary to launch products faster and stay ahead of rivals.

Investors and founders often believe that extended working hours improve development speed and increase the chances of securing funding. Fast growth and fierce global competition have made urgency a defining feature of many AI workplaces.

However, research shows productivity rises only up to a limit before fatigue reduces efficiency and focus. Experts warn that excessive workloads can lead to burnout and make it harder for companies to retain experienced professionals.

Health specialists link extended working weeks to higher risks of heart disease and stroke. Many experts argue that smarter management and efficient use of technology offer safer and more effective paths to lasting productivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!