UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICO and Ofcom issue guidance on age assurance and online safety

The Information Commissioner’s Office and Ofcom have issued a joint statement outlining how age assurance measures should align with online safety and data protection requirements.

The guidance focuses on protecting children from harm online rather than treating safety and privacy as separate obligations, reflecting closer coordination between the two regulators.

The statement is directed at digital services that are likely to be accessed by children and fall within the scope of the Online Safety Act and UK data protection laws.

It provides a practical overview of existing policies, helping organisations understand how to meet both regulatory frameworks while implementing age assurance technologies.

Rather than introducing new rules, the guidance clarifies how current requirements interact in practice. It highlights the importance of designing systems that both verify users’ ages and safeguard personal data, ensuring that safety measures do not undermine privacy protections.

The approach encourages organisations to integrate compliance into service design instead of addressing obligations separately.

By aligning regulatory expectations, the ICO and Ofcom aim to support organisations in delivering safer online environments for children while maintaining strong data protection standards.

The joint effort signals a broader move towards coordinated digital regulation, where safety and privacy are addressed together to reflect the complexities of modern online services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU watchdogs launch GDPR transparency sweep

The European Data Protection Board has launched a Europe-wide enforcement initiative to examine transparency and information obligations under the GDPR. The programme forms part of its Coordinated Enforcement Framework for 2026.

Twenty-five national data protection authorities will assess how organisations inform people about the processing of their personal data. Reviews will involve formal investigations and fact-finding exercises across multiple sectors.

Authorities plan to exchange findings later in the year to build a shared picture of compliance trends. A consolidated report will guide follow-up measures at both the national and EU levels.

The framework supports closer regulatory cooperation and consistent GDPR enforcement. Previous coordinated actions examined cloud services, data protection officers, access rights and the right to erasure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU privacy bodies back cybersecurity overhaul

The European Data Protection Board and the European Data Protection Supervisor have backed proposals to strengthen the EU cybersecurity law while safeguarding personal data. Their joint opinion addresses reforms to the Cybersecurity Act and updates to the NIS2 Directive.

Regulators support plans to reinforce the mandate of the European Union Agency for Cybersecurity and expand cybersecurity certification across digital supply chains. Clearer coordination between ENISA and privacy authorities is seen as essential for consistent oversight.

Advice also calls for limits on the processing of personal data and for prior consultation on technical rules affecting privacy. Certification schemes should align with the GDPR and help organisations demonstrate compliance.

Additional recommendations include broader cybersecurity skills training and a single EU entry point for personal data breach notifications. Proposed changes would also classify digital identity wallet providers as essential entities under the EU security rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital Services Act disinformation signatories publish first 2026 reports

Signatories to the EU Code of Conduct on Disinformation have published new transparency reports describing the measures they say they are taking to reduce the spread of disinformation online. According to the European Commission, the reports are the first ones submitted since the Code was recognised as a code of conduct under the Digital Services Act.

The reports are available through the Code’s Transparency Centre and come from a broad group of signatories, including online platforms such as Google, Meta, Microsoft, and TikTok, as well as fact-checkers, research organisations, civil society bodies, and representatives of the advertising industry. The European Commission says the reporting round covers the period from 1 July to 31 December 2025 and marks the first full reporting cycle linked to the Digital Services Act.

Dedicated sections in the reports cover responses to ongoing crises, notably the conflict in Ukraine, as well as measures intended to safeguard the integrity of elections. Data on the implementation of disinformation-related measures is also included, alongside developments in signatories’ policies, tools, and partnerships under the Digital Services Act framework.

The reporting cycle carries greater significance because of the Code’s changed legal and regulatory position. The Commission says the Code was endorsed on 13 February 2025 by the Commission and the European Board for Digital Services, at the request of the signatories, as a code of conduct within the meaning of the Digital Services Act. From 1 July 2025, the Code became part of the co-regulatory framework under the Digital Services Act.

The Code now plays a more formal role than under its earlier voluntary setup. According to the Commission, signatories’ adherence to its commitments is subject to independent annual auditing, and the Code serves as a relevant benchmark for determining compliance with Article 35 of the Digital Services Act. The Commission also says the Code has become a ‘significant and meaningful benchmark of DSA compliance’ for providers of very large online platforms and very large online search engines that adhere to its commitments under the Digital Services Act.

Reporting obligations differ depending on the type of signatory. Under the Code, providers of very large online platforms and very large online search engines commit to reporting, every six months, on the actions taken by their subscribed services. The Commission lists Google Search, YouTube, Google Ads, Facebook, Instagram, Messenger, WhatsApp, Bing, LinkedIn, and TikTok among the covered services, while other non-platform signatories report once per year under the Digital Services Act structure.

Broader policy relevance lies in the EU’s attempt to connect platform self-reporting to a more formal oversight structure. By placing the disinformation Code inside the Digital Services Act framework, the Commission and the Board are using voluntary commitments, transparency reporting, and auditing as part of a co-regulatory approach to systemic online risks. The reports themselves do not prove compliance, but they now carry greater weight within the wider Digital Services Act architecture for platform governance.

One further point is that the Commission notice focuses on publication of the reports rather than evaluating their quality or effectiveness. The notice says the reports describe measures, data, and policy developments, but it does not assess whether the actions taken by signatories were sufficient. Such a distinction matters in politically sensitive areas such as election integrity and crisis-related disinformation, especially where transparency under the Digital Services Act may shape future scrutiny.

Taken together, the first reporting round shows how the EU is using the Digital Services Act not only to impose direct legal obligations on large platforms and search engines, but also to anchor voluntary commitments within a more structured regulatory environment. Continued reporting, auditing, and review will determine how much practical weight the Code carries within the Digital Services Act and how effectively the Digital Services Act supports oversight of disinformation risks online.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO, UNICEF and ITU publish Charter for Public Digital Learning Platforms

The United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations Children’s Fund (UNICEF), and the International Telecommunication Union (ITU) have published a Charter for Public Digital Learning Platforms, which sets out principles to guide governments in developing and governing digital learning systems.

The Charter states that education is a human right and a public good, and emphasises that digital learning platforms should support public education systems rather than replace in-person schooling. It describes such platforms as components of broader education systems that bring together content, technology, and users to support teaching and learning.

According to the Charter, governments are encouraged to establish and maintain public digital learning platforms as part of the national education infrastructure. The document notes that, in many contexts, the absence or limited quality of such platforms has led to increased reliance on private-sector solutions, which may not always align with public education objectives.

The Charter outlines seven principles for public digital learning platforms, covering areas including:

  • public governance and financing, with oversight by public authorities;
  • inclusion, including accessibility, multilingual support, and cultural relevance;
  • pedagogical design, with a focus on teacher-led learning;
  • integration with education systems and public digital infrastructure;
  • open standards and interoperability;
  • user-focused development based on educational needs;
  • trustworthiness, including data protection, safety, and reliability.

The document also highlights the importance of data governance, stating that data generated through such platforms should remain under public control and be managed in accordance with applicable laws, with safeguards for privacy and security.

The Charter was developed under the UNESCO–UNICEF Gateways to Public Digital Learning Initiative, with input from governments and international organisations. It was released on the occasion of the International Day of Digital Learning 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI Foundation expands investment strategy to shape AI benefits and resilience

The OpenAI Foundation has outlined a major expansion of its activities, signalling a broader effort to ensure AI delivers tangible benefits while addressing emerging risks.

The organisation plans to invest at least $1 billion over the next year, forming part of a wider $25 billion commitment focused on disease research and AI resilience.

AI is increasingly reshaping healthcare, scientific discovery and economic productivity, offering pathways to faster medical breakthroughs and more efficient public services.

OpenAI Foundation frames such potential as central to its mission, while recognising that more capable systems introduce complex societal and safety challenges that require coordinated responses.

Initial programmes prioritise life sciences, including research into Alzheimer’s disease, expanded access to public health data, and accelerated progress on high-mortality conditions.

Parallel efforts examine the economic impact of automation, with engagement across policymakers, labour groups and businesses aimed at developing practical responses to labour market disruption.

A dedicated resilience strategy addresses risks linked to advanced AI systems, including safety standards, biosecurity concerns and the protection of children and young users.

Alongside community-focused funding, the OpenAI Foundation’s initiative reflects a dual objective: enabling innovation while protecting societies from technological disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s watchdog highlights surge in AI impersonation scams

A growing wave of AI-driven scams is prompting warnings from Canada’s Competition Bureau, as fraudsters increasingly impersonate government officials through deepfake technology and fake websites.

Authorities report a steady rise in complaints linked to deceptive schemes designed to exploit public trust.

Scammers are using synthetic media to mimic well-known political figures, including senior government officials, to extract personal information and spread misleading narratives.

Such tactics demonstrate how AI tools are being weaponised for social engineering rather than for legitimate communication.

The trend reflects a broader shift in digital fraud, where increasingly sophisticated techniques blur the line between authentic and fabricated content. As synthetic identities become more convincing, individuals find it harder to verify the legitimacy of online interactions and official communications.

In response, authorities in Canada are intensifying awareness efforts during Fraud Prevention Month, offering expert guidance on identifying and avoiding scams.

The development underscores the urgent need for stronger safeguards and public education to counter evolving AI-enabled threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IWF report reveals a rapid growth of synthetic child abuse material online

A surge in AI-generated child sexual abuse material has raised urgent concerns across Europe, with the Internet Watch Foundation reporting record levels of harmful content online.

Findings of the IWF report indicate that AI is accelerating both the scale and severity of abuse, transforming how offenders create and distribute illicit material.

Data from 2025 reveals a sharp increase in AI-generated imagery and video, with over 8,000 cases identified and a dramatic rise in highly severe content.

Synthetic videos have grown at an unprecedented rate, reflecting how emerging tools are being used to produce increasingly realistic and extreme scenarios rather than traditional formats.

Analysis of offender behaviour highlights a disturbing trend toward automation and accessibility.

Discussions on dark web forums suggest that future agentic AI systems may enable the creation of fully produced abusive content with minimal technical skill. The integration of audio and image manipulation further deepens risks, particularly where real children’s likenesses are involved.

Calls for regulatory action are intensifying as policymakers in the EU debate reforms to the Child Sexual Abuse Directive.

Advocacy groups emphasise the need for comprehensive criminalisation, alongside stronger safety-by-design requirements, arguing that technological innovation must not outpace child protection frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!