UNESCO, UNICEF and ITU publish Charter for Public Digital Learning Platforms

The United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations Children’s Fund (UNICEF), and the International Telecommunication Union (ITU) have published a Charter for Public Digital Learning Platforms, which sets out principles to guide governments in developing and governing digital learning systems.

The Charter states that education is a human right and a public good, and emphasises that digital learning platforms should support public education systems rather than replace in-person schooling. It describes such platforms as components of broader education systems that bring together content, technology, and users to support teaching and learning.

The Charter encourages governments to establish and maintain public digital learning platforms as part of national education infrastructure. It notes that, in many contexts, the absence or limited quality of such platforms has led to increased reliance on private-sector solutions, which may not always align with public education objectives.

The Charter outlines seven principles for public digital learning platforms, covering areas including:

  • public governance and financing, with oversight by public authorities;
  • inclusion, including accessibility, multilingual support, and cultural relevance;
  • pedagogical design, with a focus on teacher-led learning;
  • integration with education systems and public digital infrastructure;
  • open standards and interoperability;
  • user-focused development based on educational needs;
  • trustworthiness, including data protection, safety, and reliability.

The document also highlights the importance of data governance, stating that data generated through such platforms should remain under public control and be managed in accordance with applicable laws, with safeguards for privacy and security.

The Charter was developed under the UNESCO–UNICEF Gateways to Public Digital Learning Initiative, with input from governments and international organisations. It was released on the occasion of the International Day of Digital Learning 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media linked to declining well-being among young people

The World Happiness Report 2026 has identified a growing decline in well-being among young people, with increased social media use emerging as a key contributing factor. These findings suggest that digital habits are increasingly shaping life satisfaction, particularly across Western societies.

The report notes that younger age groups now report significantly lower happiness levels compared to previous decades.

In regions such as North America and Western Europe, the decline coincides with a sharp rise in time spent on social media platforms. Researchers highlight that heavy usage is associated with measurable reductions in well-being, especially among younger users.

Alongside these trends, the report continues to rank Finland as the happiest country globally, reflecting broader stability in Nordic nations. However, such stability contrasts with emerging concerns about mental health and social outcomes in more industrialised regions, where digital environments are playing an increasingly influential role.

While the report identifies risks including cyberbullying, depression and online exploitation, it does not advocate for complete restrictions. Instead, it emphasises the need for carefully designed regulatory approaches that balance protection with the potential benefits of digital connectivity.

Inspired Education introduces AI-driven learning for primary schools

Inspired Education has unveiled a new AI-enabled primary teaching model designed to modernise traditional learning systems. The programme aims to better align education with how children learn in a digital and fast-changing environment.

The model combines core academic subjects in the morning with applied learning in the afternoon. Students focus on life skills such as problem-solving, entrepreneurship and communication alongside standard curriculum content.

Learning is structured around mastery rather than age, allowing children to progress at their own pace. AI-powered tools are used to personalise lessons and support faster and more adaptive learning outcomes.

The first early-access programme will launch in Central London in January 2027. Further rollouts are planned across cities, including Lisbon, Milan, Madrid, Mexico City, São Paulo and Auckland.

Developers say the approach responds to growing demand from parents for AI-integrated education. The initiative reflects broader efforts to prepare students with digital, practical and future-ready skills.

UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

Parents underestimate how teenagers use AI in daily life

Parents often believe they understand how their children use AI tools in daily life, but recent studies suggest a clear and growing disconnect. Teenagers are using AI more frequently and in more complex ways than most adults realise.

Research indicates that 64% of teens use AI, while only 51% of parents think their children do. A large share of families have never discussed AI, leaving teenagers to navigate its role without guidance.

Teenagers commonly use AI for schoolwork, research and entertainment as part of their routine activities. However, a notable number also rely on it for advice, conversation and even emotional support in personal situations.

Experts warn that this awareness gap can increase risks linked to misuse and emotional dependence on AI tools. Limited parental understanding means many overlook how strongly AI is influencing behaviour and decision-making.

Despite these concerns, many teenagers feel confident using AI and see it as a helpful tool. Specialists emphasise that open conversations are essential to ensure more responsible and balanced use at home.

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.

EU child safety rules lapse amid ongoing debate over privacy and enforcement

The European Union has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April.

Discussions between the European Parliament and the Council of the European Union concluded without reaching a consensus on how to proceed with such measures.

The existing rules permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.

The European Commission had proposed a temporary extension while negotiations continue on a permanent framework under the Child Sexual Abuse Regulation, but differing views on scope and safeguards prevented agreement.

Stakeholders across sectors have highlighted the importance of maintaining effective tools to address online harms, while also emphasising the need to respect fundamental rights.

Previous periods of legal uncertainty have shown that detection capabilities may be affected when such frameworks are absent, although assessments of effectiveness remain subject to ongoing debate.

At the same time, concerns have been raised regarding the broader implications of monitoring digital communications. Some perspectives stress that any approach should carefully consider privacy protections, particularly in relation to secure and encrypted services.

Attention now turns to ongoing negotiations on a long-term regulatory solution.

The outcome will shape how the EU approaches the challenge of addressing harmful online content while safeguarding rights and ensuring proportional and transparent enforcement.

UNESCO launches research on harmful online content governance in South Africa

A new research initiative led by UNESCO is examining the governance of harmful online content in South Africa, bringing together actors from government, academia, civil society and technology platforms to strengthen digital governance frameworks.

Conducted under the Social Media 4 Peace programme and supported by the EU, the study investigates the spread and impact of hate speech and disinformation while assessing existing regulatory approaches and platform governance systems.

Emphasis is placed on identifying structural gaps and developing practical responses suited to the country’s socio-political context.

Stakeholder engagement has shaped the research design to reflect local realities, with the aim of producing actionable and rights-based recommendations. As Kola Ijasan, a researcher at Research ICT Africa, noted:

‘At Research ICT Africa, we don’t want this study to end with generic recommendations. We are aiming for grounded insights into how social media is shaping information integrity in our context, alongside practical guidance that regulators, platforms, and civil society can apply.’

Regulatory perspectives also highlight the importance of understanding emerging risks. As Nomzamo Zondi, a South African regulator, stated:

‘We are particularly interested in identifying regulatory gaps – areas where current laws and frameworks fall short in addressing emerging digital risks.’

Findings are expected to contribute to evidence-based policymaking, strengthen platform accountability and safeguard freedom of expression and access to information.

GDPR changes debated as EU seeks balance on data protection rules

Debate over potential updates to the GDPR is intensifying, as Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping legislative changes in an omnibus package.

Concerns raised in the European Parliament highlight risks associated with altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU’s data protection framework.

Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.

Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.

Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.
