France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court bans harmful Grok AI-generated images

A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.

The case was brought by Offlimits, which argued that safeguards were failing. The court found sufficient evidence that harmful images could still be created despite existing restrictions.

The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.

The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and the protection of fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

The California initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Australia reviews compliance with under-16 social media age ban

Australia’s eSafety Commissioner has released an update on rules requiring platforms to prevent users under 16 from holding accounts. Early results show significant action by companies, but also ongoing challenges in fully enforcing the restrictions.

By mid-December 2025, around 4.7 million accounts were removed or restricted, with more than 300,000 additional accounts blocked by March 2026. Despite these reductions, many children continue to retain accounts, create new ones, or pass age assurance checks.

Regulators identified several compliance concerns, including platforms that allow repeated attempts at age verification, in effect inviting some users to revise their stated ages. Reporting systems for underage accounts were often difficult to access, particularly for parents.

Investigations into five major platforms are ongoing to determine whether they have taken reasonable steps to meet their legal obligations. Authorities are assessing systems and processes rather than individual accounts, with enforcement decisions expected by mid-2026.

A new legislative rule introduced in March 2026 targets platform features linked to potential harm, such as recommender systems and continuous content feeds. Regulators will continue working with industry while gathering evidence and maintaining transparency during the enforcement process.

EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.

UNESCO initiative drives new digital platform governance frameworks in South Asia

South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.

A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.

The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.

The initiative also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.

The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.

UK-Philippines partnership advances digital education and EdTech

The British Embassy in Manila and the Philippines’ Department of Education have expanded cooperation to advance EdTech and digital learning, focusing on inclusive and evidence-based approaches instead of fragmented implementation.

The partnership aims to strengthen foundational learning while supporting long-term resilience in the education system.

Support is being delivered through EdTech Hub, with initiatives centred on developing a National EdTech Policy, improving responses to climate-related disruptions, and expanding the use of AI in education administration.

The programme includes pilot projects and evaluation frameworks designed to ensure technology adoption remains effective, scalable, and responsive to local needs.

A key component involves participation in global AI initiatives, including an observatory and challenge programme to build institutional capacity and encourage experimentation.

These efforts seek to enhance efficiency in education systems while supporting innovation in teaching and learning environments, particularly in areas affected by environmental and structural challenges.

The collaboration between the UK and the Philippines reflects a broader commitment to digital transformation in education across Southeast Asia, aiming to ensure equitable access to learning opportunities.

By combining research, policy development, and technological innovation, both sides seek to prepare students and institutions for evolving demands while maintaining a focus on inclusion and long-term sustainability.

Brazil study maps age assurance practices across 25 digital services

A new study by CGI.br and NIC.br examines how digital services in Brazil implement age assurance measures. Presented in Brasília during an event on the Digital Child and Adolescent Statute (ECA Digital), the study reviewed 25 popular online services used by children and adolescents.

The study found that most of the services analysed do not apply age checks at the point of registration, including some platforms aimed at adults. According to the release, age assurance usually appears later, when users try to access specific features such as livestreaming or monetisation.

Titled ‘Age assurance practices in 25 digital services used by children in Brazil’, the study analysed governance documents published before the ECA Digital entered into force. From 18 March, the law requires information-society services aimed at children and adolescents in Brazil, or likely to be accessed by them, to adopt effective age-assurance measures and to offer parental supervision tools.

The study found that 11 of the 25 platforms relied on third-party age-assurance services, particularly social media and generative AI platforms. Official identity document submission was the most common verification method, while selfie-based checks were the most common age-estimation tool. Differences were also found between the minimum ages stated by services and those listed in app stores, and some adult-oriented platforms could still be accessed by younger users with parental consent.

Parental supervision tools were available in 15 of the 25 services, but activation was usually optional and depended on parents or guardians. Transparency also emerged as a weakness: only six services published Brazil-specific reports, and only one explained how its minimum-age policy was applied. Policies were often spread across multiple pages, averaging 22 pages per service, and around 40% of the services provided related information in other languages.

Fábio Senne, General Research Coordinator at Cetic.br | NIC.br, said: ‘One of the study’s central aims was to verify the integrity of the information made available by digital services in Brazil. It is essential that data on age protection be communicated clearly and accessibly, allowing more informed and effective parental supervision.’

Juliana Cunha, manager of the Digital Public Policy Advisory Office at CGI.br | NIC.br, said: ‘This survey was developed to support the debate on implementation of the ECA Digital and to offer a clear understanding of the current landscape. This initiative forms part of a broader set of actions by CGI.br and NIC.br aimed at providing technical evidence to support effective enforcement of the law. Our commitment is to foster a safer and more responsible digital ecosystem for children and adolescents in Brazil.’

The release says the study used as a methodological reference the OECD technical paper ‘Age assurance practices of 50 online services used by children’, published in 2025. Information was collected between 10 and 30 January 2026 from public documents made available by the services in Brazil, totalling 550 pages analysed. The event also marked the launch of TIC Kids Online Brazil 2025, a publication on internet use by children and adolescents aged 9 to 17 in Brazil.

Boston schools expand AI learning initiative

A new partnership led by the City of Boston aims to expand AI literacy across public schools, supported by funding from tech entrepreneur Paul English. The initiative brings together government, academia and industry to strengthen digital skills.

The programme will introduce AI-focused learning in high schools, alongside teacher training and the development of industry-informed curricula. Plans include creating student ambassador roles and offering access to advanced courses.

The University of Massachusetts Boston will help design educational content and provide resources through its applied AI institute. The collaboration aims to prepare students for changing job markets shaped by emerging technologies.

Officials say the effort will support responsible and ethical use of AI while opening career pathways. An advisory board of industry experts will guide the programme and connect schools with the wider technology sector.

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification, a term coined by journalist Cory Doctorow, describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.
