Australia’s OAIC updates the Children’s Online Privacy Code page during public consultation

The Office of the Australian Information Commissioner (OAIC) updated its Children’s Online Privacy Code page, as the regulator continues consultation on a draft code that will set privacy rules for online services likely to be accessed by children.

The page says the Code is being developed under the Privacy and Other Legislation Amendment Act 2024 and will operate as an APP (Australian Privacy Principles) code under the Privacy Act 1988.

According to the OAIC, the Code will apply to online services that fall within the categories of social media services, relevant electronic services, and designated internet services under the Online Safety Act 2021, where those services are likely to be accessed by children or primarily concern children’s activities. The regulator says the Code is intended to put children at the centre of privacy protections in Australia while also lifting privacy practices more broadly.

The updated page highlights the current public consultation on the exposure draft of the Children’s Online Privacy Code. It also refers users to separate consultation pathways for children, young people, parents and carers, and for industry, civil society, academia, and other interested parties.

The OAIC also says it has created a dedicated Privacy for Kids hub to support participation in the consultation. According to the page, the hub includes workbooks and child-friendly guides to help explain the draft Code to children, young people, and parents and carers.

In addition, the updated page invites stakeholders to register for an OAIC webinar on the Children’s Online Privacy Code public consultation. The OAIC says the final Code must be finalised and registered by 10 December 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU targets platforms over child safety and addictive design practices

The European Commission has intensified enforcement under the Digital Services Act (DSA), targeting online platforms for child safety, addictive design features, and insufficient age-verification systems.

Executive Vice-President Henna Virkkunen said the measures are intended to ensure platforms are held accountable when services expose minors to harmful or restricted content.

Actions have been taken against multiple major platforms, including TikTok, Facebook, Instagram, Snapchat, and Shein, over concerns related to design practices such as infinite scroll, autoplay, and highly personalised recommendation systems.

Additional enforcement has also been launched against pornographic platforms for failing to implement adequate age verification tools.

Alongside enforcement, the EU has developed a digital age verification app designed to give users control over personal data through privacy-preserving technology based on zero-knowledge proofs.

The system is already technically ready and is being tested across several member states, either as a standalone tool or integrated into national digital wallets.

The Commission is also preparing an EU-wide coordination mechanism to standardise accreditation of national solutions and avoid fragmentation across member states. The initiative aims to establish a unified age-verification framework that upholds privacy standards and supports wider adoption across digital services.

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter, accompanied by a petition, was sent to Google CEO Sundar Pichai and YouTube CEO Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after shows like Cocomelon contained AI-generated content, that 21% of Shorts recommendations included similar material, and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Dutch court bans harmful Grok AI-generated images

A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.

The case was brought by Offlimits, which argued that safeguards were failing. The court found sufficient evidence that harmful images could still be created despite existing restrictions.

The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.

The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

The initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Australia reviews compliance with under-16 social media age ban

Australia’s eSafety Commissioner has released an update on rules requiring platforms to prevent users under 16 from holding accounts. Early results show significant action by companies, but also ongoing challenges in fully enforcing the restrictions.

By mid-December 2025, around 4.7 million accounts were removed or restricted, with more than 300,000 additional accounts blocked by March 2026. Despite these reductions, many children continue to retain accounts, create new ones, or pass age assurance checks.

Regulators identified several compliance concerns, including platforms that allow repeated attempts at age verification and effectively invite some users to revise their stated age. Reporting systems for underage accounts were often difficult to access, particularly for parents.

Investigations into five major platforms are ongoing to determine whether they have taken reasonable steps to meet their legal obligations. Authorities are assessing systems and processes rather than individual accounts, with enforcement decisions expected by mid-2026.

A new legislative rule introduced in March 2026 targets platform features linked to potential harm, such as recommender systems and continuous content feeds. Regulators will continue working with industry while gathering evidence and maintaining transparency during the enforcement process.

EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.

UNESCO initiative drives new digital platform governance frameworks in South Asia

South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.

A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.

The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.

The initiative also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.

The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.
