UK House of Commons backs amendments in lieu on Children’s Wellbeing Bill with online safety provisions

The UK House of Commons has backed government amendments in lieu to the Children’s Wellbeing and Schools Bill, insisting on its disagreement with the Lords’ amendments and proposing its own amendments in lieu. In the debate, ministers said the Children’s Wellbeing and Schools Bill will place a statutory duty on the Secretary of State to act following the consultation, changing the wording from ‘may’ to ‘must’.

Education minister Olivia Bailey told MPs that the government is consulting on the mechanism, but that ‘under any outcome’ it will impose ‘some form of age or functionality restrictions for children under 16’. She added that curfews would be considered in addition to, not instead of, those restrictions.

Bailey said the Children’s Wellbeing and Schools Bill now requires a statutory progress report three months after Royal Assent, with regulations to be laid within 12 months after that. She said the government intends to move faster and aims to lay the regulations by the end of the year, while describing any further six-month extension as a backstop for ‘exceptional and unforeseen circumstances’ only.

Conservative and Liberal Democrat MPs argued that the timetable remained too slow. Conservative frontbencher Laura Trott said the revised proposal was ‘a huge step forward’ but warned that ‘every month of delay just leaves children more exposed to the harms of social media online’.

Liberal Democrat spokesperson Munira Wilson said the overall timeline could still amount to 21 months before action. The House later voted by 272 to 64 to insist on its disagreement with the Lords’ amendments and to approve the government’s amendments in lieu. Lords amendment 105C was also agreed to, allowing the Children’s Wellbeing and Schools Bill to move forward with the revised online safety provisions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children under 13 use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove children under 13 from the platforms. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Immaterialism expand efforts to combat child abuse content online

Immaterialism has joined the Internet Watch Foundation to strengthen efforts against the spread of child sexual abuse material online.

The partnership introduces IWF tools designed to accelerate the identification of harmful domains and enable faster intervention when abusive activity is detected. By adopting Registrar Alerts and related datasets, the registrar aims to improve its ability to respond to criminal content across the domains under its management.

The collaboration reflects a broader shift towards more proactive action at the domain infrastructure layer. By integrating intelligence tools into operational processes, the initiative aims to disrupt both the deliberate distribution of abusive material and the continued availability of domains linked to it.

The IWF says the volume of detected child sexual abuse material continues to rise, reinforcing the need for coordinated responses between safety organisations and private-sector actors. In that sense, the partnership points to closer alignment between domain service providers and specialist online safety groups working to strengthen protections for children online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would, from 2027, be required to block access for users under 15 using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online safety agreement signed by eSafety and OAIC in Australia

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a memorandum of understanding to strengthen cooperation on issues where online safety and privacy intersect.

The agreement formalises communication pathways between the two regulators and builds on existing collaboration. It covers matters including age-assurance requirements under Australia’s online industry codes and standards, as well as compliance by age-restricted platforms with Social Media Minimum Age obligations.

eSafety Commissioner Julie Inman Grant stated: ‘Both regulators have always recognised that combatting certain harms requires privacy and safety to go hand in hand. For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognise important rights, including the right to privacy.’

She added: ‘Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.’

Inman Grant also linked the agreement to emerging risks associated with new technologies and wider regulatory requirements around age assurance. She continued: ‘It comes at an important time, when the proliferation of new technologies like artificial intelligence is amplifying risks and we are increasingly requiring industry to deploy age-assurance technologies that meet their regulatory obligations and respect privacy in the Australian context.’

Australian Information Commissioner Elizabeth Tydd said the memorandum would support the OAIC’s work in monitoring and responding to emerging online privacy risks and help both agencies deliver their statutory functions under the Online Safety Act.

Tydd added: ‘With this memorandum, we’re not only formalising cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.’

Why does it matter?

A growing number of online safety measures now depend on systems that also raise privacy questions, especially age-assurance tools and other platform controls involving personal data. The agreement gives both regulators a clearer basis for coordinating oversight as Australia expands enforcement around child safety, platform obligations, and emerging technologies such as AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK children’s bill advances with new online safety powers

The UK’s Children’s Wellbeing and Schools Bill has moved forward with a substantial set of online safety amendments, showing how child protection policy is increasingly being folded into wider legislation beyond the Online Safety Act itself. The current printed version of the bill, published as it continues through consideration of amendments between the Commons and Lords, includes new powers that could allow ministers to require providers of specified internet services to prevent or restrict children’s access to certain services, features, or functionalities where there is a risk of harm.

At the centre of the package is a proposed new section 214A to be inserted into the Online Safety Act 2023. Under that provision, the Secretary of State would be able to make regulations requiring providers of specified internet services to block or limit access for children of a specified age. The text makes clear that those powers could apply not only to entire services but also to specific features or functions within them.

That matters because the bill goes well beyond a general statement of principle. The amendments envisage regulations that could address issues such as the amount of time children spend on services, the times of day they can access them, contact from strangers, live audio or video communications, and the ability of unknown users to identify a child’s actual or approximate location. In other words, the government is seeking flexible powers to target specific design features and risks rather than relying only on broad platform-wide restrictions.

The bill would also place Ofcom into the process. As drafted, the regulator is expected to carry out research or provide advice at the Secretary of State’s request to support the making of regulations under the new power, and to publish that advice afterwards. A separate clause would require the Secretary of State, within six months of the Act being passed, to lay before Parliament a progress statement on the first regulations and a timetable for bringing them forward, unless those regulations have already been made.

Another part of the amendment package would give ministers the power to alter the age at which a child can consent to the processing of personal data in relation to information society services, within a range of 13 to 16. The text also allows for regulations on age verification for that consent, including provisions on compliance, monitoring, and enforcement. That means the bill is not only about access and harmful features, but also about the data governance rules that shape children’s use of digital services.

The bill also shows that Parliament has not fully settled the question of how far to go. The latest printed text includes Lords’ amendments to Commons Amendment 38J, which would require the Secretary of State to make regulations imposing highly effective age-assurance and anti-circumvention measures for under-16s on specified regulated user-to-user services. Those Lords’ changes sit within the continuing exchange between the two Houses, rather than representing a final agreed position. The bill remains in the ‘consideration of amendments’ stage and has not yet received Royal Assent.

Why does it matter?

The broader significance of the bill is that the UK is moving towards a more interventionist model of child online safety, one that reaches beyond content moderation into product design, age assurance, feature controls, and the governance of children’s data. But the legislative picture is still in flux. What is emerging is not yet a final settlement, but a live parliamentary struggle over how prescriptive ministers should be, how much discretion they should have, and how strongly the law should push platforms to redesign services for children.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s OAIC updates the Children’s Online Privacy Code page during public consultation

The Office of the Australian Information Commissioner (OAIC) updated its Children’s Online Privacy Code page, as the regulator continues consultation on a draft code that will set privacy rules for online services likely to be accessed by children.

The page says the Code is being developed under the Privacy and Other Legislation Amendment Act 2024 and will operate as an APP Code under the Privacy Act 1988.

According to the OAIC, the Code will apply to online services that fall within the categories of social media services, relevant electronic services, and designated internet services under the Online Safety Act 2021, where those services are likely to be accessed by children or primarily concern children’s activities. The regulator says the Code is intended to put children at the centre of privacy protections in Australia while also lifting privacy practices more broadly.

The updated page highlights the current public consultation on the exposure draft of the Children’s Online Privacy Code. It also refers users to separate consultation pathways for children, young people, parents and carers, and for industry, civil society, academia, and other interested parties.

The OAIC also says it has created a dedicated Privacy for Kids hub to support participation in the consultation. According to the page, the hub includes workbooks and child-friendly guides to help explain the draft Code to children, young people, and parents and carers.

In addition, the updated page invites stakeholders to register for an OAIC webinar on the Children’s Online Privacy Code public consultation. The OAIC says the final Code must be registered by 10 December 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU targets platforms over child safety and addictive design practices

The European Commission has intensified enforcement under the Digital Services Act (DSA), targeting online platforms for child safety, addictive design features, and insufficient age-verification systems.

Executive Vice-President Virkkunen said the measures are intended to ensure platforms are held accountable when services expose minors to harmful or restricted content.

Actions have been taken against multiple major platforms, including TikTok, Facebook, Instagram, Snapchat, and Shein, over concerns related to design practices such as infinite scroll, autoplay, and highly personalised recommendation systems.

Additional enforcement has also been launched against pornographic platforms for failing to implement adequate age verification tools.

Alongside enforcement, the EU has developed a digital age verification app designed to give users control over personal data through privacy-preserving technology based on zero-knowledge proofs.

The system is already technically ready and is being tested across several member states, either as a standalone tool or integrated into national digital wallets.

The Commission is also preparing an EU-wide coordination mechanism to standardise accreditation of national solutions and avoid fragmentation across member states. The initiative aims to establish a unified age-verification framework that upholds privacy standards and supports wider adoption across digital services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter was sent to Sundar Pichai and Neal Mohan, along with a petition.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos following shows like Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material, and misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!