Online Safety Act brings progress, but UK children still face harm online

A new report from Internet Matters suggests the UK’s Online Safety Act has introduced more visible safety measures for children, but has not yet delivered the step change needed to make their online lives meaningfully safer. Drawing on surveys and focus groups with children and parents, the report presents an early view of how the law is affecting families in practice.

The findings point to some clear signs of progress. Parents and children report seeing more safety features, including improved reporting tools, content filters, restrictions on certain functions, and stronger parental controls. Many children also say the content they encounter online is becoming more age-appropriate.

At the same time, the report argues that important weaknesses remain. Children continue to encounter harmful content at high rates, while age verification is widely seen as easy to bypass. Internet Matters also says that some of the issues families care most about, including excessive screen time and the risks linked to AI-generated content, are still not adequately addressed under the current framework.

The report concludes that parents are still carrying too much of the burden of keeping children safe online. It calls for stronger enforcement, more effective age assurance, tighter limits on harmful features, and a broader safety-by-design approach to digital services used by children in the UK.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China closes consultation on digital virtual human services

The Cyberspace Administration of China has closed its public consultation on the draft Administrative Measures for Digital Virtual Human Information Services, which set out proposed rules for digital virtual human services provided to the public in China.

The notice states that the consultation opened in April 2026 and that comments were accepted until 6 May 2026. According to the draft, the measures would apply to internet information services delivered to the public within China through digital virtual humans.

The draft says providers and users must process data for lawful purposes and within a lawful scope, use data from legal sources, and fulfil their data security responsibilities. It also requires technical and other necessary measures to protect data storage and transmission and to prevent leaks or improper use.

The text further requires digital virtual human service providers and users to establish security risk monitoring, warning, emergency response, anti-addiction mechanisms, and stronger content-direction management, while also retaining logs. Providers whose services have public opinion attributes or social mobilisation capacity would also be required to complete algorithm filing procedures and security assessments in line with existing national rules.

Beyond cybersecurity and data protection, the draft includes provisions on personal information, personality rights, intellectual property, content controls, labelling requirements, and protections for minors. It defines digital virtual humans as virtual figures in the non-physical world that simulate human appearance and may have voice, behaviour, interaction abilities, or personality traits, using graphics, digital image processing, or AI technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Meta age assurance system aims to prevent underage access

Meta has expanded its use of AI to strengthen age assurance and improve enforcement of underage account policies across its platforms. The systems are designed to detect users under 13 for removal and to place suspected teens into protected Teen Account settings on Instagram and Facebook in regions including the EU, Brazil, and the US.

The technology analyses a range of signals, including profile information, user activity, and other contextual indicators, to estimate age more accurately. Automated systems are also being used to support faster and more consistent review of reports related to underage use.

Visual analysis has also become part of Meta’s broader detection approach, with the company saying its systems look for general age-related indicators rather than attempting to identify specific individuals. Reporting tools have been simplified, and AI-assisted moderation is being used to improve the speed and reliability of enforcement decisions.

Alongside these enforcement measures, Meta is increasing parental engagement through notifications and guidance to encourage more accurate age reporting and safer online behaviour. The wider effort reflects growing pressure on platforms to move beyond self-declared age checks and to build stronger systems to protect younger users.

Why does it matter?

The significance of the move lies in the fact that age assurance is becoming a core platform governance issue rather than a secondary moderation tool. Meta is trying to show that large social platforms can use AI not only to recommend or personalise content, but also to enforce minimum age rules at scale. That matters because regulators are increasingly questioning whether self-declared age data is enough to protect minors online. It also points to a broader shift in which platforms are expected to combine safety obligations, automated detection, and parental tools into a more active system of child protection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

MEPs consider stronger EU measures on cyberbullying and online harassment

The European Parliament has voted on a resolution calling for targeted criminal provisions and stronger platform responsibility to address cyberbullying and online harassment, following a debate with the Commission.

The debate focused on whether EU law should go further in addressing harmful online behaviour, including through targeted criminal provisions and stronger obligations for platforms. Parliament’s plenary briefing said MEPs were expected to press the Commission on what more can be done beyond existing Digital Services Act protections.

Draft resolution texts tabled in Parliament say MEPs want the Commission to consider making cyberbullying a criminal offence under EU law and to address legal gaps in the current framework.

The vote followed the Commission’s recent action plan against cyberbullying, which Parliament said is built around a support app, coordination of national approaches, and the promotion of safer digital practices.

The debate also comes after MEPs heard testimony earlier this year from Jackie Fox, whose daughter Coco’s case led to Ireland’s Harassment, Harmful Communications and Related Offences Act 2020, known as Coco’s Law. Parliament’s briefing notes that while EU initiatives address parts of the issue, there is still no EU-wide anti-online bullying law or commonly agreed definition at the European or international level.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission urges fast rollout of EU age verification app

The European Commission has adopted a recommendation urging member states to accelerate the rollout of the EU age verification app and make it available by the end of the year. The recommendation says the app can be deployed either as a standalone solution or integrated into a European Digital Identity Wallet.

According to the Commission, the app is intended to let users prove they meet a required age threshold without disclosing their exact age, identity, or other personal details. The Commission has also published a blueprint for the system, leaving it to member states to customise and produce the app for their citizens.

The recommendation sets out actions for member states to support rapid availability and interoperability, including implementation plans and coordination to ensure the swift rollout of the solution across the EU.

The measure forms part of the EU’s wider approach to protecting minors online under the Digital Services Act, which requires online platforms to ensure a high level of privacy, safety, and security for minors.

Executive Vice-President Henna Virkkunen said: ‘Effective and privacy-preserving age verification is the next piece of the puzzle that we are getting closer to completing, as we work towards an online space where our children are safe and empowered to use positively and responsibly without restricting the rights of adults.’

Why does it matter?

The move takes age verification in the EU from a general policy objective to a more concrete implementation phase. Rather than leaving platforms and member states to develop separate solutions, the Commission is trying to steer the bloc towards a common privacy-preserving model that can work across borders.

That matters for both child protection and regulatory coherence, because if countries adopt incompatible systems or move at very different speeds, enforcement under the Digital Services Act could become uneven in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK House of Commons backs amendments in lieu on Children’s Wellbeing Bill with online safety provisions

The UK House of Commons has insisted on its disagreement with the Lords’ amendments to the Children’s Wellbeing and Schools Bill and backed the government’s amendments in lieu. In the debate, ministers said the bill will place a statutory duty on the Secretary of State to act following the consultation, changing the wording from ‘may’ to ‘must’.

Education minister Olivia Bailey told MPs that the government is consulting on the mechanism, but that ‘under any outcome’ it will impose ‘some form of age or functionality restrictions for children under 16’. She added that curfews would be considered in addition to, not instead of, those restrictions.

Bailey said the Children’s Wellbeing and Schools Bill now requires a statutory progress report three months after Royal Assent, with regulations to be laid within 12 months after that. She said the government intends to move faster and aims to lay the regulations by the end of the year, while describing any further six-month extension as a backstop for ‘exceptional and unforeseen circumstances’ only.

Opposition MPs and Liberal Democrats argued that the timetable remained too slow. Conservative frontbencher Laura Trott said the revised proposal was ‘a huge step forward’ but warned that ‘every month of delay just leaves children more exposed to the harms of social media online’.

Liberal Democrat spokesperson Munira Wilson said the overall timeline could still amount to 21 months before action. The House later voted by 272 to 64 to insist on its disagreement with the Lords’ amendments and to approve the government’s amendments in lieu. Lords amendment 105C was also agreed to, allowing the Children’s Wellbeing and Schools Bill to move forward with the revised online safety provisions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children under that age use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove children under 13 from the platforms. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europol’s IOCTA 2026 shows growing cyber threats across Europe’s digital landscape

Europol has released the 2026 Internet Organised Crime Threat Assessment, outlining the growing complexity of cybercrime across Europe. The report identifies encryption, proxies, and AI as key drivers behind the increasing scale and sophistication of digital threats.

According to Europol, criminal networks are adapting rapidly, using fragmented online environments and encrypted communication channels to evade detection. The report highlights cybercrime enablers, online fraud schemes, cyber-attacks, and online child sexual exploitation as central areas of concern in the EU threat landscape.

AI is playing a growing role in cyber-enabled crime by making fraud, deception, and other forms of online abuse more scalable and more convincing. Europol presents this as part of a wider shift in which digital threats are becoming more adaptive, more accessible, and harder to disrupt through traditional law enforcement methods alone.

The report also points to continued risks in cyber-attacks and online child sexual exploitation, underlining how technological change is affecting both financially motivated crime and harms involving vulnerable users. In that sense, IOCTA 2026 presents Europe’s cyber challenge not as a series of isolated incidents, but as a broader digital threat environment shaped by enabling technologies and rapidly evolving criminal tactics.

These developments reinforce the need for stronger operational cooperation, more advanced investigative capabilities, and continued adaptation across Europe’s law enforcement and regulatory systems. Europol’s overall message is that cybercrime is becoming more sophisticated, more industrialised, and more deeply embedded in the wider digital ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Immaterialism expand efforts to combat child abuse content online

Immaterialism has joined the Internet Watch Foundation to strengthen efforts against the spread of child sexual abuse material online.

The partnership introduces IWF tools designed to accelerate the identification of harmful domains and enable faster intervention when abusive activity is detected. By adopting Registrar Alerts and related datasets, the registrar aims to improve its ability to respond to criminal content across the domains under its management.

The collaboration reflects a broader shift towards more proactive action at the domain infrastructure layer. By integrating intelligence tools into operational processes, the initiative aims to disrupt both the deliberate distribution of abusive material and the continued availability of domains linked to it.

The IWF says the volume of detected child sexual abuse material continues to rise, reinforcing the need for coordinated responses between safety organisations and private-sector actors. In that sense, the partnership points to closer alignment between domain service providers and specialist online safety groups working to strengthen protections for children online.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would, from 2027, be required to block access for users under 15 using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!