IWF data shows 63% of global child abuse content hosted in the EU

New data from the Internet Watch Foundation (IWF) points to a stark imbalance in global online child protection, with EU member states hosting the majority of confirmed child sexual abuse material URLs identified by the organisation. In 2025, IWF analysts actioned 310,437 URLs, 63% of which were traced to hosting services in EU member states.

A small cluster of countries, including Bulgaria and the Netherlands, accounted for a large share of that hosting concentration, highlighting structural vulnerabilities in hosting infrastructure and uneven enforcement across jurisdictions. The IWF notes that such concentrations often reflect a combination of high-volume sites, migration between hosting locations, and inconsistent takedown speeds.

These findings come shortly after the EU failed to preserve legal continuity for the temporary framework that had allowed companies to carry out certain voluntary detection measures while negotiations on a permanent child sexual abuse law continued. That lapse has intensified concerns about a widening gap between the scale of online abuse and the legal tools available to detect and disrupt it.

The IWF argues that fragmented regulation and uneven infrastructure responses make it easier for criminal content to persist online. Where abuse material remains concentrated on a few high-volume sites in jurisdictions with slower or less consistent takedown practices, it stays accessible for longer and is more likely to be copied, redistributed, or reposted elsewhere.

Takedown performance, by contrast, varies sharply across jurisdictions. The UK accounted for just 951 actioned URLs in 2025, or 0.30% of the total, a figure the IWF links to a much stronger domestic removal framework and closer operational cooperation.

The broader message of the data is that child sexual abuse material cannot be tackled effectively through fragmented national responses alone. The IWF is using the figures to press for a more coherent international framework for detection, reporting, and removal, warning that without aligned rules and stronger accountability, systemic weaknesses in digital governance will continue to leave serious gaps in child protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety Commissioner of Australia issues notices to Roblox, Minecraft, Fortnite and Steam

Australia’s eSafety Commissioner has issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam over concerns that online games are being used by individuals seeking to groom children and by extremist groups to spread violent propaganda and radicalise young people.

The notices require the platforms to explain how they identify, prevent and respond to harms including grooming, cyberbullying, online hate, sexual extortion and violent extremism. They also ask how systems, staffing and safety-by-design measures align with the Australian Government’s Basic Online Safety Expectations.

eSafety Commissioner Julie Inman Grant said online games and gaming-adjacent services can serve as first points of contact between children and offenders in cases involving serious online harm. She said: ‘What we often see [is that] after these offenders make contact with children in online game environments, they then move children to private messaging services.’

Inman Grant also said: ‘Predatory adults know this and target children through grooming or embedding terrorist and violent extremist narratives in gameplay, increasing the risks of contact offending, radicalisation and other off-platform harms.’

eSafety said it publishes reports based on transparency notices to provide the public, including parents, with more information about safety risks and existing mitigations, while also increasing pressure on technology companies to adopt Safety by Design. Online game platforms must also comply with Australia’s Online Safety Codes and Standards, and a breach of a direction to comply with a code or standard can attract penalties of up to A$49.5 million per breach.

Compliance with a transparency notice is mandatory. If companies fail to respond, eSafety has enforcement options, including financial penalties of up to A$825,000 a day.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom steps up child safety enforcement with Telegram and chat site investigations

The UK’s online safety regime has entered a more confrontational phase, with Ofcom opening new investigations into Telegram and two chat platforms over suspected failures to protect children from serious harm. The move signals a shift from broad compliance warnings to more direct enforcement against services deemed to pose acute risks under the Online Safety Act.

Ofcom said it is investigating Telegram to determine whether the platform is doing enough to prevent child sexual abuse material from being shared. Separate probes have also been opened into Teen Chat and Chat Avenue, where the regulator says there are concerns that chat functions may be facilitating grooming and other harms to children. According to Ofcom, the providers have not demonstrated sufficient safeguards for UK users despite earlier engagement.

The cases are part of a wider enforcement drive rather than isolated actions. Ofcom has already been pressing file-sharing and file-storage services over child sexual abuse risks, and says some platforms have since introduced automated detection tools, blocked access for UK users, or otherwise changed their systems in response to regulatory pressure. In other cases, investigations have been closed after providers took corrective steps.

That broader context matters. Since the first online safety duties became enforceable, Ofcom has been moving from rule-setting into operational enforcement, testing whether platforms are actually putting in place the systems and processes needed to reduce illegal harms.

In the child safety area, that increasingly means proactive risk management, technical detection measures, and design choices that make it harder for offenders to share abusive material or contact children in the first place.

Ofcom has also made clear that services available in the UK cannot treat these duties as optional. Under the Online Safety Act, companies can face significant financial penalties for failing to comply, and the regulator can ask courts to impose business disruption measures or restrict access where necessary. That gives the current investigations weight beyond the individual platforms involved.

The bigger significance of the latest action is that platform accountability is being judged less on stated policies and more on demonstrable safeguards. The Telegram case in particular shows that even large, globally used platforms are now exposed to direct scrutiny if UK regulators believe child safety risks are not being properly addressed.

Taken together, the investigations suggest that Ofcom is trying to establish a more interventionist model of online safety enforcement, one in which companies are expected to anticipate and reduce harm rather than respond only after it has spread.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Philippines presses Meta for faster action on online disinformation

The Philippine government is intensifying pressure on Meta to act more quickly against harmful online disinformation, arguing that the company’s current enforcement approach is insufficient to contain rapidly spreading false content that can affect public order, economic confidence, and national security. The latest move comes in the form of a formal response from the Department of Information and Communications Technology, following an earlier joint request involving the Presidential Communications Office and the Department of Justice.

Officials acknowledged Meta’s willingness to engage and its existing moderation policies, but said broad descriptions of enforcement mechanisms fall short of what the situation requires. According to the DICT, the government is seeking clear commitments, faster intervention processes, and measurable outcomes rather than general assurances about existing platform rules.

The pressure campaign is tied to concerns that false and misleading online content can trigger real-world harm, especially during politically and economically sensitive periods. Government statements have linked the problem to panic-inducing disinformation that could affect fuel prices, economic stability, and public trust, and have warned that inadequate action from Meta could lead to legal and regulatory consequences.

The latest DICT response ties that position to the government’s wider ‘Kontra Fake News’ campaign, which it says is intended to protect access to accurate information while holding those who deliberately spread falsehoods accountable.

The dispute is also part of a broader institutional shift. The DICT, Presidential Communications Office, and Department of Justice have moved towards a more coordinated response to digital disinformation, including a memorandum of agreement aimed at a whole-of-government approach to false content and related threats such as deepfakes. That makes the Meta case more than a platform-specific complaint: it is becoming part of a wider governance and enforcement strategy.

In the meantime, Philippine officials have tried to draw a line between legitimate expression and harmful manipulation. The government says freedom of expression remains protected, but that protection does not extend to coordinated or deliberately harmful disinformation that can trigger panic or erode confidence in public institutions. That distinction is likely to become more important if talks with Meta fail and the government moves towards tougher intervention.

The broader significance of the case lies in what it says about platform governance. Rather than accepting general assurances about moderation systems, governments are increasingly demanding faster, more transparent, and more locally responsive enforcement from major technology companies. In the Philippine case, that pressure is now being expressed through a formal inter-agency effort that could test how far states are willing to go when platforms are seen as too slow to respond to politically and economically sensitive disinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.
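YouTube has not disclosed how its matching works. Likeness detection systems of this kind, however, typically compare embedding vectors derived from faces and flag uploads whose similarity to a protected person's reference embedding clears a threshold. The sketch below is a generic, hypothetical illustration of that idea, with toy vectors and an arbitrary threshold; nothing here reflects YouTube's actual pipeline.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def flag_likeness(reference, candidates, threshold=0.9):
    """Indices of candidate embeddings similar enough to the reference to flag."""
    return [i for i, c in enumerate(candidates)
            if cosine_similarity(reference, c) >= threshold]

reference = [0.9, 0.1, 0.2]        # embedding of the protected person's face (toy values)
candidates = [
    [0.88, 0.12, 0.21],            # near-duplicate: likely an AI replica -> flagged
    [0.10, 0.90, 0.30],            # unrelated face -> not flagged
]
print(flag_likeness(reference, candidates))  # -> [0]
```

In a real system the embeddings would come from a trained face-recognition model and the threshold would be tuned to balance false matches against missed ones; a flagged index would then feed into the removal request workflow the article describes.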

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea warns on AI fake news risks

The Korea Herald reports that South Korean Prime Minister Kim Min-seok has warned of the risks of AI-generated fake news ahead of an upcoming election. Authorities are urging greater vigilance as digital content becomes harder to verify.

According to the report, AI technologies are increasingly capable of producing realistic false information, including manipulated images and videos. This raises concerns about their potential impact on public opinion and trust.

The government has called for precautionary measures to limit the spread of misinformation and protect the integrity of democratic processes. This includes encouraging awareness and responsible use of AI tools.

The warning reflects broader concerns about the influence of AI-driven disinformation during election cycles in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU monitoring highlights platform performance under revised hate speech code

The European Commission has published the first monitoring results under the revised Code of Conduct on Countering Illegal Hate Speech Online+, providing insight into how major platforms handle reported content.

The assessment combines independent monitoring with self-reported data from participating companies.

Findings indicate that most platforms reviewed a majority of notifications within 24 hours, in line with their commitments.

However, a significant share of reported cases was either disputed or classified as erroneous, with inaccuracies partly attributed to monitoring bodies’ misuse of reporting channels.

The monitoring exercise functions as a structured stress test within the framework of the Digital Services Act (DSA), assessing whether platforms meet minimum response thresholds and apply appropriate measures when illegal hate speech is identified under national and EU law.

The publication of the results aims to strengthen transparency and accountability, while informing improvements ahead of the next monitoring cycle.

The Code of Conduct on Countering Illegal Hate Speech Online+ now operates as part of the EU’s co-regulatory approach to platform governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI copyright safeguards through GPAI taskforce discussions

The European Commission has convened the second meeting of the Signatory Taskforce under the General-Purpose AI Code of Practice (GPAI), focusing on copyright protection in AI systems.

The discussion brought together signatories to exchange early implementation practices and technical approaches.

Participants examined methods to reduce copyright risks in AI-generated outputs, highlighting measures applied across the model’s lifecycle, including data selection, training, and deployment.

Emphasis was placed on combining technical safeguards with organisational processes to improve transparency and effectiveness.

One approach presented involved training models on licensed content alongside attribution systems to identify similarities between generated outputs and source material. Such a method aims to support fair remuneration and strengthen accountability within AI development.

The meeting also addressed mechanisms for handling complaints from rights holders, with participants discussing procedures for accessible and timely responses.

The exchange forms part of ongoing EU efforts to refine governance standards for AI systems and copyright compliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece moves to restrict youth social media access with new digital age rules

Greece has announced new measures to protect minors online, introducing a national ‘digital age of majority’ that restricts access to social media for users under 15.

The policy forms part of a broader strategy addressing child safety and digital overuse, with implementation scheduled for January 2027.

The initiative places primary responsibility on platforms, requiring robust age-verification systems and periodic re-verification of existing accounts. Authorities will oversee compliance under the EU’s Digital Services Act framework, with penalties including fines and operational restrictions for violations.

The policy builds on earlier tools such as KidsWallet, an age-verification mechanism already deployed nationally.

Authorities in Greece argue that reliance on parental control alone is insufficient, citing increasing evidence linking excessive platform use to mental health risks, including anxiety, reduced sleep, and social isolation.

The proposal aligns with wider European discussions on youth protection, including efforts to establish a unified digital age threshold across member states.

Greece has also called for stronger EU-wide enforcement mechanisms, positioning the measure as part of a coordinated approach to safeguarding minors in digital environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

National Crime Agency to receive CSEA reports under UK Online Safety Act rules

UK regulations under the Online Safety Act 2023 are now in force, requiring certain regulated user-to-user services to register with the National Crime Agency and report detected but previously unreported child sexual exploitation and abuse content.

Under the Online Safety (CSEA Content Reporting by Regulated User-to-User Service Providers) Regulations 2026, providers subject to the reporting duty, and any third-party providers acting on their behalf, must register with the National Crime Agency through an online portal. They must also appoint an organisation administrator as a point of contact.

Reports submitted to the National Crime Agency must contain specified information, including details about the content, the time it was uploaded, relevant IP addresses, and user account data. The regulations also require providers to classify reports into three priority levels and submit them within the corresponding timeframes.

Record-keeping duties are also set out in the regulations. Providers must retain the report reference number for five years and keep the associated content and user data for one year from the reporting date.
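The reporting and retention duties above can be pictured as a simple data model. The sketch below is illustrative only: the field names, the `Priority` enum values, and the helper methods are our assumptions, not the NCA portal's actual schema; the three priority levels and the five-year and one-year retention periods come from the regulations, but the specific submission timeframes per level are not reproduced here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Priority(Enum):
    """The regulations require reports to be classified into three priority levels."""
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3

@dataclass
class CSEAReport:
    """Illustrative record of the information a report must contain."""
    reference_number: str        # retained for five years
    content_details: str         # details about the content
    uploaded_at: datetime        # time the content was uploaded
    ip_addresses: list           # relevant IP addresses
    user_account_data: dict      # user account data
    priority: Priority
    reported_at: datetime

    def reference_retained_until(self) -> datetime:
        # Providers must retain the report reference number for five years.
        return self.reported_at + timedelta(days=5 * 365)

    def content_retained_until(self) -> datetime:
        # Associated content and user data are kept for one year from reporting.
        return self.reported_at + timedelta(days=365)

report = CSEAReport(
    reference_number="NCA-0001",          # hypothetical reference format
    content_details="(redacted)",
    uploaded_at=datetime(2026, 3, 1, 12, 0),
    ip_addresses=["203.0.113.7"],
    user_account_data={"handle": "example"},
    priority=Priority.LEVEL_1,
    reported_at=datetime(2026, 3, 2, 9, 30),
)
print(report.content_retained_until().date())
```

A real provider would of course map these duties onto its own storage and deletion pipelines; the point of the sketch is simply that the two retention clocks both start from the reporting date, not the upload date.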

The rules form part of the reporting framework under the Online Safety Act 2023 for child sexual exploitation and abuse content on regulated user-to-user services in the UK. Non-compliance may result in a penalty of up to 10% of qualifying worldwide revenue or £18 million, whichever is greater.
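As a quick arithmetic check on that penalty cap (the function name is ours, purely illustrative), the "whichever is greater" rule means the £18 million floor binds only for smaller firms:

```python
def max_csea_penalty_gbp(qualifying_worldwide_revenue: float) -> float:
    """Greater of 10% of qualifying worldwide revenue or £18 million."""
    return max(0.10 * qualifying_worldwide_revenue, 18_000_000)

print(max_csea_penalty_gbp(500_000_000))   # 10% of £500m -> £50m
print(max_csea_penalty_gbp(100_000_000))   # £18m floor applies (10% would only be £10m)
```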

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!