Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.
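Meta has not described how the Insights tab works under the hood, but the behaviour it reports (a rolling seven-day window of interactions reduced to broad topic labels, with the prompts themselves withheld from parents) can be sketched in a few lines. Everything below is an illustrative assumption rather than Meta's actual pipeline; in particular, the keyword classifier stands in for whatever model Meta really uses:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only: Meta has not disclosed its implementation.
# Categories mirror those named in the announcement; the keyword
# lists are invented stand-ins for a real classifier.
CATEGORIES = {
    "education": {"homework", "exam", "essay", "study"},
    "health and well-being": {"sleep", "exercise", "stress", "diet"},
    "lifestyle": {"fashion", "recipe", "hobby"},
    "travel": {"flight", "hotel", "itinerary"},
    "entertainment": {"movie", "game", "music", "show"},
}

@dataclass
class Interaction:
    timestamp: datetime
    prompt: str  # never surfaced to parents; used only for classification

def classify(prompt: str) -> str:
    """Map a prompt to a broad topic label via naive keyword overlap."""
    words = set(prompt.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:
            return category
    return "other"

def weekly_topic_summary(log: list[Interaction]) -> Counter:
    """Aggregate the last seven days of interactions into topic counts,
    discarding the prompts themselves so only themes are reported."""
    cutoff = datetime.now() - timedelta(days=7)
    return Counter(classify(i.prompt) for i in log if i.timestamp >= cutoff)
```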

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Center, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online safety agreement signed by eSafety and OAIC in Australia

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a memorandum of understanding to strengthen cooperation on issues where online safety and privacy intersect.

The agreement formalises communication pathways between the two regulators and builds on existing collaboration. It covers matters including age-assurance requirements under Australia’s online industry codes and standards, as well as compliance by age-restricted platforms with Social Media Minimum Age obligations.

eSafety Commissioner Julie Inman Grant stated: ‘Both regulators have always recognised that combatting certain harms requires privacy and safety to go hand in hand. For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognise important rights, including the right to privacy.’

She added: ‘Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.’

Inman Grant also linked the agreement to emerging risks associated with new technologies and wider regulatory requirements around age assurance. She continued: ‘It comes at an important time, when the proliferation of new technologies like artificial intelligence is amplifying risks and we are increasingly requiring industry to deploy age-assurance technologies that meet their regulatory obligations and respect privacy in the Australian context.’

Australian Information Commissioner Elizabeth Tydd said the memorandum would support the OAIC’s work in monitoring and responding to emerging online privacy risks and help both agencies deliver their statutory functions under the Online Safety Act.

Tydd added: ‘With this memorandum, we’re not only formalising cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.’

Why does it matter?

A growing number of online safety measures now depend on systems that also raise privacy questions, especially age-assurance tools and other platform controls involving personal data. The agreement gives both regulators a clearer basis for coordinating oversight as Australia expands enforcement around child safety, platform obligations, and emerging technologies such as AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF data shows 63% of global child abuse content hosted in the EU

New data from the Internet Watch Foundation (IWF) points to a stark imbalance in global online child protection, with EU member states hosting the majority of confirmed child sexual abuse material URLs identified by the organisation. In 2025, IWF analysts actioned 310,437 URLs, 63% of which were traced to hosting services in EU member states.

A small cluster of countries, including Bulgaria and the Netherlands, accounted for a large share of that hosting concentration, highlighting structural vulnerabilities in hosting infrastructure and uneven enforcement across jurisdictions. The IWF notes that such concentrations often reflect a combination of high-volume sites, migration between hosting locations, and inconsistent takedown speeds.

These findings come shortly after the EU failed to preserve legal continuity for the temporary framework that had allowed companies to carry out certain voluntary detection measures while negotiations on a permanent child sexual abuse law continued. That lapse has intensified concerns about a widening gap between the scale of online abuse and the legal tools available to detect and disrupt it.

The IWF argues that fragmented regulation and uneven infrastructure responses make it easier for criminal content to persist online. Where abuse material remains concentrated on a few high-volume sites in jurisdictions with slower or less consistent takedown practices, it stays accessible for longer and is more likely to be copied, redistributed, or reposted elsewhere.

Takedown performance, by contrast, varies sharply across jurisdictions. The UK accounted for just 951 actioned URLs in 2025, or 0.30% of the total, a figure the IWF links to a much stronger domestic removal framework and closer operational cooperation.
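As a quick consistency check on the reported shares, a back-of-envelope calculation using only the figures above:

```python
total_urls = 310_437   # URLs actioned by IWF analysts in 2025
eu_share = 0.63        # proportion traced to EU-hosted services
uk_urls = 951          # URLs traced to UK hosting

print(round(total_urls * eu_share))   # ~195,575 URLs hosted in the EU
print(f"{uk_urls / total_urls:.3%}")  # 0.306%, reported as 0.30%
```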

The broader message of the data is that child sexual abuse material cannot be tackled effectively through fragmented national responses alone. The IWF is using the figures to press for a more coherent international framework for detection, reporting, and removal, warning that without aligned rules and stronger accountability, systemic weaknesses in digital governance will continue to leave serious gaps in child protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety Commissioner of Australia issues notices to Roblox, Minecraft, Fortnite and Steam

Australia’s eSafety Commissioner has issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam over concerns that online games are being used by individuals seeking to groom children and by extremist groups to spread violent propaganda and radicalise young people.

The notices require the platforms to explain how they identify, prevent and respond to harms including grooming, cyberbullying, online hate, sexual extortion and violent extremism. They also ask how systems, staffing and safety-by-design measures align with the Australian Government’s Basic Online Safety Expectations.

eSafety Commissioner Julie Inman Grant said online games and gaming-adjacent services can serve as first points of contact between children and offenders in cases involving serious online harm. She said: ‘What we often see after these offenders make contact with children in online game environments, they then move children to private messaging services.’

Inman Grant also said: ‘Predatory adults know this and target children through grooming or embedding terrorist and violent extremist narratives in gameplay, increasing the risks of contact offending, radicalisation and other off-platform harms.’

eSafety said it publishes reports based on transparency notices to provide the public, including parents, with more information about safety risks and existing mitigations, while also increasing pressure on technology companies to adopt Safety by Design. Online game platforms must also comply with Australia’s Online Safety Codes and Standards, and a breach of a direction to comply with a code or standard can attract penalties of up to A$49.5 million per breach.

Compliance with a transparency notice is mandatory. If companies fail to respond, eSafety has enforcement options, including financial penalties of up to A$825,000 a day.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK children’s bill advances with new online safety powers

The UK’s Children’s Wellbeing and Schools Bill has moved forward with a substantial set of online safety amendments, showing how child protection policy is increasingly being folded into wider legislation beyond the Online Safety Act itself. The current printed version of the bill, published as it continues through consideration of amendments between the Commons and Lords, includes new powers that could allow ministers to require providers of specified internet services to prevent or restrict children’s access to certain services, features, or functionalities where there is a risk of harm.

At the centre of the package is a proposed new section 214A to be inserted into the Online Safety Act 2023. Under that provision, the Secretary of State would be able to make regulations requiring providers of specified internet services to block or limit access for children of a specified age. The text makes clear that those powers could apply not only to entire services but also to specific features or functions within them.

That matters because the bill goes well beyond a general statement of principle. The amendments envisage regulations that could address issues such as the amount of time children spend on services, the times of day they can access them, contact from strangers, live audio or video communications, and the ability of unknown users to identify a child’s actual or approximate location. In other words, the government is seeking flexible powers to target specific design features and risks rather than relying only on broad platform-wide restrictions.
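No such regulations exist yet, so any concrete encoding is speculative. Still, a rough sketch can illustrate what feature-level rather than platform-wide controls might look like if expressed as a per-service policy; every name and value below is a hypothetical illustration mirroring the risk areas listed in the amendments, not anything drawn from the bill's text:

```python
from dataclasses import dataclass

# Hypothetical sketch only: the bill grants powers to make regulations,
# and no actual schema has been specified. All fields are invented.
@dataclass
class ChildAccessPolicy:
    max_age: int                           # applies to users under this age
    daily_time_limit_minutes: int | None   # time spent on the service
    allowed_hours: tuple[int, int] | None  # permitted access window (24h clock)
    stranger_contact_allowed: bool         # contact from unknown users
    live_audio_video_allowed: bool         # live communication features
    location_visible_to_unknown_users: bool  # actual/approximate location

# Example: a restrictive illustrative profile for under-16s.
under_16 = ChildAccessPolicy(
    max_age=16,
    daily_time_limit_minutes=90,
    allowed_hours=(7, 21),
    stranger_contact_allowed=False,
    live_audio_video_allowed=False,
    location_visible_to_unknown_users=False,
)
```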

The bill would also bring Ofcom into the process. As drafted, the regulator is expected to carry out research or provide advice at the Secretary of State’s request to support the making of regulations under the new power, and to publish that advice afterwards. A separate clause would require the Secretary of State, within six months of the Act being passed, to lay before Parliament a progress statement on the first regulations and a timetable for bringing them forward, unless those regulations have already been made.

Another part of the amendment package would give ministers the power to alter the age at which a child can consent to the processing of personal data in relation to information society services, within a range of 13 to 16. The text also allows for regulations on age verification for that consent, including provisions on compliance, monitoring, and enforcement. That means the bill is not only about access and harmful features, but also about the data governance rules that shape children’s use of digital services.

The bill also shows that Parliament has not fully settled the question of how far to go. The latest printed text includes Lords’ amendments to Commons Amendment 38J, which would require the Secretary of State to make regulations imposing highly effective age-assurance and anti-circumvention measures for under-16s on specified regulated user-to-user services. Those Lords’ changes sit within the continuing exchange between the two Houses, rather than representing a final agreed position. The bill remains in the ‘consideration of amendments’ stage and has not yet received Royal Assent.

Why does it matter?

The broader significance of the bill is that the UK is moving towards a more interventionist model of child online safety, one that reaches beyond content moderation into product design, age assurance, feature controls, and the governance of children’s data. But the legislative picture is still in flux. What is emerging is not yet a final settlement, but a live parliamentary struggle over how prescriptive ministers should be, how much discretion they should have, and how strongly the law should push platforms to redesign services for children.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom steps up child safety enforcement with Telegram and chat site investigations

The UK’s online safety regime has entered a more confrontational phase, with Ofcom opening new investigations into Telegram and two chat platforms over suspected failures to protect children from serious harm. The move signals a shift from broad compliance warnings to more direct enforcement against services deemed to pose acute risks under the Online Safety Act.

Ofcom said it is investigating Telegram to determine whether the platform is doing enough to prevent child sexual abuse material from being shared. Separate probes have also been opened into Teen Chat and Chat Avenue, where the regulator says there are concerns that chat functions may be facilitating grooming and other harms to children. According to Ofcom, the providers have not demonstrated sufficient safeguards for UK users despite earlier engagement.

The cases are part of a wider enforcement drive rather than isolated actions. Ofcom has already been pressing file-sharing and file-storage services over child sexual abuse risks, and says some platforms have since introduced automated detection tools, blocked access for UK users, or otherwise changed their systems in response to regulatory pressure. In other cases, investigations have been closed after providers took corrective steps.

That broader context matters. Since the first online safety duties became enforceable, Ofcom has been moving from rule-setting into operational enforcement, testing whether platforms are actually putting in place the systems and processes needed to reduce illegal harms.

In the child safety area, that increasingly means proactive risk management, technical detection measures, and design choices that make it harder for offenders to share abusive material or contact children in the first place.

Ofcom has also made clear that services available in the UK cannot treat these duties as optional. Under the Online Safety Act, companies can face significant financial penalties for failing to comply, and the regulator can ask courts to impose business disruption measures or restrict access where necessary. That gives the current investigations weight beyond the individual platforms involved.

The bigger significance of the latest action is that platform accountability is being judged less on stated policies and more on demonstrable safeguards. The Telegram case in particular shows that even large, globally used platforms are now exposed to direct scrutiny if UK regulators believe child safety risks are not being properly addressed.

Taken together, the investigations suggest that Ofcom is trying to establish a more interventionist model of online safety enforcement, one in which companies are expected to anticipate and reduce harm rather than respond only after it has spread.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission allocates €63.2 million to support AI innovation in health and online safety

The European Commission has announced €63.2 million in funding to support AI innovation, focusing on health, online safety and broader technological development. The initiative aims to accelerate the deployment of AI solutions across key sectors.

According to the Commission, the funding will support projects that improve healthcare systems and strengthen protections in digital environments. It is part of ongoing efforts to expand AI capabilities and adoption.

The programme also seeks to encourage collaboration between research institutions, businesses and public bodies. This approach is intended to foster innovation while addressing societal challenges linked to AI use.

The Commission states that the investment will contribute to strengthening Europe’s digital capacity and advancing AI development across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom updates enforcement programme on CSAM risks in file-sharing services

The Office of Communications (Ofcom) has updated its enforcement programme launched on 17 March 2025 to assess measures taken by file-sharing and file-storage services to prevent risks to UK users from image-based child sexual abuse material (CSAM). The update follows its previous report in February 2026.

Ofcom said it identified concerns after two services under investigation redirected users to another file-sharing platform, Pixeldrain. Following further assessment, the regulator found that the provider had not initially taken appropriate measures to manage risks linked to CSAM storage and dissemination.

After engaging with Ofcom, the provider of Pixeldrain updated its Illegal Content Risk Assessment and reassessed the level of risk on its service. The company also implemented perceptual hash matching to reduce the risk of known CSAM being shared. Ofcom stated that, given these improvements and the provider’s constructive engagement, no further action will be taken at this stage.
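The update does not specify which hashing scheme Pixeldrain adopted, but perceptual hash matching in general works by reducing an image to a compact fingerprint that survives resizing and re-encoding, then comparing that fingerprint against hashes of known material. A minimal average-hash sketch, using the Pillow imaging library, with the distance threshold as an illustrative placeholder:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint: downscale,
    convert to grayscale, and set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches_known_hash(candidate: int, known_hashes: set[int],
                       max_distance: int = 5) -> bool:
    """Flag a match if the Hamming distance to any known hash is small.
    The threshold here is a tunable placeholder, not an industry standard."""
    return any(bin(candidate ^ known).count("1") <= max_distance
               for known in known_hashes)
```

Because near-duplicates land within a small Hamming distance of the original, this kind of matching catches re-encoded or resized copies that an exact cryptographic hash would miss.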

Ofcom’s investigation into Im.ge remains ongoing, focusing on compliance with risk assessment and user protection duties under the Online Safety Act 2023. Separately, the regulator has closed its investigation into Yolobit after the service became unavailable to UK users, reducing exposure risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Utropolis partnership strengthens AI-driven child online safety

The Internet Watch Foundation (IWF) has announced a new partnership with Utropolis, marking a step forward in efforts to strengthen online child protection. The collaboration brings together established detection tools and emerging AI-driven safeguarding technologies.

Utropolis specialises in cloud-based filtering systems designed to identify risks in real time, particularly in school environments.

By integrating IWF datasets, including verified lists of harmful content, the platform aims to improve prevention and detection capabilities while helping educators maintain safer digital spaces.
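Neither party has described the integration in technical detail, but filtering against an IWF-style URL list typically amounts to normalising each requested URL and checking it against a set of hashed entries, so the list itself is never stored in plaintext. A rough illustration, with the hashing scheme and normalisation rules assumed rather than documented:

```python
import hashlib
from urllib.parse import urlsplit

def normalise(url: str) -> str:
    """Reduce a URL to a canonical host+path form before hashing,
    so trivial variations do not bypass the check."""
    parts = urlsplit(url.lower().strip())
    return f"{parts.netloc}{parts.path.rstrip('/')}"

def hash_entry(url: str) -> str:
    return hashlib.sha256(normalise(url).encode()).hexdigest()

def is_blocked(url: str, blocklist: set[str]) -> bool:
    """Check a requested URL against a set of hashed blocklist entries."""
    return hash_entry(url) in blocklist

# Usage: in practice the blocklist would be built from the vendor-supplied
# list; this entry is a placeholder.
blocklist = {hash_entry("http://example.com/banned-page")}
print(is_blocked("HTTP://EXAMPLE.COM/banned-page/", blocklist))  # True
```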

The initiative reflects a broader trend towards combining AI with established regulatory and safeguarding frameworks. As harmful material continues to spread online, organisations are increasingly focusing on scalable, automated solutions that can adapt to evolving threats.

The partnership also aligns with UK online safety standards in education, reinforcing compliance requirements and strengthening institutional responses.

As digital environments continue to expand, collaborations of this kind highlight the growing role of AI in supporting child protection strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!