Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders towards professional mental health support, rather than serving results that could deepen a harmful spiral.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than through any deliberate choice.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pope urges guidance for youth in an AI-shaped world

Pope Leo XIV urged global institutions to guide younger generations as they navigate the expanding influence of AI. He warned that rapid access to information cannot replace the deeper search for meaning and purpose.

Previously, the Pope had warned students not to rely solely on AI for educational support. He encouraged educators and leaders to help young people develop discernment and confidence when encountering digital systems.

Additionally, he called for coordinated action across politics, business, academia and faith communities to steer technological progress toward the common good. He argued that AI development should not be treated as an inevitable pathway shaped by narrow interests.

He noted that AI reshapes human relationships and cognition, raising concerns about its effects on freedom, creativity and contemplation. He insisted that safeguarding human dignity is essential to managing AI’s wide-ranging consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Porn site fined £1m for ignoring UK child safety age checks

A UK pornographic website has been fined £1m by Ofcom for failing to comply with mandatory age verification under the Online Safety Act. The company, AVS Group Ltd, did not respond to repeated contact from the regulator, prompting an additional £50,000 penalty.

The Act requires websites hosting adult content to implement ‘highly effective age assurance’ to prevent children from accessing explicit material. Ofcom has ordered the company to comply within 72 hours or face further daily fines.

Other tech platforms are also under scrutiny, with one unnamed major social media company undergoing compliance checks. Regulators warn that non-compliance will result in formal action, highlighting the growing enforcement of child safety online.

Critics argue the law must be tougher to ensure real protection, particularly for minors and women online. While age checks have reduced UK traffic to some sites, loopholes like VPNs remain a concern, and regulators are pushing for stricter adherence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to users under 16, while allowing those users to download their data. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13- to 15-year-olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global push against online scam networks

US tech giant Meta has outlined an expanded strategy to limit online fraud, combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve, and that a faster, coordinated response is needed in place of fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that used images of public figures, driving a higher volume of removals during testing before such material could circulate widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations that targeted scam centres in Myanmar and illegal online gambling operations connected to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

The broader aim is to strengthen trust across Meta’s services and to prevent criminal activity from undermining user confidence and advertiser investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube criticises Australia’s new youth social-media restrictions

Australia’s forthcoming ban on social media accounts for users under 16 has prompted intense criticism from YouTube, which argues that the new law will undermine existing child safety measures.

From 10 December, young users will be logged out of their accounts and barred from posting or uploading content, though they will still be able to watch videos without signing in.

YouTube said the policy will remove key parental-control tools, such as content filters, channel blocking and well-being reminders, which only function for logged-in accounts.

Rachel Lord, Google and YouTube public-policy lead for Australia, described the measure as ‘rushed regulation’ and warned the changes could make children ‘less safe’ by stripping away long-established protections.

Communications Minister Anika Wells rejected this criticism as ‘outright weird’, arguing that if YouTube believes its own platform is unsafe for young users, it must address that problem itself.

The debate comes as Australia’s eSafety Commissioner investigates other youth-focused apps such as Lemon8 and Yope, which have seen a surge in downloads ahead of the ban.

Regulators reversed YouTube’s earlier exemption in July after identifying it as the platform where 10- to 15-year-olds most frequently encountered harmful content.

Under the new Social Media Minimum Age Act, companies must deactivate underage accounts, prevent new sign-ups and halt any technical workarounds or face penalties of up to A$49.5m.

Officials say the measure responds to concerns about the impact of algorithms, notifications and constant connectivity on Gen Alpha. Wells said the law aims to reduce the ‘dopamine drip’ that keeps young users hooked to their feeds, calling it a necessary step to shield children from relentless online pressures.

YouTube has reportedly considered challenging its inclusion in the ban, but has not confirmed whether it will take legal action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states strike deal on chat-scanning law

EU member states have finally reached a unified stance on a long-debated law aimed at tackling online child sexual abuse, ending years of stalemate driven by fierce privacy concerns. Governments agreed to drop the most controversial element of the original proposal, mandatory scanning of private messages, after repeated blockages and public opposition from privacy advocates who warned it would amount to mass surveillance.

The move comes as reports of child abuse material continue to surge, with global hotlines processing nearly 2.5 million suspected images last year.

The compromise, pushed forward under Denmark’s Council presidency, maintains the option for tech companies to scan content voluntarily while affirming that end-to-end encryption must not be compromised. Supporters argue that the agreement closes a regulatory gap that would otherwise open when temporary EU rules allowing voluntary detection expire in 2026.

However, children’s rights groups argue that the Council has not gone far enough, saying that simply preserving the current system will not adequately address the scale of the problem.

Privacy campaigners remain alarmed. Critics fear that framing voluntary scanning as a risk-reduction measure could encourage platforms to expand surveillance of user communications to shield themselves from liability.

Former MEP Patrick Breyer, a prominent voice in the campaign against so-called ‘chat control,’ warned that the compromise could still lead to widespread monitoring and possibly age-verification requirements that limit access to digital services.

With the Council and European Parliament now holding formal positions, negotiations will finally begin on the regulation’s final shape. But with political divisions still deep and the clock ticking toward the 2026 deadline, it may be months before the EU determines how far it is willing to go in regulating the detection of child sexual abuse material, and at what cost to users’ privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!