Uzbekistan sets principles for responsible AI

A new ethical framework for the development and use of AI technologies has been adopted by Uzbekistan.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country, and it emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global push against online scam networks

US tech giant Meta has outlined an expanded strategy to limit online fraud by combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve and that a faster, coordinated response is essential instead of fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that used images of public figures, increasing the volume of removals during testing before such content could circulate widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations that targeted criminal centres in Myanmar and illegal online gambling operations connected to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

The broader aim is to strengthen trust across Meta’s services rather than allow criminal activity to undermine user confidence and advertiser investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube criticises Australia’s new youth social-media restrictions

Australia’s forthcoming ban on social media accounts for users under 16 has prompted intense criticism from YouTube, which argues that the new law will undermine existing child safety measures.

From 10 December, young users will be logged out of their accounts and barred from posting or uploading content, though they will still be able to watch videos without signing in.

YouTube said the policy will remove key parental-control tools, such as content filters, channel blocking and well-being reminders, which only function for logged-in accounts.

Rachel Lord, Google and YouTube public-policy lead for Australia, described the measure as ‘rushed regulation’ and warned the changes could make children ‘less safe’ by stripping away long-established protections.

Communications Minister Anika Wells rejected this criticism as ‘outright weird’, arguing that if YouTube believes its own platform is unsafe for young users, it must address that problem itself.

The debate comes as Australia’s eSafety Commissioner investigates other youth-focused apps such as Lemon8 and Yope, which have seen a surge in downloads ahead of the ban.

Regulators reversed YouTube’s earlier exemption in July after identifying it as the platform where 10- to 15-year-olds most frequently encountered harmful content.

Under the new Social Media Minimum Age Act, companies must deactivate underage accounts, prevent new sign-ups and halt any technical workarounds or face penalties of up to A$49.5m.

Officials say the measure responds to concerns about the impact of algorithms, notifications and constant connectivity on Gen Alpha. Wells said the law aims to reduce the ‘dopamine drip’ that keeps young users hooked to their feeds, calling it a necessary step to shield children from relentless online pressures.

YouTube has reportedly considered challenging its inclusion in the ban, but has not confirmed whether it will take legal action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states strike deal on chat-scanning law

EU member states have finally reached a unified stance on a long-debated law aimed at tackling online child sexual abuse, ending years of stalemate driven by fierce privacy concerns. Governments agreed to drop the most controversial element of the original proposal, mandatory scanning of private messages, after repeated blockages and public opposition from privacy advocates who warned it would amount to mass surveillance.

The move comes as reports of child abuse material continue to surge, with global hotlines processing nearly 2.5 million suspected images last year.

The compromise, pushed forward under Denmark’s Council presidency, maintains the option for tech companies to scan content voluntarily while affirming that end-to-end encryption must not be compromised. Supporters argue that the agreement closes a regulatory gap that will occur when temporary EU rules allowing voluntary detection expire in 2026.

However, children’s rights groups argue that the Council has not gone far enough, saying that simply preserving the current system will not adequately address the scale of the problem.

Privacy campaigners remain alarmed. Critics fear that framing voluntary scanning as a risk-reduction measure could encourage platforms to expand surveillance of user communications to shield themselves from liability.

Former MEP Patrick Breyer, a prominent voice in the campaign against so-called ‘chat control,’ warned that the compromise could still lead to widespread monitoring and possibly age-verification requirements that limit access to digital services.

With the Council and European Parliament now holding formal positions, negotiations will finally begin on the regulation’s final shape. But with political divisions still deep and the clock ticking toward the 2026 deadline, it may be months before the EU determines how far it is willing to go in regulating the detection of child sexual abuse material, and at what cost to users’ privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia stands firm on under-16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells acknowledged rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The work, by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

Researchers also argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves forward on new online child protection rules

EU member states reached a common position on a regulation intended to reduce online child sexual abuse.

The proposal introduces obligations for digital service providers to prevent the spread of harmful content and to respond when national authorities require the removal, blocking or delisting of material.

The framework requires providers to assess how their services could be misused and to adopt measures that lower the risk.

Authorities will classify services into three categories based on objective criteria, allowing targeted obligations for higher-risk environments. Victims will be able to request assistance when seeking the removal or disabling of material that concerns them.

The regulation establishes an EU Centre on Child Sexual Abuse, which will support national authorities, process reports from companies and maintain a database of indicators. The Centre will also work with Europol to ensure that relevant information reaches law enforcement bodies in member states.

The Council position makes permanent the voluntary activities already carried out by companies, including scanning and reporting, which were previously supported by a temporary exemption.

Formal negotiations with the European Parliament can now begin with the aim of adopting the final regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces new battles over digital rights

EU policy debates intensified after Denmark abandoned plans for mandatory mass scanning in the draft Child Sexual Abuse Regulation. Advocates welcomed the shift yet warned that new age checks and potential app bans still threaten privacy.

France and the UK advanced consultations on good practice guidelines for cyber intrusion firms, seeking more explicit rules for industry responsibility. Civil society groups also marked two years of the Digital Services Act by reflecting on enforcement experience and future challenges.

Campaigners highlighted rising concerns about tech-facilitated gender violence during the 16 Days initiative. The Centre for Democracy and Technology launched fresh resources stressing encryption protection, effective remedies and more decisive action against gendered misinformation.

CDT Europe also criticised the Commission’s digital omnibus package for weakening safeguards under existing laws, including the AI Act. The group urged firm enforcement of current frameworks while exploring better redress options for AI-related harms in EU legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!