Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spain expands digital oversight of online hate

Spain has launched a digital system designed to track hate speech and disinformation across social media platforms. Prime Minister Pedro Sánchez presented the tool in Madrid as part of a wider effort to improve oversight of online platforms.

The platform, known as HODIO, will analyse public posts and measure the spread and reach of hateful content. Authorities say the project will publish regular reports examining how platforms respond to harmful material.

The monitoring initiative is managed by Spain’s Observatory on Racism and Xenophobia. Officials say the data will help citizens understand the scale of online hate and assess how social networks address abusive content.

The initiative forms part of a broader Spanish digital policy agenda that also includes measures to protect minors online. Policymakers have discussed proposals such as restrictions on social media use by children under 16.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lawmakers urged to rethink rules on private messaging

Policymakers are being urged to rethink the regulation of private messaging platforms as disinformation campaigns increasingly spread through closed digital networks. Researchers say messaging apps now play a major role in political communication and crisis information flows.

Evidence from elections and conflicts highlights the challenge. During Brazil’s 2024 municipal elections, manipulated political content spread widely through WhatsApp groups, while authorities in Ukraine reported Telegram being used for both emergency communication and disinformation.

Experts argue that current laws often fail to address messaging platforms, such as Telegram, because regulation typically targets public social media spaces. Analysts say modern messaging services combine private chats with broadcast channels and other features that allow content to reach large audiences.

Policy specialists propose regulating specific platform features rather than entire services. Governments and technology companies are also encouraged to protect encryption while expanding transparency tools, media literacy programmes and user safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

A proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK rejects social media ban for under-16s

A proposed social media ban for under-16s has been rejected by UK MPs, with 307 voting against and 173 in favour. The measure, introduced as an amendment to the Children’s Wellbeing and Schools Bill, aimed to protect children from online harms.

Instead, the UK government secured support for giving ministers flexible powers, enforceable after a consultation on online safety concludes. The technology secretary, Liz Kendall, could limit social media access and VPN use, turn off addictive features, and raise the UK’s digital consent age.

Supporters of a full ban argued parents face an ‘impossible position’ managing online risks for their children. Campaigners, bereaved parents, and organisations such as Mumsnet and the National Education Union called for immediate action.

Critics, including the NSPCC, warned that a blanket ban could push teenagers towards unregulated online spaces.

The government consultation will examine minimum age requirements and the removal of features such as autoplay. MPs emphasised that any policy must balance safety with preparing children for responsible online engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York moves to ban chatbots from giving legal and medical advice

New York lawmakers are considering legislation that would ban AI chatbots from providing legal or medical advice. The bill aims to stop automated systems from impersonating licensed professionals such as doctors and lawyers.

The proposal would also require chatbot operators to clearly inform users that they are interacting with an AI system. Notices must be prominent, written in the same language as the chatbot, and use a readable font.

A key feature of the bill is a private right of action, which would allow users to file civil lawsuits against chatbot owners who violate the law and recover damages and legal fees. Experts say this enforcement tool strengthens the rules and deters abuse.

Supporters of the legislation argue it protects New Yorkers’ safety, particularly minors. Other bills in the same package would regulate online platforms like Roblox and set standards for generative AI, synthetic content, and the handling of biometric data.

The bill’s author, state Senator Kristen Gonzalez, said AI innovation should not come at the expense of public safety. She pointed to recent cases where AI chatbots were linked to harmful outcomes for minors, highlighting the need for transparency and accountability.

If passed, the law would take effect 90 days after the governor signs it. Lawmakers hope it will balance innovation with user protection, ensuring AI tools are used responsibly and safely across the state.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart Classrooms initiative transforms learning in 10 Thai pilot schools

Ten pilot schools in Buriram and Si Sa Ket provinces have launched Smart Classrooms under the UNESCO–Huawei TEOSA initiative, supporting Thailand’s drive to expand digital education.

Led by UNESCO Bangkok in partnership with Thailand’s Ministry of Education and Huawei Technologies Co., Ltd, the Smart Classrooms initiative aims to strengthen digital learning environments, equip teachers with digital and AI competencies, and support policy development for AI in education. The programme also supports Thailand’s ‘Transforming Education in the Digital Era’ policy and the National AI Strategy and Action Plan (2022–2027).

Each province has one designated ‘mother school’ that serves as a regional digital hub, supporting four surrounding ‘child schools’ by sharing resources, training, and expertise. The ten pilot schools in total have received high-speed internet, interactive digital displays, and collaborative learning platforms that support real-time content sharing and blended learning. Forty-five teachers from the pilot schools also participated in hands-on demonstrations of Smart Classrooms systems on 4–5 March.

‘This new technology will help translate theory into practice, allowing students to experiment, test strategies, and see results immediately,’ said Pathanapong Momprakhon, Principal of Paisan Pittayakom School. UNESCO Bangkok’s Deputy Director and Chief of Education, Marina Patrier, highlighted the importance of combining infrastructure with teacher capacity-building.

‘At UNESCO, we are committed to promoting the ethical and inclusive use of AI in ways that empower teachers and expand opportunities for every learner,’ Ms Patrier said at the launch. ‘While Smart Classrooms provide important tools, it is teachers’ creativity, professional judgement and leadership that ultimately bring these innovations to life.’

Chitralada Chanyaem of the Thai National Commission for UNESCO highlighted the importance of collaboration in advancing digital education.

‘The UNESCO–Huawei Funds-in-Trust Project on Technology-Enabled Open Schools for All stands as a powerful example of collaboration dedicated to transforming education into a system that is open, inclusive, flexible, and resilient in the face of a rapidly changing world,’ she said. ‘As the future of education cannot be confined within classroom walls, it must bridge sectors and communities, working collaboratively to create equitable and sustainable opportunities for all.’

Teachers observed Huawei technical staff and master teachers demonstrate how digital tools and AI-supported applications can be used in everyday lessons. Ms Piyaporn Kidsirianan, Public Relations Manager at Huawei Technologies (Thailand) Co., Ltd, said the initiative aims to reduce digital inequality.

‘The Open Schools for All initiative represents a commitment to using technology as a bridge to deliver quality education to remote and underserved communities.’ The TEOSA Smart Classrooms initiative combines policy support, digital infrastructure upgrades, and teacher training to help translate Thailand’s digital education ambitions into practical impact at the school level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools linked to rise in abuse disclosures

Support organisations in the UK report that some abuse survivors are turning to AI tools such as ChatGPT before contacting helplines. Charities say individuals increasingly use AI to explore their experiences and seek guidance before approaching professional support services.

The National Association of People Abused in Childhood said callers have recently reported being referred to its helpline after conversations with ChatGPT. Staff say AI is being used as an informal step in processing trauma.

Law enforcement and support groups have also recorded a rise in disclosures involving ritualistic sexual abuse. Authorities say only 14 criminal cases since 1982 have formally recognised such practices.

Police and support organisations are responding by improving training and launching specialist working groups. Officials aim to strengthen the identification and investigation of complex cases of abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!