Search Live in Google expands to over 200 countries

Google has expanded its Search Live feature globally, making it available in more than 200 countries and territories where AI Mode is supported. The tool enables users to interact with Search through real-time voice and camera-based conversations.

The upgrade is powered by Gemini 3.1 Flash Live, a new audio and voice model designed to deliver more natural and intuitive interactions. The model supports multiple languages, enabling users to communicate with Search in their preferred language across regions.

Search Live is designed for situations where typing is inconvenient, allowing users to ask questions aloud and receive audio responses within the Google app. Follow-up queries can be made instantly, with results supplemented by relevant web links.

Camera integration through Google Lens adds visual context, enabling Search to interpret real-world objects and provide step-by-step guidance or suggestions. The rollout is part of Google’s broader effort to make search more interactive, accessible, and useful in everyday tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini introduces tools to import AI chat history

Google has introduced new tools that allow users to transfer their memories, preferences, and chat history from other AI platforms directly into Gemini. The update aims to ease switching and deliver a more personalised experience from the start.

A new memory import feature lets users copy key details from another AI app and upload them to Gemini. Once transferred, the system recognises personal context, enabling more accurate responses without having to start from scratch.

In addition, users can now upload full chat histories via ZIP files, enabling access to past conversations within Gemini. The platform can integrate exchanges with services like Gmail, Photos and Search, with permission, to deliver more relevant responses.

Google confirmed that the rollout has begun and will appear in user settings, alongside a rebranding of ‘past chats’ to ‘memory’. The update reflects a broader push towards more adaptive and context-aware AI assistants.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Open letter targets Meta ad practices

A coalition of civil society and industry groups has urged the European Commission to enforce the Digital Markets Act more rigorously, warning that major tech firms continue to exploit compliance gaps. The appeal centres on concerns over data use and online advertising practices.

Organisations including noyb, Check My Ads, and the Irish Council for Civil Liberties argue that current models fail to offer users genuine choice. Critics say consent mechanisms tied to payment or tracking undermine the intent of the EU digital rules.

The letter against Meta calls for clearer standards, including equal options for personalised and non-personalised advertising, as well as stricter limits on design practices that influence user decisions. Campaigners also want stronger coordination between regulators to ensure consistent enforcement.

The push reflects wider frustration among European organisations, with several recent letters demanding faster action against dominant platforms. Observers warn that delayed enforcement risks weakening the credibility of the EU digital regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK tightens sanctions on crypto-linked scam networks

The UK has stepped up its crackdown by sanctioning a crypto marketplace tied to major scam centres in Southeast Asia. Measures aim to disrupt the sale of stolen personal data and limit the financial infrastructure enabling online fraud targeting British victims.

Authorities also targeted operators behind ‘#8 Park’, Cambodia’s largest scam compound, believed to house up to 20,000 trafficked workers. Many individuals forced to run scams were lured with false job offers before being coerced into fraudulent activity under severe threats.

Sanctions extend to key entities and individuals connected to the wider network, including those facilitating crypto laundering and cross-border financial flows. Earlier UK action froze over £1 billion in assets and helped shut down platforms used for laundering illicit funds.

Officials said the measures will isolate these operations from the crypto ecosystem and freeze UK-based assets. The measures come ahead of an international summit in June aimed at strengthening global coordination against illicit finance and digital fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO and Tecnológico de Monterrey partner on AI in education initiative

UNESCO and Tecnológico de Monterrey have signed an agreement to collaborate on advancing the use of AI in education, as digital transformation reshapes learning systems and workforce skills across Latin America and the Caribbean.

The agreement establishes a framework for joint work on generating evidence, developing standards and formulating public policy recommendations on AI in education, and supports the launch of a Regional Observatory on Artificial Intelligence in Education.

A financial contribution of $90,000 will support the Observatory’s implementation, following months of technical coordination and institutional validation between the two organisations.

After the signing, technical teams reviewed the operational plan for the first year, including methodological frameworks on teachers’ digital competencies and AI ethics, as well as pilot projects in Chile, El Salvador and Mexico.

According to Esther Kuisch Laroche, the initiative aims to ensure AI contributes to more inclusive, ethical and relevant education systems, while moving from principles to practical solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

EU court challenges French police data practices

The Court of Justice of the European Union has ruled that aspects of France’s biometric data collection system breach the EU law. Judges found that taking fingerprints and photographs of suspects under broad conditions fails to meet strict proportionality standards.

The case examined rules allowing police to collect and store data in the French Traitement des antécédents judiciaires and the Fichier automatisé des empreintes digitales. The court said collection cannot be routine and must meet a threshold of absolute necessity.

Judges also criticised the lack of clear justification for data collection, stating that individuals should receive explanations to exercise their legal rights. Existing rules were found to lack safeguards to ensure the limited and proportionate use of sensitive biometric information in France.

The ruling requires national courts to reassess the framework and could lead to changes in policing practices. It also raises broader questions about large-scale data retention and the balance between security and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New Mexico wins major case against Meta

A jury has found Meta Platforms liable for misleading consumers and endangering children in a landmark case brought by the New Mexico Department of Justice. The verdict marks the first successful trial by a US state against a major tech firm over child safety concerns.

Jurors awarded civil penalties totalling 375 million dollars after finding violations of consumer protection law. The case focused on claims that platform design choices exposed young users to harmful and exploitative content.

Evidence presented in court included internal company documents and testimony suggesting awareness of risks to children. Allegations centred on failures to prevent exploitation, as well as features linked to addictive behaviour and exposure to harmful material.

Further proceedings in the US are scheduled, with authorities seeking additional penalties and mandated changes to platform safety measures. Proposed actions include stronger age verification and improved protections for minors online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images including children and young-looking persons face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions and may be reassigned depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot