OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images featuring children or young-looking people face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and NVIDIA unveil AI tools for nuclear energy permitting and operations

Microsoft has announced an AI collaboration with NVIDIA to support nuclear energy projects across permitting, design, construction, and operations. In a post published on 24 March, the company said the initiative aims to provide end-to-end tools for the nuclear sector, focusing on streamlining permitting, accelerating design, and optimising operations.

Microsoft frames the effort within a broader energy challenge, arguing that rising power demand and long project timelines are creating pressure to accelerate the delivery of firm, carbon-free power. The company says customised engineering, fragmented data, and manual regulatory review slow nuclear projects. It presents AI as a way to make project development more repeatable, traceable, secure, and predictable.

The post says the collaboration spans the full lifecycle of a nuclear plant. Microsoft describes a model in which digital twins, high-fidelity simulations, and AI-assisted workflows support design and engineering, licensing and permitting, construction and delivery, and operations and maintenance.

According to the company, engineers would be able to reuse design patterns, model the impact of changes before construction begins, and link project decisions to supporting evidence and applicable rules. Microsoft also says generative AI can assist with drafting and gap analysis in permit documentation, while predictive modelling and operational digital twins can support anomaly detection and maintenance planning.

Microsoft says traceability and auditability are central to the approach. The company lists four intended qualities of the system: traceable records linking engineering decisions to evidence and regulations, audit-ready documentation, secure use within a governed environment, and predictable outcomes through simulations intended to identify delays before they occur in the real world.

Several case examples are included in the post. Microsoft says Aalo Atomics reduced permitting timelines by 92% using its Generative AI for Permitting solution, with estimated annual savings of $80 million.

Aalo Atomics Chief Technology Officer Yasir Arafat is quoted as saying: ‘Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.’

Microsoft also says Southern Nuclear has deployed Copilot agents across engineering and licensing workstreams to improve consistency, reuse knowledge faster, and support decision-making. Idaho National Laboratory is described as an early adopter in the US federal context, with Microsoft saying the lab is using AI capabilities to automate the assembly of engineering and safety analysis reports and to create standard methodologies for regulators to adopt the tools safely.

The post also expands beyond those three examples. Microsoft says Everstar, described as an NVIDIA Inception startup, is bringing domain-specific AI for nuclear to Azure to support project workflows and governed data pipelines.

Everstar Chief Executive Officer Kevin Kong is quoted as saying: ‘The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.’

Microsoft also says Atomic Canyon’s Neutron platform is available on the Microsoft Marketplace for nuclear developers via established procurement channels.

At the technical level, Microsoft says the collaboration brings together NVIDIA Omniverse, NVIDIA Earth-2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer. The company presents the stack as a digital ecosystem for nuclear energy on Azure.

The official post is a corporate announcement rather than an independent assessment of the approach’s effectiveness. The published note outlines the company’s intended use cases, named partners, and customer examples, but it does not provide a third-party evaluation of the broader claims regarding delivery speed, regulatory confidence, or sector-wide impact.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Zimbabwe advances AI national strategy with UNESCO support

Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step towards shaping its digital future instead of relying solely on traditional development pathways.

Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.

The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.

Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities instead of advancing without oversight.

The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.

Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.

The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.

Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.

Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.

International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.

By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google sets 2029 deadline for post-quantum cryptography migration

Google is leading a transition to post-quantum cryptography by 2029, aiming to secure digital systems against future quantum computing threats instead of relying indefinitely on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through attacks that store encrypted data today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US National Institute of Standards and Technology. At the same time, support has already been introduced in Google Chrome and cloud platforms.
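NIST's post-quantum signature standards (ML-DSA and SLH-DSA) rest on lattice and hash-function assumptions rather than the factoring and discrete-log problems that quantum algorithms threaten. The hash-based idea can be illustrated with a minimal Lamport one-time signature; this is a teaching sketch, not any scheme Google or NIST ships, and a real deployment would use a standardised library.

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # One pair of random 32-byte secrets per bit of the message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(sha256(a), sha256(b)) for a, b in sk]  # public key = hashes of secrets
    return sk, pk

def digest_bits(message: bytes):
    d = sha256(message)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal exactly one secret from each pair, selected by the digest bit.
    return [pair[bit] for pair, bit in zip(sk, digest_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    return all(sha256(s) == pair[bit]
               for s, pair, bit in zip(signature, pk, digest_bits(message)))
```

Security here depends only on the hash function's preimage resistance, which is why hash-based designs survive quantum attack models; the trade-off is that each key pair can safely sign only one message.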

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scotland publishes AI guidance for schools

The Scottish government has published national guidance on the use of AI in schools, aiming to support the safe and ethical adoption of AI in classrooms. The document provides advice for teachers and pupils as AI use continues to expand across society.

The guidance outlines potential benefits of AI alongside risks that need to be considered, and includes examples of appropriate classroom use. It was developed with the EIS teaching union, local government and Education Scotland.

Education Secretary Jenny Gilruth said AI should support creativity, critical thinking and personalised learning while protecting pupils’ rights and privacy. She added that technology must not replace teachers or human relationships in education.

EIS general secretary Andrea Bradley said AI should remain a tool for teachers and not replace professional judgement. The non-statutory guidance allows schools and local authorities flexibility to develop their own policies as AI continues to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Science Secretary Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens semiconductor strategy through Chips Act dialogue

Executive Vice-President Henna Virkkunen will host a high-level dialogue in Brussels to assess the implementation of the European Chips Act Regulation and gather industry feedback ahead of its planned revision.

Stakeholders from across the semiconductor ecosystem are expected to exchange views and present recommendations to shape future policy direction.

The initiative forms part of the broader strategy led by the European Commission to reinforce technological sovereignty and competitiveness, rather than relying heavily on external suppliers.

The Chips Act seeks to strengthen Europe’s semiconductor ecosystem, improve supply chain resilience, and reduce strategic dependencies in critical technologies.

The dialogue follows a public consultation and call for evidence conducted in autumn 2025, with findings set to inform the upcoming legislative revision.

Industry representatives will provide direct input through a report outlining challenges, opportunities, and proposed policy adjustments, contributing to a more targeted and effective framework for semiconductor development.

Looking ahead, the revision of the Chips Act will be integrated into a wider Technological Sovereignty package designed to boost the capacity of Europe’s digital industries.

By combining stakeholder engagement with policy reform, the European Commission aims to ensure that semiconductor innovation and production can expand across the EU rather than remain constrained by reliance on external suppliers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ICO and Ofcom issue guidance on age assurance and online safety

The Information Commissioner’s Office and Ofcom have issued a joint statement outlining how age assurance measures should align with online safety and data protection requirements.

The guidance focuses on protecting children from harm online instead of treating safety and privacy as separate obligations, reflecting closer coordination between the two regulators.

The statement is directed at digital services likely to be accessed by children and falling within the scope of the Online Safety Act and UK data protection laws.

It provides a practical overview of existing policies, helping organisations understand how to meet both regulatory frameworks while implementing age assurance technologies.

Rather than introducing new rules, the guidance clarifies how current requirements interact in practice. It highlights the importance of designing systems that both verify users’ ages and safeguard personal data, ensuring that safety measures do not undermine privacy protections.
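One way to reconcile the two obligations is data minimisation: an age-assurance provider checks the user's age once and hands the service only a signed yes/no claim, so the service never holds the birthdate itself. The sketch below is hypothetical; the provider, key handling, and token format are illustrative and not anything the ICO or Ofcom prescribe.

```python
import base64
import hashlib
import hmac
import json

# Illustrative shared key between the age-assurance provider and the service;
# a real system would use asymmetric signatures and proper key management.
PROVIDER_KEY = b"demo-key-not-for-production"

def b64e(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).decode()

def b64d(text: str) -> bytes:
    return base64.urlsafe_b64decode(text.encode())

def issue_token(over_18: bool) -> str:
    # The provider sees the user's evidence; the token carries only a boolean.
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(PROVIDER_KEY, claim, hashlib.sha256).digest()
    return b64e(claim) + "." + b64e(tag)

def service_accepts(token: str) -> bool:
    # The service verifies authenticity without ever learning a date of birth.
    claim_b64, tag_b64 = token.split(".")
    claim = b64d(claim_b64)
    expected = hmac.new(PROVIDER_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64d(tag_b64)):
        return False
    return bool(json.loads(claim).get("over_18", False))
```

The design choice worth noting is that the only personal data crossing the boundary is a single boolean, which is what allows the safety check to coexist with data-protection obligations.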

The approach encourages organisations to integrate compliance into service design instead of addressing obligations separately.

By aligning regulatory expectations, the ICO and Ofcom aim to support organisations in delivering safer online environments for children while maintaining strong data protection standards.

The joint effort signals a broader move towards coordinated digital regulation, where safety and privacy are addressed together to reflect the complexities of modern online services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia launches AI platform Rakan Tani to support farmers and stabilise incomes

The National AI Office (NAIO), through its NAIO Lab, is advancing Malaysia’s AI-driven development by building an ecosystem that supports innovation, collaboration, and startups. NAIO Lab aims to position the country as a hub for AI innovation where developers can experiment and create practical solutions.

Rakan Tani, the first project under NAIO Lab, is an AI-powered digital platform designed to transform the agricultural sector. It connects farmers with buyers early in the crop cycle and uses AI-driven order matching to help secure competitive prices and improve financial predictability.

The platform integrates multiple AI-driven features, including pre-harvest commerce, subsidy access via national ID systems, agriculture financing using pre-harvest orders as collateral, real-time cash payouts through digital banking, and logistics coordination with distributors and providers. It is delivered via WhatsApp and supports both Malay and English, with a pilot planned in Terengganu in May 2025.
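The pre-harvest commerce described above can be pictured as a simple two-sided market: farmers post asks with a minimum acceptable price, buyers post bids with a maximum, and the platform pairs them. The greedy matcher below is a hypothetical illustration only; NAIO has not published Rakan Tani's actual matching logic, and the names and pricing rule are invented.

```python
from dataclasses import dataclass

@dataclass
class Ask:            # a farmer offering a future harvest
    farmer: str
    crop: str
    qty_kg: int
    min_price: float  # minimum acceptable price per kg

@dataclass
class Bid:            # a buyer committing to purchase pre-harvest
    buyer: str
    crop: str
    qty_kg: int
    max_price: float  # maximum price per kg the buyer will pay

def match(asks, bids):
    """Greedy matching: highest-paying bids meet lowest-ask farmers per crop."""
    deals = []
    asks = sorted(asks, key=lambda a: a.min_price)
    for bid in sorted(bids, key=lambda b: -b.max_price):
        for ask in asks:
            if (ask.crop == bid.crop and ask.qty_kg > 0 and bid.qty_kg > 0
                    and bid.max_price >= ask.min_price):
                qty = min(ask.qty_kg, bid.qty_kg)
                price = (ask.min_price + bid.max_price) / 2  # midpoint pricing
                deals.append((ask.farmer, bid.buyer, bid.crop, qty, price))
                ask.qty_kg -= qty
                bid.qty_kg -= qty
    return deals
```

For example, a farmer asking at least RM2.00/kg for rice and a buyer willing to pay up to RM3.00/kg would be matched at the midpoint of RM2.50/kg, locking in a price before harvest, which is the financial-predictability benefit the platform describes.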

NAIO Lab also provides AI startups with resources, mentorship, and funding, enabling collaboration between experts, researchers, and entrepreneurs. The initiative is supported by partnerships across government, academia, and industry, including the Ministry of Digital, Ministry of Agriculture and Food Security, GAIV, UPM, and Segi Fresh, with the goal of accelerating AI adoption and supporting sustainable economic growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum readiness gains momentum according to OECD report

The OECD (Organisation for Economic Co-operation and Development) highlights how businesses are preparing for quantum computing, recognising it as a transformative technology instead of relying solely on conventional computing methods.

Quantum readiness is framed as a long-term capability-building effort in which firms gradually develop skills, infrastructure, and partnerships to explore commercial applications while navigating uncertainty.

Drawing on research, surveys, and interviews with public and private organisations across 10 countries, the OECD identifies both the practical steps companies take to build readiness and the barriers that slow adoption.

Early efforts focus on low-cost awareness and exploration, including attending workshops, training sessions, and industry events, allowing firms to familiarise themselves with emerging opportunities instead of waiting for fully mature systems.

Despite growing interest, companies face significant challenges. Technological immaturity complicates pilots and feasibility studies, while many firms lack a clear understanding of potential business applications.

Access to quantum resources, funding for research and development, and staff training are expensive, particularly for small- and medium-sized enterprises. Furthermore, there is a shortage of talent with both quantum computing expertise and domain-specific knowledge.

As a result, readiness tends to be concentrated among large, R&D-intensive firms, while smaller companies often recognise quantum computing’s potential but delay action.

Such an uneven adoption risks creating a divide in the digital economy, with early adopters moving ahead and other firms falling behind instead of engaging proactively.

To address these challenges, the OECD notes that public and private support mechanisms are critical. Networking and collaboration platforms connect firms with researchers, technology providers, and industry peers, fostering knowledge exchange and collective experimentation.

Business advisory and technology extension services help companies assess capabilities, test solutions, and access specialised facilities.

Grants for research and development lower the costs of experimentation and encourage collaboration, while stakeholder consultations ensure that support measures remain aligned with business needs.

Many companies are also establishing internal quantum labs and innovation hubs to trial applications and build expertise in a controlled environment, combining research with practical exploration instead of relying solely on external guidance.

Looking ahead, the OECD recommends expanding education and skills pipelines, strengthening industry-academic partnerships, and designing policies that support broader participation in quantum adoption.

Hybrid approaches that integrate quantum computing with AI and high-performance computing may offer practical commercial entry points for early applications.

Policymakers are encouraged to balance near-term exploratory pilots with forward-looking support for software development, interoperability, and workforce growth, enabling firms to move from experimentation to deployment effectively.

By following OECD guidance, companies can enhance innovation, improve competitiveness, and ensure that readiness efforts span sectors and geographies rather than remain limited to a few early adopters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!