Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further identifies that several platforms in Australia did not refer users to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
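As an illustration only, a unified policy layer of this kind can be sketched as a declarative allow-list that the runtime checks before each agent action. The names below (`Policy`, `AgentAction`, `is_allowed`) are hypothetical and do not correspond to OpenShell's actual interface:

```python
# Hypothetical sketch of infrastructure-level policy enforcement for a
# sandboxed agent. All names are illustrative, not NVIDIA's real API.
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_paths: set = field(default_factory=set)  # files the agent may read
    allowed_tools: set = field(default_factory=set)  # tools the agent may call

@dataclass
class AgentAction:
    kind: str    # e.g. "read_file" or "call_tool"
    target: str  # a file path or tool name

def is_allowed(policy: Policy, action: AgentAction) -> bool:
    """Enforced outside the agent, so the agent cannot rewrite the policy."""
    if action.kind == "read_file":
        return action.target in policy.allowed_paths
    if action.kind == "call_tool":
        return action.target in policy.allowed_tools
    return False  # deny any action the policy does not recognise

policy = Policy(allowed_paths={"/data/reports"}, allowed_tools={"search"})
print(is_allowed(policy, AgentAction("call_tool", "search")))       # True
print(is_allowed(policy, AgentAction("read_file", "/etc/passwd")))  # False
```

The key design point this sketch captures is the separation NVIDIA describes: the decision logic lives in the enforcement layer, not inside the agent, so a misbehaving agent can only request actions, never grant them.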

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.

AI improves stroke care and reduces patient risks in major study

An AI-based clinical decision support system, which analyses medical scans and provides treatment recommendations, was associated with better outcomes than standard approaches to stroke care. Researchers said the tool offers a more efficient and scalable method for improving treatment, particularly in resource-constrained healthcare systems.

The findings are based on more than 21,000 patients treated across 77 hospitals in China. Patients supported by the AI-driven clinical decision support system experienced fewer new vascular events, including stroke recurrence, heart attack, or related death, over follow-up periods of up to 12 months.

At three months, new vascular events occurred in 2.9% of patients using the system, compared with 3.9% in those receiving usual care, representing a 26% reduction. The benefit persisted at 12 months, with rates of 4% in the intervention group versus 5.5% in the control group.
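The reported percentages are consistent with a standard relative risk reduction calculation; as a quick check (not code from the study itself):

```python
# Verify the relative risk reduction implied by the reported event rates.
def rrr(intervention_rate, control_rate):
    """Relative risk reduction: (control - intervention) / control."""
    return (control_rate - intervention_rate) / control_rate

print(round(rrr(0.029, 0.039) * 100))  # three-month rates -> 26 (%)
print(round(rrr(0.040, 0.055) * 100))  # twelve-month rates -> 27 (%)
```

The three-month figures reproduce the 26% reduction quoted above; the twelve-month rates imply a similar reduction of roughly 27%.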

Patients receiving AI-supported treatment also showed improved performance on key stroke care quality measures, although no significant differences were observed in disability, mortality, or bleeding outcomes between the groups.

Researchers noted limitations, including the study design, which randomised hospitals rather than individual patients, and potential differences in follow-up care. However, they highlighted the system’s ease of integration into hospital workflows and its potential to strengthen stroke care delivery and long-term prevention strategies.

AI added to St Helens council strategic risk register

In the UK, the St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Social media ban in Ecuador targets youth crime recruitment

A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime.

Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content.

Unlike similar debates elsewhere, the justification for the social media ban is rooted less in mental health or privacy concerns and more in security. Ecuador has experienced a sharp deterioration in public safety, with rising homicide rates, expanding criminal networks, and increasing pressure on state institutions.

Recent findings from Ecuador’s Organised Crime Observatory indicate that around 27% of minors approached by criminal groups report initial contact through social media platforms. Surveys conducted by ChildFund Ecuador further suggest that vulnerable adolescents are increasingly exposed to recruitment tactics that combine economic incentives with normalised portrayals of violence.

In that context, the proposed social media ban is framed as a preventative measure against criminal recruitment rather than solely a child protection tool. The initiative forms part of a wider regulatory shift, including new cybersecurity legislation and draft laws targeting recruitment practices conducted through digital channels.

Europe boosts AI, talent and investment to compete with US and China

Efforts to strengthen technological competitiveness in Europe focus on advancing AI capabilities, developing new forms of talent and improving access to investment.

Discussions at the CTx Tech Experience in Seville highlighted a growing consensus that innovation must scale more effectively if the region is to compete globally.

Participants emphasised that Europe continues to face structural challenges, including fragmented markets, regulatory complexity and limited capital for high-growth companies.

These constraints have made it more difficult for startups to expand, prompting calls for stronger coordination between public institutions and private investors.

AI is increasingly viewed as the foundation of the transformation. Industry leaders pointed to the emergence of new business opportunities driven by AI, alongside the need to translate innovation into scalable commercial outcomes.

At the same time, labour market dynamics are shifting towards hybrid skillsets that combine technical expertise with business understanding and critical thinking.

In such a context, strengthening Europe’s innovation capacity is seen as essential to competing with global powers such as the US and China.

As technological competition intensifies, the ability to align talent, capital and policy frameworks will play a decisive role in shaping the region’s position within the global digital economy.

Social media linked to declining well-being among young people

The World Happiness Report 2026 has identified a growing decline in well-being among young people, with increased social media use emerging as a key contributing factor. These findings suggest that digital habits are increasingly shaping life satisfaction, particularly across Western societies.

The report notes that younger age groups now report significantly lower happiness levels compared to previous decades.

In regions such as North America and Western Europe, the decline coincides with a sharp rise in time spent on social media platforms. Researchers highlight that heavy usage is associated with measurable reductions in well-being, especially among younger users.

Alongside these trends, the report continues to rank Finland as the happiest country globally, reflecting broader stability in Nordic nations. However, such stability contrasts with emerging concerns about mental health and social outcomes in more industrialised regions, where digital environments are playing an increasingly influential role.

While the report identifies risks including cyberbullying, depression and online exploitation, it does not advocate for complete restrictions. Instead, it emphasises the need for carefully designed regulatory approaches that balance protection with the potential benefits of digital connectivity.

AI-generated songs used in $10 million streaming fraud

A large-scale fraud scheme using AI-generated music has exposed vulnerabilities in streaming platforms and royalty systems. Billions of fake streams were used to divert payments away from legitimate artists and rights holders.

The scheme ran from 2017 to 2024 and involved uploading hundreds of thousands of AI-generated tracks. Automated programs were then used to stream the songs at scale, inflating play counts and generating revenue.

The operation relied on thousands of bot accounts, bulk email registrations and cloud-based systems. Streaming activity was spread across many tracks to reduce detection and maintain consistent earnings over time.

Michael Smith, a 54-year-old from North Carolina, has pleaded guilty to conspiracy to commit wire fraud in federal court. Prosecutors say he obtained more than $10 million and agreed to forfeit over $8 million in proceeds.

Authorities say the case highlights how AI and automation can be used to manipulate digital platforms. The court will determine the final sentence as concerns grow over similar schemes.

Inspired Education introduces AI-driven learning for primary schools

Inspired Education has unveiled a new AI-enabled primary teaching model designed to modernise traditional learning systems. The programme aims to better align education with how children learn in a digital and fast-changing environment.

The model combines core academic subjects in the morning with applied learning in the afternoon. Students focus on life skills such as problem-solving, entrepreneurship and communication alongside standard curriculum content.

Learning is structured around mastery rather than age, allowing children to progress at their own pace. AI-powered tools are used to personalise lessons and support faster and more adaptive learning outcomes.

The first early-access programme will launch in Central London in January 2027. Further rollouts are planned across cities, including Lisbon, Milan, Madrid, Mexico City, São Paulo and Auckland.

Developers say the approach responds to growing demand from parents for AI-integrated education. The initiative reflects broader efforts to prepare students with digital, practical and future-ready skills.
