Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.
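
To make the idea of provenance binding concrete, here is a minimal, hypothetical sketch of how a signed manifest can tie a generator label to specific content. Real C2PA manifests use X.509 certificate chains and a structured manifest store, not the shared-secret HMAC used here; the key and function names are invented for illustration only.

```python
import hashlib
import hmac

# Stand-in for a real signing credential (hypothetical; C2PA uses PKI).
SIGNING_KEY = b"provenance-demo-key"

def attach_provenance(video_bytes: bytes, generator: str) -> dict:
    """Bind a generator label to the content via a keyed digest."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    tag = hmac.new(SIGNING_KEY, (digest + generator).encode(),
                   hashlib.sha256).hexdigest()
    return {"sha256": digest, "generator": generator, "signature": tag}

def verify_provenance(video_bytes: bytes, manifest: dict) -> bool:
    """Check content integrity and the signature over it."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False  # content was altered after signing
    expected = hmac.new(SIGNING_KEY, (digest + manifest["generator"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

clip = b"\x00fake-video-payload"
manifest = attach_provenance(clip, "sora")
print(verify_provenance(clip, manifest))         # intact clip verifies
print(verify_provenance(clip + b"x", manifest))  # tampered clip fails
```

The point of the sketch is the pairing: the digest detects tampering, and the signature establishes who attached the label, which is the property watermarks and C2PA metadata aim to provide at scale.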

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.
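
The layered-filtering idea can be sketched as two independent checkpoints, one before and one after generation. This is a toy illustration only: the category names and keyword rules are invented stand-ins, not OpenAI's actual classifiers, which are learned models rather than keyword lists.

```python
# Hypothetical category -> trigger-term rules (illustrative only).
BLOCKED_TERMS = {
    "sexual_content": {"explicit"},
    "self_harm": {"self-harm"},
    "illegal_activity": {"counterfeit"},
}

def flag_categories(text: str) -> set:
    """Return the set of policy categories a text triggers."""
    lowered = text.lower()
    return {cat for cat, terms in BLOCKED_TERMS.items()
            if any(term in lowered for term in terms)}

def moderate(prompt: str, generate):
    # Layer 1: screen the prompt before any generation happens.
    if flag_categories(prompt):
        return None
    output = generate(prompt)
    # Layer 2: screen the generated output as well, since a benign
    # prompt can still yield a violating result.
    if flag_categories(output):
        return None
    return output

print(moderate("a cat on a skateboard", lambda p: f"video of {p}"))
print(moderate("explicit scene", lambda p: p))  # blocked at layer 1
```

Running both layers matters because each catches failures the other misses, which is why the article describes filtering of prompts and outputs rather than one or the other.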

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further identifies that several platforms in Australia did not refer users to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI investment reshapes euro area markets and financial systems

Philip R. Lane, Member of the Executive Board of the ECB, highlighted in his speech at the ECB-SAFE-RCEA International Conference on the Climate-Macro-Finance Interface (3CMFI) that euro area firms with high AI intensity have experienced stronger revenue growth, operating margins, and earnings per share.

The advantage narrows when financial institutions are excluded, and internal funding remains essential, as well-capitalised firms are more likely to adopt AI while smaller firms face investment barriers.

European venture capital and private credit are growing but remain far below US levels, limiting start-up scaling and prompting some to relocate abroad.

Banks are embracing AI extensively, particularly for fraud detection, marketing, chatbots, and credit scoring. Proprietary tools are mostly developed in-house, while specialised external providers support cybersecurity and regulatory reporting.

AI boosts operational efficiency, risk assessment, and credit pricing, yet concentration in a few frontier firms and rising reliance on market-based finance introduce potential financial risks.

Lane noted that monetary policy implications are uncertain, as AI may enhance productivity and incomes differently depending on whether it is labour- or capital-augmenting.

High capital expenditure and increased energy demand during AI adoption could add inflationary pressure, while global concentration of AI activity in the US and China may limit domestic investment, influencing the euro area’s natural rate of interest.

The European Central Bank is systematically integrating AI into its analytical and operational environment. Machine-learning tools support forecasting, scenario analysis, and extraction of signals from alternative data, while workflow automation and agentic AI enhance efficiency and reduce manual workload.

The ECB’s digitalisation programme aims to scale AI across business processes, ensuring technology complements expert judgement while maintaining reliability, traceability, and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
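
The separation of agent behaviour from policy enforcement can be illustrated with a small sketch in which every action is routed through a policy layer the agent cannot bypass. The class and policy names here are invented for illustration; they are not OpenShell's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Declarative allow-lists, defined outside the agent (hypothetical)."""
    allowed_paths: set = field(default_factory=set)
    allowed_tools: set = field(default_factory=set)

class SandboxedAgent:
    """Every action goes through the policy check; the agent has no
    code path that skips enforcement, mirroring infrastructure-level
    control rather than in-model guardrails."""

    def __init__(self, policy: Policy):
        self._policy = policy
        self.audit_log: list = []

    def read_file(self, path: str) -> bool:
        ok = path in self._policy.allowed_paths
        self.audit_log.append(f"read:{path}:{'allow' if ok else 'deny'}")
        return ok

    def call_tool(self, tool: str) -> bool:
        ok = tool in self._policy.allowed_tools
        self.audit_log.append(f"tool:{tool}:{'allow' if ok else 'deny'}")
        return ok

agent = SandboxedAgent(Policy(allowed_paths={"/data/reports"},
                              allowed_tools={"search"}))
print(agent.read_file("/data/reports"))  # True: permitted by policy
print(agent.call_tool("shell"))          # False: denied and audit-logged
```

Because the policy object lives outside the agent, updating permissions or auditing behaviour requires no change to the agent itself, which is the monitoring benefit a unified policy layer is meant to provide.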

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI added to St Helens council strategic risk register

In the UK, the St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Council officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Corning licenses new ferrule technology to boost AI data centre fibre density

Corning has expanded its data centre connectivity portfolio through a licensing agreement with US Conec, gaining access to PRIZM TMT optical ferrule technology designed to increase fibre density within data centre environments, particularly for AI infrastructure.

The move reflects the growing pressure on data centre operators to handle higher connection densities as AI workloads scale and cluster architectures become more demanding.

The PRIZM TMT ferrule uses expanded-beam technology with precision-aligned microlenses rather than direct fibre contact, an approach intended to improve installation reliability, reduce sensitivity to contamination, and speed deployment.

As AI deployments expand, data centres are rapidly increasing the number of connected accelerators and shifting from traditional copper links to optical connections, driving a need for compact, high-performance connectors within tightly packed server and switch racks.

Mike O’Day, Senior Vice President and General Manager of Corning Optical Communications, said the company is strengthening its ability to deliver ‘scalable, fibre-rich solutions’ that help customers build ‘larger, faster, and more efficient AI clusters.’

The agreement positions Corning to address the connectivity demands that accompany large-scale AI infrastructure build-outs, where high connection density and consistent performance are essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scotland sets up national AI agency

The Scottish government has launched a dedicated national agency to drive AI strategy and support local tech companies. Leaders say this effort could help boost the economy and establish the nation as a hub for AI development.

Scotland’s strategy highlights existing tech firms and data projects, including plans for major computing campuses and partnerships with global technology companies. Several research institutions and supercomputing initiatives are contributing to innovation.

Healthcare is a focus for AI adoption, with studies showing that AI tools could improve cancer detection, speed up diagnoses, and reduce workload. Academic projects also aim to develop tools to detect early signs of dementia.

Scottish government officials have acknowledged ethical, workforce and environmental concerns around AI deployment. They say policies will include responsible use, job planning and efforts to maximise renewable energy in support of data infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe boosts AI, talent and investment to compete with US and China

Efforts to strengthen technological competitiveness in Europe focus on advancing AI capabilities, developing new forms of talent and improving access to investment.

Discussions at the CTx Tech Experience in Seville highlighted a growing consensus that innovation must scale more effectively if the region is to compete globally.

Participants emphasised that Europe continues to face structural challenges, including fragmented markets, regulatory complexity and limited capital for high-growth companies.

These constraints have made it more difficult for startups to expand, prompting calls for stronger coordination between public institutions and private investors.

AI is increasingly viewed as the foundation of the transformation. Industry leaders pointed to the emergence of new business opportunities driven by AI, alongside the need to translate innovation into scalable commercial outcomes.

At the same time, labour market dynamics are shifting towards hybrid skillsets that combine technical expertise with business understanding and critical thinking.

In such a context, strengthening Europe’s innovation capacity is seen as essential to competing with global powers such as the US and China.

As technological competition intensifies, the ability to align talent, capital and policy frameworks will play a decisive role in shaping the region’s position within the global digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Terafab initiative from Elon Musk targets AI and space computing

Elon Musk unveiled his ambitious Terafab project in Austin, describing it as the ‘most epic chip-building exercise in history.’ The initiative, led by Tesla, xAI, and SpaceX, aims to manufacture chips supporting one terawatt (1 trillion watts) of compute capacity annually, much of it intended for space applications.

The project will start with a state-of-the-art semiconductor manufacturing facility in Austin, supporting AI development, humanoid robotics, and space data centres. Musk highlighted current supply chain limitations, stating that building Terafab is essential to secure the chips his companies need.

Musk also shared his vision for a future shaped by ‘amazing abundance.’ Plans include launching satellites from the lunar surface and enabling civilian space travel to destinations such as Saturn, blending cutting-edge technology with long-term space ambitions.

Terafab represents a bold attempt to merge AI, robotics, and space exploration, positioning Musk’s companies at the forefront of next-generation technology and extraterrestrial innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telefónica Tech moves to combine AI and quantum computing

Telefónica Tech has partnered with three European firms to bring AI and quantum computing closer together. The collaboration aims to improve how advanced models are developed and deployed across different environments.

The initiative brings together Qilimanjaro Quantum Tech, Multiverse Computing and Qcentroid. Their combined expertise is expected to support more efficient, compact and locally deployable AI systems.

Quantum computing is seen as a way to reduce the heavy processing demands of large AI models. Faster computation could yield more accurate results while reducing the time required to solve complex problems.

Each partner contributes specialised capabilities, from quantum hardware and algorithms to software platforms and orchestration tools. These technologies could support applications such as simulations, edge AI and rapid prototyping.

Telefónica Tech is also strengthening its role in integrating AI and quantum solutions for enterprise clients. The move reflects a broader push to build scalable, sovereign and next-generation digital infrastructure in Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!