Australia eSafety warns on AI companion harms

Australia’s online safety regulator has found major gaps in how popular AI companion chatbots protect children from harmful and sexually explicit material. The transparency report assessed four services and concluded that age verification and content filters were inadequate for users under 18.

eSafety Commissioner Julie Inman Grant said that many AI companions marketed as offering friendship or emotional support can, without effective safeguards, expose young users to explicit chat and encourage harmful thoughts. Most of the assessed services failed to direct users to support when self-harm or suicide issues arose.

The report also found that several platforms lacked robust content monitoring or dedicated trust and safety teams, leaving children exposed to inappropriate material in both user inputs and AI outputs. Firms relied on basic self-declaration of age at signup rather than reliable checks.

New enforceable safety codes now require AI chatbots to block age-inappropriate content and offer crisis support tools, with potential civil penalties for breaches. Some providers have already updated age assurance features or restricted access in Australia following the regulator’s notices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data watchdogs seek safeguards in biotech law

The European Data Protection Board and the European Data Protection Supervisor have issued a joint opinion on the proposed European Biotech Act. Both bodies support efforts to streamline biotech regulation and modernise clinical trial rules.

Regulators welcome plans to harmonise the application of the Clinical Trials Regulation and create a single legal basis for processing personal data in trials. Greater legal clarity for sponsors and investigators is seen as a key benefit.

Strong safeguards are urged due to the sensitivity of health and genetic data. Recommendations include clearer definitions of data controller roles and limiting the proposed 25-year retention rule to essential trial files.

Further advice calls for defined purposes when reusing trial data, alignment with the AI Act, routine pseudonymisation, and lawful frameworks for regulatory sandboxes under the GDPR.
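
The regulators' call for routine pseudonymisation can be illustrated with a common implementation pattern: replacing direct identifiers with keyed hashes so that records remain linkable across datasets without exposing identities. The sketch below is a minimal illustration of that pattern; the field names and key handling are assumptions for demonstration, not anything prescribed by the opinion or the GDPR.

```python
import hmac
import hashlib

# Illustrative secret key; in practice it would be stored separately from
# the data (e.g. in a key management service) so that re-identification
# stays under the data controller's control.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymise_id(participant_id: str) -> str:
    """Derive a stable pseudonym from a participant identifier.

    HMAC-SHA256 with a secret key yields the same pseudonym for the same
    input, so trial records can still be linked across datasets without
    revealing the underlying identity.
    """
    return hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256).hexdigest()

record = {"participant_id": "NL-2025-00042", "age": 57, "arm": "treatment"}
record["participant_id"] = pseudonymise_id(record["participant_id"])
print(record)  # identifier replaced by an opaque but linkable pseudonym
```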

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-EFFECT builds EU testing facility for AI in critical energy infrastructure

As Europe moves towards its climate-neutrality goals, integrating AI into energy systems is being presented as a way to improve efficiency, resilience, and sustainability. The EU-funded AI-EFFECT project is developing a European testing and experimentation facility (TEF) to support the development and adoption of AI solutions for the energy industry while ensuring safety, reliability, and compliance with EU regulations.

The TEF is described as a virtual network linking existing laboratories and computing resources across several EU countries. It is designed to provide standardised testing environments, risk and certification workflows, and replicable methods for developing, testing, and validating AI applications for critical energy infrastructures under diverse, real-world conditions.

The facility operates through four national nodes in Denmark, Germany, the Netherlands, and Portugal, each focused on a different set of energy challenges. In Denmark, the node led by the Technical University of Denmark is testing AI in virtual and physical multi-energy systems, including coordination between electric power grid operations and district heating systems in the Triangle Region in Jutland and on the island of Bornholm.

In the Netherlands, the node at Delft University of Technology is extending the university’s ‘control room of the future’ with AI capabilities to address grid congestion as renewable generation increases.

In Portugal, the node led by INESC TEC is developing a trusted local energy data space intended to address privacy concerns and connectivity gaps through secure, consent-based energy data sharing. The AI-EFFECT project says consumers and prosumers will be able to manage data rights and permissions in line with EU regulations while working with AI-driven service providers on co-creation and testing.

In Germany, the Fraunhofer-led node is focused on AI for power distribution systems and is developing a near-realistic cyber-physical model to benchmark AI performance in congestion management and distributed energy resource integration against traditional engineering approaches.

Alberto Dognini, project coordinator at EPRI Europe in Ireland, wrote in an Enlit news item: ‘Together, these four nodes form the backbone of AI-EFFECT’s mission to make AI a trusted partner in Europe’s energy transition.’ He added: ‘From optimising multi-energy systems to enabling secure data sharing and improving grid resilience, these nodes will accelerate innovation while reducing risk for operators and consumers alike.’

AI-EFFECT is also sharing its work through public-facing initiatives, including the EPRI Current Podcast. In the episode ‘Exploring the AI-EFFECT on Europe’s Energy Future’, participants discuss the architecture and building blocks supporting distributed nodes across multiple countries and examine how the TEF could shape the future of Europe’s energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ITU to host AI for Good Global Summit in Geneva

The International Telecommunication Union (ITU) will organise the AI for Good Global Summit from 7 to 10 July 2026 at Palexpo in Geneva, Switzerland, according to an official announcement by the Swiss authorities.

On 6 and 7 July, the United Nations Global Dialogue on AI Governance will take place ahead of the summit. The dialogue is convened within the framework of a UN General Assembly resolution and will bring together policymakers, experts, and representatives of civil society to discuss approaches to AI governance.

The events will be held in parallel with the World Summit on the Information Society (WSIS) Forum (from 6 to 10 July), which focuses on issues related to digital cooperation and the development of the information society.

According to the official announcement, the co-location of these events is intended to facilitate exchanges between technical and policy communities working on AI and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.
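
The core idea behind a C2PA-style content credential is a signed manifest bound to the media's bytes: hash the file, sign the hash, and verify both later. The real standard uses X.509 certificate chains and embedded JUMBF containers, so the snippet below is only a simplified stand-in for that mechanism, with illustrative names and an HMAC in place of certificate signing.

```python
import hashlib
import hmac
import json

# Simplified stand-in for a C2PA-style manifest; a real implementation
# signs with certificates rather than a shared secret key.
SIGNING_KEY = b"illustrative-signing-key"

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Bind a provenance claim to the exact bytes of a media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"generator": generator, "sha256": digest, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media is unaltered and the claim is authentic."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(expected, manifest["signature"])

video = b"...generated video bytes..."
manifest = make_manifest(video, generator="example-video-model")
print(json.dumps(manifest, indent=2))
print(verify_manifest(video, manifest))         # True: untampered
print(verify_manifest(video + b"x", manifest))  # False: content was altered
```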

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.
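
A layered pipeline of this kind typically screens the prompt before generation and the output before release, since a benign prompt can still yield unsafe content. The sketch below shows that two-stage shape; the classifier, category names, and thresholds are hypothetical placeholders, as OpenAI has not published Sora's internal moderation code.

```python
# Hypothetical two-stage moderation pipeline: block disallowed prompts
# before generation, then re-check the generated output before release.

BLOCKED_CATEGORIES = {"sexual_content_minors", "self_harm_promotion", "illegal_activity"}

def classify(text: str) -> set:
    """Placeholder classifier; a real system would call a trained model."""
    flags = set()
    if "self-harm" in text.lower():
        flags.add("self_harm_promotion")
    return flags

def moderate_generation(prompt: str, generate):
    # Layer 1: screen the prompt before any generation happens.
    if classify(prompt) & BLOCKED_CATEGORIES:
        return None
    output = generate(prompt)
    # Layer 2: screen the output, because safe prompts can still
    # produce unsafe generations.
    if classify(output) & BLOCKED_CATEGORIES:
        return None
    return output

result = moderate_generation("a sunrise over mountains", lambda p: f"video({p})")
print(result)  # passes both layers; a flagged prompt or output returns None
```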

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further found that several platforms did not refer users in Australia to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
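
The general pattern NVIDIA describes, enforcement outside the agent rather than inside it, can be sketched as a policy layer that mediates every tool call so the agent has no code path around it. The example below illustrates that pattern only; it is not OpenShell's actual API, which is not detailed here, and all names and policy fields are assumptions.

```python
# Illustrative policy layer: the agent never calls tools directly; every
# request passes through an enforcement point the agent cannot bypass.

POLICY = {
    "read_file": {"allowed_paths": ["/workspace/"]},
    "send_email": None,  # not permitted for this agent at all
}

class PolicyViolation(Exception):
    pass

def enforce(tool: str, **kwargs):
    """Reject any call that falls outside the agent's declared policy."""
    rule = POLICY.get(tool)
    if rule is None:
        raise PolicyViolation(f"tool '{tool}' is not permitted")
    path = kwargs.get("path", "")
    if tool == "read_file" and not any(path.startswith(p) for p in rule["allowed_paths"]):
        raise PolicyViolation(f"path '{path}' is outside the sandbox")

def agent_call(tool: str, **kwargs):
    enforce(tool, **kwargs)  # enforcement happens outside the agent's own logic
    print(f"executing {tool} with {kwargs}")

agent_call("read_file", path="/workspace/report.txt")  # allowed
try:
    agent_call("read_file", path="/etc/passwd")        # blocked by policy
except PolicyViolation as e:
    print("blocked:", e)
```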

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI improves stroke care and reduces patient risks in major study

An AI-driven clinical decision support system that analyses medical scans and provides treatment recommendations was associated with better outcomes than standard approaches to stroke care. Researchers said the tool offers a more efficient and scalable method for improving treatment, particularly in resource-constrained healthcare systems.

The findings are based on more than 21,000 patients treated across 77 hospitals in China. Patients supported by the AI-driven clinical decision support system experienced fewer new vascular events, including stroke recurrence, heart attack, or related death, over follow-up periods of up to 12 months.

At three months, new vascular events occurred in 2.9% of patients using the system, compared with 3.9% in those receiving usual care, representing a 26% reduction. The benefit persisted at 12 months, with rates of 4% in the intervention group versus 5.5% in the control group.
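
The 26% figure is the relative reduction in event rates, which can be checked directly from the percentages reported above:

```python
# Relative risk reduction computed from the reported event rates.
def rrr(control: float, intervention: float) -> float:
    return (control - intervention) / control

print(f"{rrr(3.9, 2.9):.0%}")  # ~26% at three months (3.9% vs 2.9%)
print(f"{rrr(5.5, 4.0):.0%}")  # ~27% at twelve months (5.5% vs 4.0%)
```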

Patients receiving AI-supported treatment also showed improved performance on key stroke care quality measures, although no significant differences were observed in disability, mortality, or bleeding outcomes between the groups.

Researchers noted limitations, including the study design, which randomised hospitals rather than individual patients, and potential differences in follow-up care. However, they highlighted the system’s ease of integration into hospital workflows and its potential to strengthen stroke care delivery and long-term prevention strategies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI added to St Helens council strategic risk register

In the UK, St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect its operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Council officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media ban in Ecuador targets youth crime recruitment

A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime.

Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content.

Unlike similar debates elsewhere, the justification for the social media ban is rooted less in mental health or privacy concerns and more in security. Ecuador has experienced a sharp deterioration in public safety, with rising homicide rates, expanding criminal networks, and increasing pressure on state institutions.

Recent findings from Ecuador’s Organised Crime Observatory indicate that around 27% of minors approached by criminal groups report initial contact through social media platforms. Surveys conducted by ChildFund Ecuador further suggest that vulnerable adolescents are increasingly exposed to recruitment tactics that combine economic incentives with normalised portrayals of violence.

In that context, the proposed social media ban is framed as a preventative measure against criminal recruitment rather than solely a child protection tool. The initiative forms part of a wider regulatory shift, including new cybersecurity legislation and draft laws targeting recruitment practices conducted through digital channels.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!