Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on users self-declaring their age and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further identifies that several platforms did not refer users in Australia to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.


AI investment reshapes euro area markets and financial systems

Philip R. Lane, Member of the Executive Board of the ECB, highlighted in his speech at the ECB-SAFE-RCEA International Conference on the Climate-Macro-Finance Interface (3CMFI) that euro area firms with high AI intensity have experienced stronger revenue growth, operating margins, and earnings per share.

The advantage narrows when financial institutions are excluded, and internal funding remains essential, as well-capitalised firms are more likely to adopt AI while smaller firms face investment barriers.

European venture capital and private credit are growing but remain far below US levels, limiting start-up scaling and prompting some to relocate abroad.

Banks are embracing AI extensively, particularly for fraud detection, marketing, chatbots, and credit scoring. Proprietary tools are mostly developed in-house, while specialised external providers support cybersecurity and regulatory reporting.

AI boosts operational efficiency, risk assessment, and credit pricing, yet concentration in a few frontier firms and rising reliance on market-based finance introduce potential financial risks.

Lane noted that monetary policy implications are uncertain, as AI may enhance productivity and incomes differently depending on whether it is labour- or capital-augmenting.

High capital expenditure and increased energy demand during AI adoption could add inflationary pressure, while global concentration of AI activity in the US and China may limit domestic investment, influencing the euro area's natural rate of interest.

The European Central Bank is systematically integrating AI into its analytical and operational environment. Machine-learning tools support forecasting, scenario analysis, and extraction of signals from alternative data, while workflow automation and agentic AI enhance efficiency and reduce manual workload.

The ECB’s digitalisation programme aims to scale AI across business processes, ensuring technology complements expert judgement while maintaining reliability, traceability, and accountability.


NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
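NVIDIA has not published implementation details in this announcement, so the following is only a conceptual sketch of what enforcing policy outside the agent can look like. Every name in it (Policy, AgentSandbox, allowed_paths, and so on) is hypothetical and illustrative, not taken from OpenShell's actual interface.

```python
# Conceptual sketch only: NOT NVIDIA's OpenShell API. All class, field, and
# method names are hypothetical, chosen to illustrate infrastructure-level
# policy enforcement that an agent cannot override from inside its sandbox.
from dataclasses import dataclass, field


@dataclass
class Policy:
    allowed_paths: set = field(default_factory=set)   # files the agent may read
    allowed_tools: set = field(default_factory=set)   # tools the agent may invoke
    max_cpu_seconds: int = 60                          # example resource constraint


@dataclass
class AgentSandbox:
    """Checks live outside the agent, so the agent's own code or prompts
    cannot bypass them, which is the separation the announcement describes."""
    policy: Policy

    def read_file(self, path: str) -> str:
        if path not in self.policy.allowed_paths:
            raise PermissionError(f"policy denies access to {path}")
        with open(path, encoding="utf-8") as f:
            return f.read()

    def call_tool(self, name: str, *args):
        if name not in self.policy.allowed_tools:
            raise PermissionError(f"policy denies tool {name}")
        # ...dispatch to the approved tool here...


# A single policy object governs every agent instance wrapped by the sandbox.
sandbox = AgentSandbox(Policy(allowed_paths={"/data/reports.csv"},
                              allowed_tools={"search"}))
```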

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.


Sydney set to become hub for AI innovation with Oracle centre

Oracle has launched the AI Customer Excellence Centre (AI CEC) in Sydney to help organisations adopt and scale AI technologies across Australia and Oceania. The centre will act as a hub for collaboration and skills, letting businesses test AI solutions in real-world settings.

The AI CEC provides access to Oracle and partner technologies, with flexible deployment options through Oracle Cloud Infrastructure (OCI). Organisations can receive training, test early-stage AI innovations, and pilot proof-of-concept projects in secure cloud environments.

The centre supports industries such as healthcare, public sector, financial services, and telecommunications, helping companies accelerate AI adoption while improving efficiency and decision-making.

Experts highlight the centre’s potential to bridge the gap between AI experimentation and measurable business impact. Rising compute demand shows AI moving from pilots to production, while hands-on testing helps organisations reduce risk and validate initiatives.

Oracle plans to continue collaborating with governments, partners, and industry to ensure responsible, secure, and trustworthy AI adoption, reinforcing Australia’s position as a leader in the digital economy.


UK pushes platforms to tackle AI abuse and online violence against women

The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade.

In a letter published on 23 March 2026, Liz Kendall, Secretary of State for Science, Innovation and Technology, outlined expectations for platforms operating under the Online Safety Act.

The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse.

It confirms that sharing or threatening to share sexually explicit deepfakes without consent constitutes a criminal offence, while the non-consensual creation of such content has also been criminalised and is being designated as a priority offence under the Act.

Further measures include amendments to the Crime and Policing Bill to ban so-called ‘nudification’ tools and extend illegal content duties to AI chatbots.

The government is also introducing a requirement for platforms to remove non-consensual intimate images within 48 hours, with a focus on reducing repeated reporting burdens for victims.

The Secretary of State urged companies to implement recommendations from Ofcom’s guidance on online safety for women and girls, including risk assessments, stronger privacy settings, and limits on the visibility of harmful content.

Platforms are expected to comply by the end of the year, with progress to be monitored.


Pinterest chief calls for stricter youth rules

The chief executive of Pinterest has voiced support for governments banning access to social media for people under 16. He cited rising concerns about mental health, screen addiction and online harms among young users.

He praised the Australian decision to ban social media for under-16s and urged other nations to adopt similar protections. He argued that existing tech safety measures have fallen short of keeping children secure online.

The executive warned that AI enhancements in social platforms may amplify behavioural influence on teens. He compared tech companies' inaction to the way harmful industries have historically resisted public health safeguards.

He also highlighted surveys showing parental worries about explicit content and excessive screen time. Pinterest’s view supports calls for clear age limits, better tools for parents and stronger platform accountability.


AI improves stroke care and reduces patient risks in major study

An AI clinical decision support system that analyses medical scans and recommends treatments was associated with better outcomes than standard approaches to stroke care. Researchers said the tool offers a more efficient and scalable way to improve treatment, particularly in resource-constrained healthcare systems.

The findings are based on more than 21,000 patients treated across 77 hospitals in China. Patients supported by the AI-driven clinical decision support system experienced fewer new vascular events, including stroke recurrence, heart attack, or related death, over follow-up periods of up to 12 months.

At three months, new vascular events occurred in 2.9% of patients using the system, compared with 3.9% in those receiving usual care, representing a 26% reduction. The benefit persisted at 12 months, with rates of 4% in the intervention group versus 5.5% in the control group.
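For readers who want to check the arithmetic, the 26% figure is the relative risk reduction implied by the quoted event rates. A minimal sketch of that calculation, using only the percentages reported above:

```python
# Relative risk reduction implied by the event rates quoted in the article.
usual_care_3m, ai_3m = 0.039, 0.029      # 3.9% vs 2.9% at three months
usual_care_12m, ai_12m = 0.055, 0.040    # 5.5% vs 4% at twelve months

rrr_3m = (usual_care_3m - ai_3m) / usual_care_3m
rrr_12m = (usual_care_12m - ai_12m) / usual_care_12m

print(f"3-month relative risk reduction:  {rrr_3m:.0%}")   # ~26%
print(f"12-month relative risk reduction: {rrr_12m:.0%}")  # ~27%
```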

Patients receiving AI-supported treatment also showed improved performance on key stroke care quality measures, although no significant differences were observed in disability, mortality, or bleeding outcomes between the groups.

Researchers noted limitations, including the study design, which randomised hospitals rather than individual patients, and potential differences in follow-up care. However, they highlighted the system’s ease of integration into hospital workflows and its potential to strengthen stroke care delivery and long-term prevention strategies.


AI added to St Helens council strategic risk register

In the UK, the St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Council officials said effective risk management is vital to meeting the council's objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.


Corning licenses new ferrule technology to boost AI data centre fibre density

Corning has expanded its data centre connectivity portfolio through a licensing agreement with US Conec, gaining access to PRIZM TMT optical ferrule technology designed to increase fibre density within data centre environments, particularly for AI infrastructure.

The move reflects the growing pressure on data centre operators to handle higher connection densities as AI workloads scale and cluster architectures become more demanding.

The PRIZM TMT ferrule uses expanded-beam technology with precision-aligned microlenses rather than direct fibre contact, an approach intended to improve installation reliability, reduce sensitivity to contamination, and speed deployment.

As AI deployments expand, data centres are rapidly increasing the number of connected accelerators and shifting from traditional copper links to optical connections, driving a need for compact, high-performance connectors within tightly packed server and switch racks.

Mike O’Day, Senior Vice President and General Manager of Corning Optical Communications, said the company is strengthening its ability to deliver ‘scalable, fibre-rich solutions’ that help customers build ‘larger, faster, and more efficient AI clusters.’

The agreement positions Corning to address the connectivity demands that accompany large-scale AI infrastructure build-outs, where high connection density and consistent performance are essential.


Scotland sets up national AI agency

The Scottish government has launched a dedicated national agency to drive AI strategy and support local tech companies. Leaders say this effort could help boost the economy and establish the nation as a hub for AI development.

Scotland’s strategy highlights existing tech firms and data projects, including plans for major computing campuses and partnerships with global technology companies. Several research institutions and supercomputing initiatives are contributing to innovation.

Healthcare is a focus for AI adoption, with studies showing that AI tools could improve cancer detection, speed up diagnoses, and reduce workload. Academic projects also aim to develop tools to detect early signs of dementia.

Scottish government officials have acknowledged ethical, workforce and environmental concerns around AI deployment. They say policies will include responsible use, job planning and efforts to maximise renewable energy in support of data infrastructure.
