Australia’s eSafety regulator warns of AI companion harms

Australia’s online safety regulator has found major gaps in how popular AI companion chatbots protect children from harmful and sexually explicit material. The transparency report assessed four services and concluded that age verification and content filters were inadequate for users under 18.

eSafety Commissioner Julie Inman Grant said many AI companions marketed as offering friendship or emotional support can expose young users to explicit chat and encourage harmful thoughts without effective safeguards. Most services failed to direct users to support when self-harm or suicide issues arose.

The report also showed that several platforms lacked robust content monitoring or dedicated trust and safety teams, leaving children exposed to inappropriate prompts and responses from AI systems. Firms relied on basic age self-declaration at sign-up rather than reliable checks.

New enforceable safety codes now require AI chatbots to block age-inappropriate content and offer crisis support tools, with potential civil penalties for breaches. Some providers have already updated age assurance features or restricted access in Australia following the regulator’s notices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK’s CMA sets AI consumer law guidance

The UK Competition and Markets Authority has issued guidance warning firms that AI agents must follow the same consumer protection laws as human staff. Businesses remain legally responsible for AI actions, even when third parties supply tools.

Companies are advised to be transparent when customers interact with AI systems, particularly where people might assume a human response. Clear labelling and honest explanations of capabilities are considered essential for informed consumer decisions.

Proper training and testing of AI tools should ensure respect for refund rights, contract terms and accurate product information. Human oversight is recommended to prevent errors, misleading claims and so-called hallucinated outputs.

Rapid fixes are expected when problems emerge, especially for services affecting large audiences or vulnerable users. In the UK, breaches of consumer law can trigger enforcement action, heavy fines and mandatory compensation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Data watchdogs seek safeguards in biotech law

The European Data Protection Board and the European Data Protection Supervisor have issued a joint opinion on the proposed European Biotech Act. Both bodies support efforts to streamline biotech regulation and modernise clinical trial rules.

Regulators welcome plans to harmonise the application of the Clinical Trials Regulation and create a single legal basis for processing personal data in trials. Greater legal clarity for sponsors and investigators is seen as a key benefit.

Strong safeguards are urged due to the sensitivity of health and genetic data. Recommendations include clearer definitions of data controller roles and limiting the proposed 25-year retention rule to essential trial files.

Further advice calls for defined purposes when reusing trial data, alignment with the AI Act, routine pseudonymisation, and lawful frameworks for regulatory sandboxes under the GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU Market Integration Package prompts feedback from Circle

Circle has submitted feedback to the European Commission on its proposed Market Integration Package, aiming to strengthen capital markets integration and supervision across the EU.

The response praises digital finance reforms while recommending refinements to support institutional adoption and liquidity growth. Key recommendations include reforming the DLT Pilot Regime with adaptive thresholds, a clear path to permanent legislation, and accelerated updates.

Circle also calls for broader use of MiCA-compliant e-money tokens (EMTs) in securities settlement, ensuring alignment with the CSD Regulation and considering non-EU-issued stablecoins for cross-border interoperability.

The company urges careful calibration of centralised supervision under the European Securities and Markets Authority, focusing on systemic crypto firms and reducing administrative complexity for smaller providers.

Legal certainty regarding the use of EMTs as collateral is also highlighted, which would help EU markets remain globally competitive.

Circle emphasises the potential of clear and proportionate regulation to bridge traditional finance with on-chain infrastructure. The company positions regulated stablecoins like USDC and EURC as key tools for modernising Europe’s capital markets and unlocking new efficiency and liquidity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Europol-backed operation shuts down thousands of dark web fraud sites

A global law enforcement operation supported by Europol has led to the shutdown of more than 373,000 dark web websites linked to fraudulent activity and the advertisement of child sexual abuse material.

The operation, known as ‘Operation Alice’, was launched on 9 March 2026 under the leadership of German authorities, with participation from 23 countries. The investigation, which began in 2021, initially targeted a dark web platform referred to as ‘Alice with Violence CP’.

According to Europol, investigators identified a single operator responsible for managing a network of hundreds of thousands of onion domains. These websites advertised child sexual abuse material and cybercrime-as-a-service offerings, including access to stolen financial data and systems.

Authorities state that the services were fraudulent, designed to extract payments without delivering the advertised material.

The operation has so far resulted in the identification of 440 customers worldwide, with further investigations ongoing against more than 100 individuals. Law enforcement agencies also seized 105 servers and multiple electronic devices during the coordinated action.

Europol provided analytical support, facilitated information exchange, and assisted in tracing cryptocurrency transactions linked to the network.

Authorities also reported that measures were taken throughout the investigation to identify and protect children at risk. An international arrest warrant has been issued for the suspected operator, who is reported to have generated significant profits through the scheme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further finds that several platforms did not refer users in Australia to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Registration revocations hit non-compliant crypto firms in Canada

Canada has tightened crypto oversight, revoking the registrations of nearly three dozen firms over compliance failures. The move follows investigative reporting that uncovered widespread irregularities in the sector.

The Financial Transactions and Reports Analysis Centre of Canada removed 23 companies in one week, adding to previous actions against about a dozen other crypto firms.

Officials described the shift as part of a broader effort to address risks tied to virtual currencies, including fraud and money laundering.

Findings from the International Consortium of Investigative Journalists’ investigation highlighted clusters of crypto businesses operating without proper registration, particularly in Toronto.

Many of these services reportedly focused on converting digital assets into cash, raising concerns about gaps in oversight and compliance with anti-money laundering rules.

Authorities also flagged suspicious transaction patterns, including activity linked to wallets allegedly associated with Iran-backed groups. While regulators have promised further action, analysts warn that delayed enforcement and structural weaknesses may continue to expose the system to illicit financial flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Pinterest chief calls for stricter youth rules

The chief executive of Pinterest has voiced support for governments banning access to social media for people under 16. He cited rising concerns about mental health, screen addiction and online harms among young users.

He praised the Australian decision to ban social media for under-16s and urged other nations to adopt similar protections. He argued that existing tech safety measures have fallen short of keeping children secure online.

The executive warned that AI enhancements in social platforms may amplify behavioural influence on teens. He compared tech companies’ inaction to the past resistance of harmful industries to public health safeguards.

He also highlighted surveys showing parental worries about explicit content and excessive screen time. Pinterest’s view supports calls for clear age limits, better tools for parents and stronger platform accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI added to St Helens council strategic risk register

In the UK, St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot