Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on users' self-declared ages for age verification and did not consistently monitor inputs or outputs across all the AI models they used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further finds that several platforms did not refer users in Australia to crisis or mental health support services when harmful interactions were detected.
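
The safeguards the report finds missing are concrete engineering controls: screening what users send and what the model returns, and routing self-harm signals to crisis support rather than back into the conversation. A minimal, illustrative sketch of such a layer follows; the keyword lists are crude stand-ins for the trained safety classifiers a real service would use, and every helper name is invented for this example rather than taken from any reviewed service.

```python
# Illustrative sketch only: a minimal safety layer of the kind the
# eSafety report says was missing. The keyword checks are stand-ins
# for trained safety classifiers.

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "You can reach Lifeline Australia any time on 13 11 14."
)

SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm", "hurt myself"}
EXPLICIT_TERMS = {"explicit-term-1", "explicit-term-2"}  # placeholders


def flags_self_harm(text: str) -> bool:
    """Crude stand-in for a self-harm/suicide safety classifier."""
    lowered = text.lower()
    return any(term in lowered for term in SELF_HARM_TERMS)


def flags_explicit(text: str) -> bool:
    """Crude stand-in for a sexually-explicit-content classifier."""
    lowered = text.lower()
    return any(term in lowered for term in EXPLICIT_TERMS)


def moderated_reply(user_message: str, model_reply: str) -> str:
    """Screen both the input and the output of the companion model,
    and route self-harm signals to crisis support instead of
    continuing the conversation."""
    if flags_self_harm(user_message) or flags_self_harm(model_reply):
        return CRISIS_REFERRAL
    if flags_explicit(model_reply):
        return "I can't continue with that topic."
    return model_reply
```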

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Canada revokes registrations of non-compliant crypto firms

Canada has increased crypto oversight, revoking registrations for nearly three dozen firms due to compliance failures. The move follows investigative reporting that uncovered widespread irregularities in the sector.

The Financial Transactions and Reports Analysis Centre of Canada (FINTRAC) revoked the registrations of 23 companies in a single week, adding to earlier actions against about a dozen other crypto firms.

Officials described the shift as part of a broader effort to address risks tied to virtual currencies, including fraud and money laundering.

An investigation by the International Consortium of Investigative Journalists highlighted clusters of crypto businesses operating without proper registration, particularly in Toronto.

Many of these services reportedly focused on converting digital assets into cash, raising concerns about gaps in oversight and compliance with anti-money laundering rules.

Authorities also flagged suspicious transaction patterns, including activity linked to wallets allegedly associated with Iran-backed groups. While regulators have promised further action, analysts warn that delayed enforcement and structural weaknesses may continue to expose the system to illicit financial flows.

Sydney set to become hub for AI innovation with Oracle centre

Oracle has launched the AI Customer Excellence Centre (AI CEC) in Sydney to help organisations adopt and scale AI technologies across Australia and Oceania. The centre will act as a hub for collaboration and skills development, letting businesses test AI solutions in real-world settings.

The AI CEC provides access to Oracle and partner technologies, with flexible deployment options through Oracle Cloud Infrastructure (OCI). Organisations can receive training, test early-stage AI innovations, and pilot proof-of-concept projects in secure cloud environments.

The centre supports industries such as healthcare, the public sector, financial services, and telecommunications, helping companies accelerate AI adoption while improving efficiency and decision-making.

Experts highlight the centre’s potential to bridge the gap between AI experimentation and measurable business impact. Rising compute demand shows AI moving from pilots to production, while hands-on testing helps organisations reduce risk and validate initiatives.

Oracle plans to continue collaborating with governments, partners, and industry to ensure responsible, secure, and trustworthy AI adoption, reinforcing Australia’s position as a leader in the digital economy.

UK pushes platforms to tackle AI abuse and online violence against women

The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade.

In a letter published on 23 March 2026, Science, Innovation and Technology Secretary Liz Kendall outlined expectations for platforms operating under the Online Safety Act.

The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse.

It confirms that sharing or threatening to share sexually explicit deepfakes without consent constitutes a criminal offence, while the non-consensual creation of such content has also been criminalised and is being designated as a priority offence under the Act.

Further measures include amendments to the Crime and Policing Bill to ban so-called ‘nudification’ tools and extend illegal content duties to AI chatbots.

The government is also introducing a requirement for platforms to remove non-consensual intimate images within 48 hours, with a focus on reducing repeated reporting burdens for victims.
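
One way platforms could honour the goal of reducing repeated reporting is to fingerprint an image the first time a victim reports it and then recognise re-uploads automatically. The sketch below illustrates that idea under stated assumptions: the class and its API are invented for this example, and deployed systems typically use perceptual hashes (such as PhotoDNA or PDQ) that survive resizing and re-encoding, rather than the exact SHA-256 match shown here.

```python
# Illustrative sketch only: fingerprint an intimate image when it is
# first reported, then catch re-uploads without a fresh report.
# SHA-256 gives exact matches only; real deployments use perceptual
# hashing so altered copies still match.

import hashlib


def fingerprint(image_bytes: bytes) -> str:
    """Exact fingerprint of an image file's bytes."""
    return hashlib.sha256(image_bytes).hexdigest()


class ReportedImageRegistry:
    """Fingerprints of images already reported once by a victim."""

    def __init__(self) -> None:
        self._reported: set[str] = set()

    def register_report(self, image_bytes: bytes) -> None:
        """Record a report so future uploads can match automatically."""
        self._reported.add(fingerprint(image_bytes))

    def is_previously_reported(self, image_bytes: bytes) -> bool:
        """Check an upload against already-reported images."""
        return fingerprint(image_bytes) in self._reported


# Usage: the same file re-uploaded later is caught without the
# victim having to report it again.
registry = ReportedImageRegistry()
original = b"...bytes of the reported image..."
registry.register_report(original)
assert registry.is_previously_reported(original)
```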

The Secretary of State urged companies to implement recommendations from Ofcom’s guidance on online safety for women and girls, including risk assessments, stronger privacy settings, and limits on the visibility of harmful content.

Platforms are expected to comply by the end of the year, with progress to be monitored.

Pinterest chief calls for stricter youth rules

The chief executive of Pinterest has voiced support for governments banning access to social media for people under 16. He cited rising concerns about mental health, screen addiction and online harms among young users.

He praised the Australian decision to ban social media for under-16s and urged other nations to adopt similar protections. He argued that existing tech safety measures have fallen short of keeping children secure online.

The executive warned that AI enhancements in social platforms may amplify behavioural influence on teens. He compared tech companies' inaction to the way harmful industries have historically resisted public health safeguards.

He also highlighted surveys showing parental worries about explicit content and excessive screen time. Pinterest’s view supports calls for clear age limits, better tools for parents and stronger platform accountability.

AI added to St Helens council strategic risk register

In the UK, St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect council operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Officials said effective risk management is vital to meeting the council's objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Scotland sets up national AI agency

The Scottish government has launched a dedicated national agency to drive AI strategy and support local tech companies. Leaders say this effort could help boost the economy and establish the nation as a hub for AI development.

Scotland’s strategy highlights existing tech firms and data projects, including plans for major computing campuses and partnerships with global technology companies. Several research institutions and supercomputing initiatives are contributing to innovation.

Healthcare is a focus for AI adoption, with studies showing that AI tools could improve cancer detection, speed up diagnoses, and reduce workload. Academic projects also aim to develop tools to detect early signs of dementia.

Scottish government officials have acknowledged ethical, workforce and environmental concerns around AI deployment. They say policies will cover responsible use, workforce planning and efforts to maximise the use of renewable energy for data infrastructure.

Deepfakes scandal puts Elon Musk and X under scrutiny in France

French prosecutors have escalated concerns about deepfakes linked to Elon Musk’s platform X, alerting US authorities to suspicions that manipulated content may have been used to influence the company’s valuation.

According to the Paris prosecutor’s office, the controversy surrounding sexually explicit deepfakes generated by Grok, X’s AI tool, may have been deliberately amplified to artificially boost the value of X and its associated AI entity ahead of a planned stock market listing in June 2026.

Authorities in France confirmed they had contacted the US Department of Justice and legal representatives at the Securities and Exchange Commission to share findings related to the deepfakes investigation and potential financial implications.

The case builds on an ongoing French probe into X, which initially focused on alleged algorithmic interference in domestic politics. Investigations have since expanded to include the spread of Holocaust denial content and the dissemination of sexualised deepfakes through Grok.

French regulators have taken additional steps, including summoning Musk for a voluntary interview and conducting searches at X’s local offices, actions he has described as politically motivated. Parallel investigations have also been launched in the UK and across the European Union into the use of AI tools to generate harmful deepfakes involving women and minors.

Social media ban in Ecuador targets youth crime recruitment

A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime.

Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content.

Unlike similar debates elsewhere, the justification for the social media ban is rooted less in mental health or privacy concerns and more in security. Ecuador has experienced a sharp deterioration in public safety, with rising homicide rates, expanding criminal networks, and increasing pressure on state institutions.

Recent findings from Ecuador’s Organised Crime Observatory indicate that around 27% of minors approached by criminal groups report initial contact through social media platforms. Surveys conducted by ChildFund Ecuador further suggest that vulnerable adolescents are increasingly exposed to recruitment tactics that combine economic incentives with normalised portrayals of violence.

In that context, the proposed social media ban is framed as a preventative measure against criminal recruitment rather than solely a child protection tool. The initiative forms part of a wider regulatory shift, including new cybersecurity legislation and draft laws targeting recruitment practices conducted through digital channels.

US releases national AI policy framework

The Trump Administration unveiled a national AI framework to boost competitiveness, security, and benefits for Americans. The plan seeks to ensure that AI innovation supports all citizens while maintaining public trust in the technology.

Six key objectives form the foundation of the policy. These include protecting children online, empowering parents with tools to manage digital safety, strengthening communities and small businesses, respecting intellectual property, defending free speech, and fostering innovation.

The framework also prioritises workforce development to prepare Americans for AI-driven job opportunities.

Federal uniformity is considered critical to the plan’s success. The Administration warns that a patchwork of state regulations could stifle innovation and reduce the United States’ ability to lead globally.

Congress is encouraged to collaborate closely to implement the framework nationwide.

The Administration emphasises that the United States must lead the AI race, ensuring the benefits of AI reach all Americans while addressing challenges such as privacy, security, and equitable access to opportunities.
