Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further identifies that several platforms operating in Australia did not refer users to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK pushes platforms to tackle AI abuse and online violence against women

The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade.

In a letter published on 23 March 2026, Science, Innovation and Technology Secretary Liz Kendall outlined expectations for platforms operating under the Online Safety Act.

The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse.

It confirms that sharing or threatening to share sexually explicit deepfakes without consent constitutes a criminal offence, while the non-consensual creation of such content has also been criminalised and is being designated as a priority offence under the Act.

Further measures include amendments to the Crime and Policing Bill to ban so-called ‘nudification’ tools and extend illegal content duties to AI chatbots.

The government is also introducing a requirement for platforms to remove non-consensual intimate images within 48 hours, with a focus on reducing repeated reporting burdens for victims.

The Secretary of State urged companies to implement recommendations from Ofcom’s guidance on online safety for women and girls, including risk assessments, stronger privacy settings, and limits on the visibility of harmful content.

Platforms are expected to comply by the end of the year, with progress to be monitored.


AI added to St Helens council strategic risk register

St Helens Council in the UK has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Council officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.


Deepfakes scandal puts Elon Musk and X under scrutiny in France

French prosecutors have escalated concerns about deepfakes linked to Elon Musk’s platform X, alerting US authorities to suspicions that manipulated content may have been used to influence the company’s valuation.

According to the Paris prosecutor’s office, the controversy surrounding sexually explicit deepfakes generated by Grok, X’s AI tool, may have been deliberately amplified to artificially boost the value of X and its associated AI entity ahead of a planned stock market listing in June 2026.

Authorities in France confirmed they had contacted the US Department of Justice and legal representatives at the Securities and Exchange Commission to share findings related to the deepfakes investigation and potential financial implications.

The case builds on an ongoing French probe into X, which initially focused on alleged algorithmic interference in domestic politics. Investigations have since expanded to include the spread of Holocaust denial content and the dissemination of sexualised deepfakes through Grok.

French regulators have taken additional steps, including summoning Musk for a voluntary interview and conducting searches at X’s local offices, actions he has described as politically motivated. Parallel investigations have also been launched in the UK and across the European Union into the use of AI tools to generate harmful deepfakes involving women and minors.


Europe boosts AI, talent and investment to compete with US and China

Efforts to strengthen technological competitiveness in Europe focus on advancing AI capabilities, developing new forms of talent and improving access to investment.

Discussions at the CTx Tech Experience in Seville highlighted a growing consensus that innovation must scale more effectively if the region is to compete globally.

Participants emphasised that Europe continues to face structural challenges, including fragmented markets, regulatory complexity and limited capital for high-growth companies.

These constraints have made it more difficult for startups to expand, prompting calls for stronger coordination between public institutions and private investors.

AI is increasingly viewed as the foundation of the transformation. Industry leaders pointed to the emergence of new business opportunities driven by AI, alongside the need to translate innovation into scalable commercial outcomes.

At the same time, labour market dynamics are shifting towards hybrid skillsets that combine technical expertise with business understanding and critical thinking.

In such a context, strengthening Europe’s innovation capacity is seen as essential to competing with global powers such as the US and China.

As technological competition intensifies, the ability to align talent, capital and policy frameworks will play a decisive role in shaping the region’s position within the global digital economy.


EU lawmakers call for faster enforcement of digital competition rules

Members of the European Parliament are calling for more rapid progress in implementing the bloc’s digital competition framework, with particular focus on the Digital Markets Act.

In a recent resolution, lawmakers urged the European Commission to ensure timely and effective enforcement of the rules designed to regulate large online platforms. The legislation aims to address concerns around market dominance and promote fair competition across the digital economy.

The discussions reflect ongoing concerns that delays in enforcement could undermine the framework’s effectiveness, particularly as major technology companies continue to expand their influence. Platforms such as Google, Apple and Meta are among those expected to comply with the new obligations.

At the same time, policymakers are balancing regulatory oversight with the need to maintain innovation and competitiveness. The debate forms part of a broader effort in the EU to strengthen digital governance and reinforce the region’s position in global technology markets.


EU faces pressure to strengthen digital safeguards ahead of elections

Emmanuel Macron has called for stronger enforcement of the EU’s digital rules, urging Ursula von der Leyen to act against risks linked to foreign interference in elections. The request comes amid growing concern over attempts to influence democratic processes across Europe.

In a letter addressed to the Commission, Macron stressed the importance of safeguarding electoral integrity in a challenging geopolitical environment.

He wrote:

‘In a geopolitical context marked by a multiplication of hostile stances against the European model and its democratic values, it is crucial that the Union… ensure the integrity of civic discourse and electoral processes’.

The proposal focuses on stricter enforcement instead of new legislation, particularly regarding the Digital Services Act. European authorities are encouraged to ensure that online platforms properly assess and mitigate systemic risks, including the spread of manipulated content and coordinated disinformation.

Attention is also directed toward algorithmic amplification, AI-generated content labelling and the removal of fake accounts.

As multiple elections approach across the EU, policymakers are considering how to apply existing regulatory tools more effectively to protect democratic systems.


EU digital wallet nears rollout

Interoperability tests for the European Digital Identity Wallet have marked a significant step towards deployment, following a major industry-wide exercise. Systems were tested under real conditions to ensure compatibility across providers.

The initiative forms part of the EU’s plan to provide citizens with a secure digital wallet for identification and online services. The system will allow users to store identity data and access services, including electronic signatures.

Results showed that most test scenarios were successfully completed, confirming that independent systems can work together effectively. The exercise also highlighted areas requiring further refinement ahead of wider implementation.

EU officials and industry leaders said the progress supports the development of a unified digital ecosystem. The wallet is expected to simplify everyday services while strengthening security and trust in digital identity solutions.


UNESCO promotes safe AI use and gender equality in Caribbean workshop

A regional workshop in Kingston has been organised by UNESCO to explore the relationship between AI, gender equality and online safety, reflecting wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

‘Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them,’ said the Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica.

‘The pursuit of equality must extend into every space where women live, work, and where they connect and express themselves – including the digital world,’ said Eric Falt, Regional Director and Representative of UNESCO.

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.


TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures instead of an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development. While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.
