Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly when AI and children are concerned.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the AI tool owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on very large online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.


Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation towards monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.


Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.


X restricts Grok image editing after global backlash

Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.

The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.

UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.

Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.

International pressure has continued to build, with some countries banning Grok entirely instead of waiting for platform-led restrictions.

Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.


France weighs social media ban for under 15s

France’s health watchdog has warned that social media harms adolescent mental health, particularly among younger girls. The assessment is based on a five-year scientific review of existing research.

ANSES said online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards. Experts found that girls, LGBT youth and vulnerable teens face higher psychological risks.

France is debating legislation to ban social media access for children under 15. President Emmanuel Macron supports stronger age restrictions and platform accountability.

The watchdog urged changes to algorithms and default settings to prioritise child well-being. Similar debates have emerged globally following Australia’s introduction of a teenage platform ban.


UK considers social media limits for youth

Keir Starmer has told Labour MPs that he is open to an Australian-style ban on social media for young people, following concerns about the amount of time children spend on screens.

The prime minister said reports of very young children using phones for hours each day have increased anxiety about the effects of digital platforms on under-16s.

Starmer previously opposed such a ban, arguing that enforcement would prove difficult and might push teenagers towards unregulated online spaces rather than safer platforms. Growing political momentum across Westminster, combined with Australia’s decision to act, has led to a reassessment of that position.

Speaking to MPs, Starmer said different enforcement approaches were being examined and added that phone use during school hours should be restricted.

UK ministers have also revisited earlier proposals aimed at reducing the addictive design of social media and strengthening safeguards on devices sold to teenagers.

Support for stricter measures has emerged across party lines, with senior figures from Labour, the Conservatives, the Liberal Democrats and Reform UK signalling openness to a ban.

A final decision is expected within months as ministers weigh child safety, regulation and practical implementation.


Grok controversy fuels political backlash in Northern Ireland

A Northern Ireland politician, Cara Hunter of the Social Democratic and Labour Party (SDLP), has quit X after renewed concerns over Grok AI misuse. She cited failures to protect women and children online.

The decision follows criticism of Grok AI features enabling non-consensual sexualised images. UK regulators have launched investigations under online safety laws.

UK ministers plan to criminalise creating intimate deepfakes and supplying related tools. Ofcom is examining whether X breached its legal duties.

Political leaders and rights groups say enforcement must go further. X says it removes illegal content and has restricted Grok’s image functions on the platform.


New Spanish bill targets AI misuse of images and voices

Spain’s government has approved draft legislation that would tighten consent rules for AI-generated content, aiming to curb deepfakes and strengthen protections for the use of people’s images and voices. The proposal responds to growing concerns in Europe about AI being used to create harmful material, especially sexual content produced without the subject’s permission.

Under the draft, the minimum age to consent to the use of one’s own image would be set at 16, and stricter limits would apply to reusing images found online or reproducing a person’s voice or likeness through AI without authorisation. Spain’s Justice Minister Félix Bolaños warned that sharing personal photos on social media should not be treated as blanket approval for others to reuse them in different contexts.

The reform explicitly targets commercial misuse by classifying the use of AI-generated images or voices for advertising or other business purposes without consent as illegitimate. At the same time, it would still allow creative, satirical, or fictional uses involving public figures, so long as the material is clearly labelled as AI-generated.

Spain’s move aligns with broader EU efforts, as the bloc is working toward rules that would require member states to criminalise non-consensual sexual deepfakes by 2027. The push comes amid rising scrutiny of AI tools and real-world cases that have intensified calls for more precise legal boundaries, including a recent request by the Spanish government for prosecutors to review whether specific AI-generated material could fall under child pornography laws.

The bill is not yet final. It must go through a public consultation process before returning to the government for final approval and then heading to parliament.


Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, authorities in Australia have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
