Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Teen protections already include PG-13-guided responses and restrictions on sensitive discussions, such as self-harm or eating disorders. The company said it also uses AI detection systems to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and to strengthen child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Meta seeks delay in complying with Dutch court order on Facebook and Instagram timelines

Meta has yet to adjust Facebook and Instagram’s timelines despite an Amsterdam court ruling that found its current design violates European law. The company says it needs more time to make the required changes and has asked the court to extend its deadline until 31 January 2026.

The dispute stems from Meta’s use of algorithmic recommendation systems that determine which posts appear in users’ feeds and in what order. Both Instagram and Facebook offer a chronological timeline option, but it is hard to find and reverts to the algorithmic feed as soon as users close the app.

The Amsterdam court earlier ruled that these systems, which reset user preferences and hide options for chronological viewing, breach the Digital Services Act (DSA) by denying users genuine autonomy, freedom of choice, and control over how information is presented.

The judge ordered Meta to modify both apps within two weeks or face penalties of €100,000 per day, up to €5 million. More than two weeks later, Meta has yet to comply, arguing that the technical changes cannot be completed within the court’s timeline.

Dutch civil rights group Bits of Freedom, which brought the case, criticised the delay as a refusal to take responsibility. ‘The legislator wants it, experts say it can be done, and the court says it must be done. Yet Meta fails to bring its platforms into line with our legislation,’ Evelyn Austin, the organisation’s director, said in a statement.

The Amsterdam Court of Appeal will review Meta’s request for an extension on 27 October.

Zuckerberg to testify in landmark trial over social media’s harm to youth

A US court has ordered Meta CEO Mark Zuckerberg to appear and testify in a high-stakes trial over social media’s effects on children and adolescents. The case, brought by parents and school districts, alleges that platforms contributed to mental health harms by deploying addictive algorithms and weak moderation in their efforts to retain user engagement.

The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.

Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be compelled to testify. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.

Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.

EU states split over children’s social media rules

European leaders remain divided over how to restrict children’s use of social media platforms. While most governments agree stronger protections are needed, there is no consensus on enforcement or age limits.

Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.

France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.

Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.

Roblox faces Dutch investigation over child welfare concerns

Dutch officials will study how Roblox affects young users, focusing on safety, mental health, and privacy. The assessment aims to identify both the benefits and risks of the gaming platform. Authorities say the findings will help guide new policies and support parents in protecting their children online.

Roblox has faced mounting criticism over unsafe content and the presence of online predators. Reports of games containing violent or sexual material have raised alarms among parents and child protection groups.

The US state of Louisiana recently sued Roblox, alleging that it enabled systemic child exploitation through negligence. Dutch experts argue that similar concerns justify a thorough review in the Netherlands.

Previous Dutch investigations have examined platforms such as Instagram, TikTok, and Snapchat under similar children’s rights frameworks. Policymakers hope the Roblox review will set clearer standards for digital child safety across Europe.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.

Renew Europe urges European Commission to curb addictive design and bolster child safety online

Renew Europe is urging the European Commission to deploy its legal tools, including the Digital Services Act (DSA), GDPR and the AI Act, to curb ‘addictive design’ and protect young people’s mental health, as evidence from the Commission’s Joint Research Centre shows intensive social media use among adolescents.

Momentum is building across Brussels and the Member States. The EU digital ministers endorsed the ‘Jutland Declaration’ on child safety online. The push comes after von der Leyen’s call for tougher limits on children’s social media use in her State of the Union address and the Commission’s publication of DSA guidelines for platforms on minor protection.

Renew wants clearer rules against dark patterns and mandatory child-safe defaults such as limiting night-time notifications, switching off autoplay, banning screenshots of minors’ content, and removing filters linked to body-image risks.

The group also calls for robust, privacy-preserving age checks and regular updates to DSA guidance, alongside stronger enforcement powers for national Digital Services Coordinators. Further action may come via the Digital Fairness Act, now out for consultation until 24 October 2025, which targets addictive design and misleading influencer practices.
