Meta seeks delay in complying with Dutch court order on Facebook and Instagram timelines

Meta has yet to adjust Facebook and Instagram’s timelines despite an Amsterdam court ruling that found its current design violates European law. The company says it needs more time to make the required changes and has asked the court to extend its deadline until 31 January 2026.

The dispute stems from Meta’s use of algorithmic recommendation systems that determine which posts appear on users’ feeds and in what order. Both Instagram and Facebook offer an option to display the timeline in chronological order, but it is hard to find, and the feed reverts to the algorithmic default as soon as users close the app.

The Amsterdam court earlier ruled that these systems, which reset user preferences and hide options for chronological viewing, breach the Digital Services Act (DSA) by denying users genuine autonomy, freedom of choice, and control over how information is presented.

The judge ordered Meta to modify both apps within two weeks or face penalties of €100,000 per day, up to a maximum of €5 million. More than two weeks later, Meta has yet to comply, arguing that the technical changes cannot be completed within the court’s deadline.

Dutch civil rights group Bits of Freedom, which brought the case, criticised the delay as a refusal to take responsibility. ‘The legislator wants it, experts say it can be done, and the court says it must be done. Yet Meta fails to bring its platforms into line with our legislation,’ Evelyn Austin, the organisation’s director, said in a statement.

The Amsterdam Court of Appeal will review Meta’s request for an extension on 27 October.

Zuckerberg to testify in landmark trial over social media’s harm to youth

A US court has ruled that Meta CEO Mark Zuckerberg must appear and testify in a high-stakes trial over social media’s effects on children and adolescents. The case, brought by parents and school districts, alleges that the platforms contributed to mental health harms by deploying addictive algorithms and maintaining weak moderation in order to keep users engaged.

The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.

Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be dragged into testimony. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.

Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.

EU states split over children’s social media rules

European leaders remain divided over how to restrict children’s use of social media platforms. While most governments agree stronger protections are needed, there is no consensus on enforcement or age limits.

Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.

France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.

Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under the auspices of Greek President Konstantinos Tasoulas, also urged caution when experimenting with AI on minors due to potential long-term risks.

Roblox faces Dutch investigation over child welfare concerns

Dutch officials will study how the gaming platform Roblox affects young users, focusing on safety, mental health, and privacy. The assessment aims to identify both the benefits and risks of the platform. Authorities say the findings will help guide new policies and support parents in protecting their children online.

Roblox has faced mounting criticism over unsafe content and the presence of online predators. Reports of games containing violent or sexual material have raised alarms among parents and child protection groups.

The US state of Louisiana recently sued Roblox, alleging that it enabled systemic child exploitation through negligence. Dutch experts argue that similar concerns justify a thorough review in the Netherlands.

Previous Dutch investigations have examined platforms such as Instagram, TikTok, and Snapchat under similar children’s rights frameworks. Policymakers hope the Roblox review will set clearer standards for digital child safety across Europe.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on teens’ use of AI characters. The company is also working to detect and discourage attempts by users to falsify their age to bypass restrictions.

Renew Europe urges European Commission to curb addictive design and bolster child safety online

Renew Europe is urging the European Commission to deploy its legal tools, including the Digital Services Act (DSA), the GDPR and the AI Act, to curb ‘addictive design’ and protect young people’s mental health, citing evidence from the Commission’s Joint Research Centre of intensive social media use among adolescents.

Momentum is building across Brussels and the Member States. EU digital ministers have endorsed the ‘Jutland Declaration’ on child safety online. The push follows Commission President Ursula von der Leyen’s call for tougher limits on children’s social media use in her State of the Union address and the Commission’s publication of DSA guidelines on protecting minors.

Renew wants clearer rules against dark patterns and mandatory child-safe defaults such as limiting night-time notifications, switching off autoplay, banning screenshots of minors’ content, and removing filters linked to body-image risks.

The group also calls for robust, privacy-preserving age checks and regular updates to DSA guidance, alongside stronger enforcement powers for national Digital Services Coordinators. Further action may come via the Digital Fairness Act, now out for consultation until 24 October 2025, which targets addictive design and misleading influencer practices.

Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

The fine was initially set at £45 million but was reduced after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks facing organisations that handle large volumes of sensitive data.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, a trend also seen at Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as a pledge to ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.
