Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Existing teen protections include responses guided by PG-13 standards and restrictions on sensitive discussions, such as those involving self-harm or eating disorders. The company said it also uses AI detection systems to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

EU warns Meta and TikTok over transparency failures

The European Commission has found that Meta and TikTok violated key transparency obligations under the EU’s Digital Services Act (DSA). According to preliminary findings, both companies failed to provide adequate data access to researchers studying public content on their platforms.

The Commission said Facebook, Instagram, and TikTok imposed ‘burdensome’ conditions that left researchers with incomplete or unreliable data, hampering efforts to investigate the spread of harmful or illegal content online.

Meta faces additional accusations of breaching the DSA’s rules on user reporting and complaints. The Commission said the ‘Notice and Action’ systems on Facebook and Instagram were not user-friendly and contained ‘dark patterns’, manipulative design choices that discouraged users from reporting problematic content.

Moreover, Meta allegedly failed to give users sufficient explanations when their posts or accounts were removed, undermining transparency and accountability requirements set by the law.

Both companies have the opportunity to respond before the Commission issues final decisions. However, if the findings are confirmed, Meta and TikTok could face fines of up to 6% of their global annual revenue.

The EU executive also announced new rules, effective next week, that will expand data access for ‘vetted’ researchers, allowing them to study internal platform dynamics and better understand how large social media platforms shape online information flows.

Zuckerberg to testify in landmark trial over social media’s harm to youth

A US court has ordered Meta CEO Mark Zuckerberg to appear and testify in a high-stakes trial over social media’s effects on children and adolescents. The case, brought by parents and school districts, alleges that platforms contributed to mental health harms by deploying addictive algorithms and weak moderation to keep users engaged.

The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.

Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be dragged into testimony. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.

Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many of them linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Meta’s ‘Vibes’ feed lets users scroll and remix entirely AI-generated videos

Meta Platforms has introduced Vibes, a new short-form video feed built entirely around AI-generated content, available within its Meta AI app and on the meta.ai website.

The feed allows users to browse videos generated by creators and communities, create videos from scratch via text prompts or uploaded visual elements, and remix existing videos by adding music or changing styles. Users can then publish these clips to the Vibes feed or cross-post them to Instagram Stories, Facebook, and Reels.

Meta says the goal is to make the Meta AI app a hub for creative video generation: ‘You can bring your ideas to life … or remix a video from the feed to make it your own.’ While Meta noted the feature is launching as a preview, it also points to broader ambitions in generative video as part of its AI strategy.

However, the feature is already drawing scepticism in media commentary. Early feedback has labelled some of the feed’s output ‘AI slop’: mass-produced synthetic videos that lack authentic human creativity, fuelling questions about quality and user demand.

Meta’s timing comes amid heavy investment in its AI efforts and a drive to monetise generative video content and new creator tools. The company sees this as more than an experiment: potentially a new vector for engagement and distribution inside its social ecosystem.

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Meta changes WhatsApp terms to block third-party AI assistants

Meta-owned WhatsApp has updated the terms of its Business API to forbid general-purpose AI chatbots from being hosted or distributed via its platform. The change will take effect on 15 January 2026.

Under the revised terms, WhatsApp will not allow providers of AI or machine-learning technologies, including large language models, generative AI platforms, or general-purpose AI assistants, to use the WhatsApp Business Solution when such technologies are the primary functionality being provided.

Meta says the Business API was designed for companies to communicate with their customers, not as a distribution channel for standalone AI assistants. The company emphasises that this update does not affect businesses using AI for defined functions like customer support, reservations or order tracking.

The move is significant for the AI ecosystem. Several startups and major players, including OpenAI (ChatGPT) and Perplexity AI, had offered their assistants via WhatsApp. These providers will now have to rethink how they integrate with or distribute on WhatsApp.

Meta also notes that the volume of messages from these chatbots imposed strain on WhatsApp’s infrastructure and deviated from the intended business-to-customer messaging model. Furthermore, by limiting such usage, Meta retains stronger control over how its platform is monetised.

For third-party AI providers, the implication is clear: WhatsApp will no longer serve as a platform for generic assistants but rather for business workflows or task-specific bots. This redefinition realigns the platform’s strategy and draws a clearer boundary between enterprise usage and public-facing AI services.

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups, given the DPC’s role in overseeing compliance with the EU’s General Data Protection Regulation (GDPR).

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

Meta champions open hardware to power the next generation of AI data centres

US tech giant Meta believes open hardware will define the future of AI data centre infrastructure. Speaking at the Open Compute Project (OCP) Global Summit, the company outlined a series of innovations designed to make large-scale AI systems more efficient, sustainable, and collaborative.

Meta, one of the OCP’s founding members, said open source hardware remains essential to scaling the physical infrastructure required for the next generation of AI.

During the summit, Meta joined industry peers in supporting OCP’s Open Data Center Initiative, which calls for shared standards in power, cooling, and mechanical design.

The company also unveiled a new generation of network fabrics for AI training clusters, integrating NVIDIA’s Spectrum Ethernet to enable greater flexibility and performance.

As part of the effort, Meta became an initiating member of Ethernet for Scale-Up Networking, aiming to strengthen connectivity across increasingly complex AI systems.

Meta further introduced the Open Rack Wide (ORW) form factor, an open source data rack standard optimised for the power and cooling demands of modern AI.

Built on ORW specifications, AMD’s new Helios rack was presented as the most advanced AI rack yet, embodying the shift toward interoperable and standardised infrastructure.

Meta also showcased new AI hardware platforms built to improve performance and serviceability for large-scale generative AI workloads.

Sustainability remains central to Meta’s strategy. The company presented ‘Design for Sustainability’, a framework to reduce hardware emissions through modularity, reuse, and extended lifecycles.

It also shared how its Llama AI models help track emissions across millions of components. Meta said it will continue to champion open hardware as it builds the next generation of AI data centres.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.
