EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data, hindering oversight of systemic online risks. TikTok also faces further scrutiny over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA-related proceedings to date; none has concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI brings conversational edits to Instagram Stories

Instagram is rolling out generative AI editing for Stories, expanding June’s tools with smarter prompts and broader effects. Type what you want removed or changed, and Meta AI does it. Think conversational edits, similar to Google Photos.

New controls include an Add Yours sticker for sharing your custom look with friends. A Presets browser shows available styles at a glance. Seasonal effects launch for Halloween, Diwali, and more.

Restyle Video brings preset effects to short clips, with options to add flair or remove objects. Edits aim to be fast, fun, and reversible. Creativity first, heavy lifting handled by AI.

Text gets a glow-up: Instagram is testing AI restyle for captions. Pick built-ins like ‘chrome’ or ‘balloon,’ or prompt Meta AI for custom styles.

Meta AI hasn’t wowed Instagram users, but this could change sentiment. The pitch: fewer taps, better results, and shareable looks. If it sticks, creating Stories becomes meaningfully easier.

Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Teen protections already include PG-13-guided responses and restrictions on sensitive discussions, such as self-harm or eating disorders. The company said it also uses AI detection systems to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

EU warns Meta and TikTok over transparency failures

The European Commission has found that Meta and TikTok violated key transparency obligations under the EU’s Digital Services Act (DSA). According to preliminary findings, both companies failed to provide adequate data access to researchers studying public content on their platforms.

The Commission said Facebook, Instagram, and TikTok imposed ‘burdensome’ conditions that left researchers with incomplete or unreliable data, hampering efforts to investigate the spread of harmful or illegal content online.

Meta faces additional accusations of breaching the DSA’s rules on user reporting and complaints. The Commission said the ‘Notice and Action’ systems on Facebook and Instagram were not user-friendly and contained ‘dark patterns’: manipulative design choices that discouraged users from reporting problematic content.

Moreover, Meta allegedly failed to give users sufficient explanations when their posts or accounts were removed, undermining transparency and accountability requirements set by the law.

Both companies have the opportunity to respond before the Commission issues final decisions. However, if the findings are confirmed, Meta and TikTok could face fines of up to 6% of their global annual revenue.

The EU executive also announced new rules, effective next week, that will expand data access for ‘vetted’ researchers, allowing them to study internal platform dynamics and better understand how large social media platforms shape online information flows.

Zuckerberg to testify in landmark trial over social media’s harm to youth

A US court has ruled that Meta CEO Mark Zuckerberg must appear and testify in a high-stakes trial over social media’s effects on children and adolescents. The case, brought by parents and school districts, alleges that platforms contributed to mental health harms by deploying addictive algorithms and weak moderation in pursuit of user engagement.

The plaintiffs argue that platforms including Facebook, Instagram, TikTok and Snapchat failed to protect young users, particularly through weak parental controls and design choices that encourage harmful usage patterns. They contend that the executives and companies neglected risks in favour of growth and profits.

Meta had argued that such platforms are shielded from liability under US federal law (Section 230) and that high-level executives should not be dragged into testimony. But the judge rejected those defences, saying that hearing directly from executives is integral to assessing accountability and proving claims of negligence.

Legal experts say the decision marks an inflection point: social media’s architecture and leadership may now be put under the microscope in ways previously reserved for sectors like tobacco and pharmaceuticals. The trial could set a precedent for how tech chief executives are held personally responsible for harms tied to platform design.

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Additionally, Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Meta’s ‘Vibes’ feed lets users scroll and remix entirely AI-generated videos

Meta Platforms has introduced Vibes, a new short-form video feed built entirely around AI-generated content, available within its Meta AI app and on the meta.ai website.

The feed allows users to browse videos generated by creators and communities, start videos from scratch via text prompts or upload visual elements, and remix existing videos by adding music or changing styles. Users can then publish these clips to the Vibes feed or cross-post to Instagram Stories, Facebook, and Reels.

Meta says the goal is to make the Meta AI app a hub for creative video generation: ‘You can bring your ideas to life … or remix a video from the feed to make it your own.’ While Meta noted the feature is launching as a preview, it also points to broader ambitions in generative video as part of its AI strategy.

However, media commentary already reflects scepticism. Early feedback has labelled some of the feed’s output ‘AI slop’: mass-produced synthetic videos that lack authentic human creativity, fuelling questions about quality and user demand.

Meta’s timing comes amid heavy investment in its AI efforts and a drive to monetise generative video content and new creator tools. The company sees Vibes as more than an experiment: potentially a new vector for engagement and distribution within its social ecosystem.

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Meta changes WhatsApp terms to block third-party AI assistants

Meta-owned WhatsApp has updated the terms of its Business API to forbid general-purpose AI chatbots from being hosted or distributed via its platform. The change will take effect on 15 January 2026.

Under the revised terms, WhatsApp will not allow providers of AI or machine-learning technologies, including large language models, generative AI platforms, or general-purpose AI assistants, to use the WhatsApp Business Solution when such technologies are the primary functionality being provided.

Meta says the Business API was designed for companies to communicate with their customers, not as a distribution channel for standalone AI assistants. The company emphasises that this update does not affect businesses using AI for defined functions like customer support, reservations or order tracking.

The move is significant for the AI ecosystem. Several startups and major players had offered their assistants via WhatsApp, including OpenAI (ChatGPT) and Perplexity AI. These providers will now have to rethink how they integrate with or distribute on WhatsApp.

Meta also notes that the volume of messages from these chatbots strained WhatsApp’s infrastructure and deviated from the intended business-to-customer messaging model. Furthermore, by limiting such usage, Meta retains stronger control over how its platform is monetised.

For third-party AI providers, the implication is clear: WhatsApp will no longer serve as a platform for generic assistants but rather for business workflows or task-specific bots. This redefinition realigns the platform’s strategy and draws a clearer boundary between enterprise usage and public-facing AI services.

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups, given the DPC’s central role in overseeing compliance with the EU’s General Data Protection Regulation (GDPR).

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.
