Meta tests compromise plan in EU WhatsApp AI access dispute

European Commission officials are examining whether Meta’s policy on access to WhatsApp for AI providers may raise competition concerns in the European Economic Area.

Changes to the WhatsApp Business Solution terms are at the centre of the investigation, particularly as they affect how third-party AI providers can offer services on the platform. The Commission is assessing whether the policy could limit access for competing AI services and reduce choice for users and businesses.

Messaging platforms are becoming important distribution channels for AI-powered services. As chatbots and AI assistants become more integrated into everyday communication tools, access to widely used platforms such as WhatsApp may become an important factor in competition between providers.

Commission officials have said they will examine whether Meta’s conduct complies with EU competition rules. Opening an investigation does not mean that the Commission has reached a conclusion or found an infringement.

The broader EU scrutiny of large digital platforms is increasingly focused on how access to infrastructure, services and user ecosystems is managed as AI tools become more widely adopted.

Why does it matter?

Competition questions are expanding into AI distribution channels. Messaging platforms can shape which AI services reach users and businesses at scale, making access rules an important part of the emerging AI market. The outcome could influence how major platforms design access policies for third-party AI providers while regulators seek to preserve competition and user choice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

Meta gives parents deeper insight into teen algorithms

Meta has introduced new supervision features designed to give parents greater visibility into the content shaping teenagers’ experiences on Instagram.

The updated tools allow parents and guardians to view the general topics their teens engage with through Instagram’s ‘Your Algorithm’ feature, which helps shape recommendations on Reels and Explore. Meta said parents in selected markets will soon receive notifications when teens add new interests, such as basketball, photography or musicals, helping explain why recommended content may change over time.

The company said the feature remains subject to existing teen safety protections and content restrictions already applied to Teen Accounts, including limits on certain content for users aged 13 and above and enforcement of Meta’s Community Standards.

Meta has also consolidated supervision tools for Instagram, Facebook, Messenger and Meta Horizon into a single Family Centre hub. Parents can now manage supervised accounts, safety settings and invitations across multiple apps without switching between separate platforms.

Meta said the number of US teens enrolled in supervision on Instagram has more than doubled over the past year. Additional updates planned for the coming months include aggregated activity insights, such as total time spent across Meta’s apps, to give families broader visibility into teen online habits.

Why does it matter?

The update shows how major platforms are responding to pressure for greater transparency around their recommendation systems, particularly regarding teenagers. While the tools do not reveal the full logic of Instagram’s algorithm, they give parents more visibility into the interest categories shaping teen content feeds and create another layer of oversight around personalised recommendations, screen time and online safety.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quoted a statement by Netflix co-founder and Chairman Reed Hastings in which he allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the USA.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Child safety concerns dominate Europe’s digital agenda

A growing majority of Europeans believe stronger online protections for children and young people should remain a top policy priority, according to new findings from the Special Eurobarometer on the Digital Decade.

The European Commission said 92% of Europeans consider further action to protect children and young people online a top priority, reflecting sustained concern over the impact of digital platforms on younger users.

Mental health risks linked to social media ranked among the biggest concerns, with 93% of respondents calling for stronger protections. Cyberbullying and online harassment, along with calls for better age-restriction mechanisms for inappropriate content, were also highlighted by 92% of respondents.

Concerns over AI and online manipulation also remain high. The survey found that 39% of respondents cited privacy or data protection as a barrier to using AI, followed by accuracy or incorrect information at 36% and ethical issues or misuse of generative AI tools at 32%.

Around 87% of Europeans agreed that online manipulation, including disinformation, foreign interference, AI-generated content and deepfakes, poses a threat to democratic processes. Another 80% said AI development should be carefully regulated to ensure safety, even if oversight places constraints on developers.

The findings also show continuing concern over online platforms. Europeans reported being personally affected by fake news and disinformation, misuse of personal data and insufficient protections for minors, with concerns over fake news and child protection showing the sharpest increases since 2024.

Why does it matter?

The findings show that public concern over digital technologies in Europe is increasingly centred on safety, rights and accountability, particularly for children and young people. They also suggest that trust in platforms and AI systems will depend not only on innovation and access, but also on visible safeguards against manipulation, harmful content, privacy risks, and weak protections for minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report, noting that the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission moves to standardise AI transparency obligations

The European Commission has published draft guidelines outlining how transparency obligations under Article 50 of the AI Act should be applied across certain AI systems. The guidance is intended to help competent authorities, providers and deployers ensure compliance in a consistent, effective and uniform manner.

Prepared in parallel with a separate Code of Practice on the marking and labelling of AI-generated content, the draft guidelines clarify the scope of legal obligations and address areas not covered by the code. The focus is on helping users identify when they are interacting with AI systems or encountering AI-generated content.

A targeted consultation is open until 3 June, allowing stakeholders to provide feedback on the draft framework. The consultation will inform the final version of the guidelines, which are intended to support more consistent implementation and enforcement of Article 50 obligations across the EU.

The initiative reflects a broader regulatory push in the European Union to strengthen oversight of AI transparency, particularly as generative AI tools become more widely used in content creation, communication and digital services.

Why does it matter?

Transparency obligations are central to the AI Act’s approach to trust in digital environments. Clear disclosure and labelling requirements can help users understand when they are interacting with AI systems or encountering AI-generated material, reducing risks linked to manipulation, misinformation and misplaced reliance on machine-generated outputs.

Consistent guidance also matters for legal certainty. Providers and deployers need clearer expectations on how Article 50 applies in practice, while regulators need a common basis for enforcement across member states.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

Pakistan expands media literacy efforts with UNESCO support

UNESCO and Pakistan have launched ‘Digital Citizens for Peace’, a media literacy initiative designed to counter hate speech and disinformation by training young journalists and content creators. The programme responds to Pakistan’s widening gap between internet connectivity and critical digital literacy, as social media increasingly becomes the main source of news for millions of users.

Through immersive Media and Information Literacy camps, mentorship programmes, and open-access educational toolkits, the initiative aims to strengthen responsible digital engagement and encourage fact-based content creation across the country.

The project also seeks to create long-term institutional impact by integrating media literacy resources into universities and community education programmes.

UNESCO and the Interactive Resource Centre are developing video-based educational tools to support the broader National Media and Information Literacy Roadmap of Pakistan, helping young people navigate digital platforms more critically while promoting social cohesion and responsible online participation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the UK’s Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Swiss media groups launch responsible AI journalism framework

Swiss media organisations have adopted a national code of conduct for the responsible use of AI, aiming to strengthen transparency, copyright protection and public trust in journalism.

The initiative is backed by major Swiss publishing groups, private radio and television organisations, the Swiss Broadcasting Corporation and the national news agency Keystone-ATS. It is based on the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

The code states that media companies and their employees remain responsible for all published editorial content, whether produced by journalists or with the support of AI systems. It also commits media organisations to train staff in AI use, respect copyright, follow data protection rules and take steps to prevent the spread of false information.

Swiss media groups also agreed to inform the public transparently about their use of AI, including through dedicated information pages, and to introduce binding marking obligations for AI-supported content. The framework is designed as a self-regulatory tool at a time when public concern over AI-generated content remains high.

To support implementation, the code provides for a two-tier reporting and control mechanism: questions and complaints will first be handled by the relevant departments within media companies, while an independent AI ombudsperson will act as a second instance for serious or unresolved cases and publish an annual report.

Swiss President Guy Parmelin said AI could strengthen journalism if used responsibly and transparently, while warning that fake news threatens journalistic credibility and social cohesion. Legislative changes needed to implement the Council of Europe convention in Switzerland are expected by the end of 2026.

Why does it matter?

The Swiss code shows how media organisations are moving to set AI governance standards before legal obligations fully take shape. Its significance lies in linking AI-assisted journalism with editorial responsibility, transparency, copyright, data protection and complaint mechanisms, rather than treating AI labelling as the only issue. The model could influence how other media sectors balance innovation with public trust and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram pulls the plug on encrypted chats

Instagram will no longer support end-to-end encrypted chats from 8 May 2026, ending an optional privacy feature for some direct messages on the platform.

Users affected by the change are being prompted to download any messages or media from encrypted chats that they wish to keep before the feature is removed. Instagram’s help page says users may need to update the app to access or download their end-to-end encrypted chats.

End-to-end encryption allows only the people in a conversation to read messages or hear calls, with messages protected by encryption keys linked to authorised devices. On Instagram, however, encrypted chats were an optional feature rather than the default for all direct messages.

After 8 May 2026, users will no longer be able to send or receive end-to-end encrypted messages or calls on Instagram. The help page also notes that users can still report messages from encrypted chats and that shared content may still be forwarded outside an encrypted conversation.

The change marks a rollback of a privacy feature on one of Meta’s major social platforms, even as end-to-end encryption remains central to debates over secure communications, platform safety and user confidentiality.

Why does it matter?

End-to-end encryption is widely seen as a core privacy protection because it limits access to message content, including by the platform itself. Its removal from Instagram encrypted chats raises questions about how major platforms prioritise privacy features, user safety, product complexity and interoperability across their messaging services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!