New Mexico wins major case against Meta

A jury has found Meta Platforms liable for misleading consumers and endangering children in a landmark case brought by the New Mexico Department of Justice. The verdict marks the first time a US state has won at trial against a major tech firm over child safety concerns.

Jurors awarded civil penalties totalling $375 million after finding violations of consumer protection law. The case focused on claims that platform design choices exposed young users to harmful and exploitative content.

Evidence presented in court included internal company documents and testimony suggesting awareness of risks to children. Allegations centred on failures to prevent exploitation, as well as features linked to addictive behaviour and exposure to harmful material.

Further proceedings in the US are scheduled, with authorities seeking additional penalties and mandated changes to platform safety measures. Proposed actions include stronger age verification and improved protections for minors online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.
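As a rough illustration of the policy-as-prompt pattern described above, where safety requirements are supplied as text at inference time rather than baked into model weights, a minimal sketch might look like the following. The policy wording, message format, and `run_model` stub are assumptions for illustration only; the actual prompt format expected by gpt-oss-safeguard may differ.

```python
# Sketch of a policy-as-prompt safety classifier, in the spirit of
# gpt-oss-safeguard: the policy is plain text passed at inference time,
# so developers can revise rules without retraining a model.
# The message layout and the run_model stub are hypothetical.

TEEN_POLICY = """\
Label the content ALLOW or BLOCK for a teen audience.
BLOCK if it depicts graphic violence, sexual content,
dangerous challenges, or age-restricted goods.
Answer with a single word: ALLOW or BLOCK."""

def build_messages(policy: str, content: str) -> list[dict]:
    """Pair the policy (system role) with the content to classify (user role)."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]

def classify(content: str, run_model) -> str:
    """Run the policy-conditioned model and normalise its one-word verdict."""
    reply = run_model(build_messages(TEEN_POLICY, content))
    return "BLOCK" if "BLOCK" in reply.upper() else "ALLOW"

if __name__ == "__main__":
    # Stub standing in for a real model call, for demonstration only.
    fake_model = lambda msgs: "BLOCK" if "knife" in msgs[1]["content"] else "ALLOW"
    print(classify("a dangerous knife-throwing challenge", fake_model))  # BLOCK
```

The same `classify` helper could serve both of the uses the article mentions: called synchronously for real-time filtering, or mapped over stored content for offline analysis.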

OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Submissions will be assessed by dedicated safety and security teams and may be reassigned between them depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.

Cross-device browsing arrives with Samsung Browser for Windows

Samsung Electronics has launched Samsung Browser for Windows, expanding its mobile browsing experience to desktop users. The release focuses on cross-device continuity, allowing users to resume browsing sessions seamlessly between smartphones and PCs.

Users can move between devices without losing progress, extending beyond basic bookmark and history syncing. Integration with Samsung Pass also enables secure storage of personal data, simplifying logins and autofill across websites.

A key addition is the introduction of agentic AI capabilities developed in partnership with Perplexity. The built-in assistant understands page context and user activity, helping manage tabs, summarise content, and deliver more precise search results without leaving the browser.

Availability covers Windows 10 and 11 devices, while AI features are currently limited to the US and South Korea. A wider rollout is expected as Samsung continues to expand its intelligent browsing ecosystem.

ICO and Ofcom issue guidance on age assurance and online safety

The Information Commissioner’s Office and Ofcom have issued a joint statement outlining how age assurance measures should align with online safety and data protection requirements.

The guidance focuses on protecting children from online harm rather than treating safety and privacy as separate obligations, reflecting closer coordination between the two regulators.

The statement is directed at digital services that are likely to be accessed by children and fall within the scope of the Online Safety Act and UK data protection laws.

It provides a practical overview of existing policies, helping organisations understand how to meet both regulatory frameworks while implementing age assurance technologies.

Rather than introducing new rules, the guidance clarifies how current requirements interact in practice. It highlights the importance of designing systems that both verify users’ ages and safeguard personal data, ensuring that safety measures do not undermine privacy protections.

The approach encourages organisations to integrate compliance into service design instead of addressing obligations separately.

By aligning regulatory expectations, the ICO and Ofcom aim to support organisations in delivering safer online environments for children while maintaining strong data protection standards.

The joint effort signals a broader move towards coordinated digital regulation, where safety and privacy are addressed together to reflect the complexities of modern online services.

EU watchdogs launch GDPR transparency sweep

The European Data Protection Board has launched a Europe-wide enforcement initiative to examine transparency and information obligations under the GDPR. The programme forms part of its Coordinated Enforcement Framework for 2026.

Twenty-five national data protection authorities will assess how organisations inform people about the processing of their personal data. Reviews will involve formal investigations and fact-finding exercises across multiple sectors.

Authorities plan to exchange findings later in the year to build a shared picture of compliance trends. A consolidated report will guide follow-up measures at both the national and EU levels.

The framework supports closer regulatory cooperation and consistent GDPR enforcement. Previous coordinated actions examined cloud services, data protection officers, access rights and the right to erasure.

EU privacy bodies back cybersecurity overhaul

The European Data Protection Board and the European Data Protection Supervisor have backed proposals to strengthen the EU cybersecurity law while safeguarding personal data. Their joint opinion addresses reforms to the Cybersecurity Act and updates to the NIS2 Directive.

Regulators support plans to reinforce the mandate of the European Union Agency for Cybersecurity and expand cybersecurity certification across digital supply chains. Clearer coordination between ENISA and privacy authorities is seen as essential for consistent oversight.

The opinion also calls for limits on the processing of personal data and for prior consultation on technical rules affecting privacy. Certification schemes should align with the GDPR and help organisations demonstrate compliance.

Additional recommendations include broader cybersecurity skills training and a single EU entry point for personal data breach notifications. Proposed changes would also classify digital identity wallet providers as essential entities under EU security rules.

Luxembourg court overturns major GDPR fine against Amazon

The Administrative Court of Luxembourg has annulled a €746 million GDPR fine imposed on Amazon, citing procedural failings by the national regulator. Judges ruled that authorities did not properly assess the company’s level of fault before setting the penalty.

The sanction was issued in July 2021 by the National Commission for Data Protection over alleged breaches of the GDPR; Amazon's appeal was decided in March 2025. While the violations were upheld, the court found the watchdog had failed to determine whether the conduct was intentional or negligent.

Judges said European case law requires a clear evaluation of responsibility before fines are calculated. The ruling concluded that the penalty was imposed in an almost automatic manner without the necessary legal analysis.

The case will now be reassessed by the Luxembourgish regulator. Amazon said it welcomed the decision and maintained it acted in good faith while working with authorities on privacy compliance.

Canada’s watchdog highlights surge in AI impersonation scams

A growing wave of AI-driven scams is prompting warnings from Competition Bureau Canada, as fraudsters increasingly impersonate government officials through deepfake technology and fake websites.

Authorities report a steady rise in complaints linked to deceptive schemes designed to exploit public trust.

Scammers are using synthetic media to mimic well-known political figures, including senior government officials, to extract personal information and spread misleading narratives.

Such tactics demonstrate how AI tools are being weaponised for social engineering rather than for legitimate communication.

The trend reflects a broader shift in digital fraud, where increasingly sophisticated techniques blur the line between authentic and fabricated content. As synthetic identities become more convincing, individuals find it harder to verify the legitimacy of online interactions and official communications.

In response, authorities in Canada are intensifying awareness efforts during Fraud Prevention Month, offering expert guidance on identifying and avoiding scams.

The development underscores the urgent need for stronger safeguards and public education to counter evolving AI-enabled threats.

IWF report reveals a rapid growth of synthetic child abuse material online

A surge in AI-generated child sexual abuse material has raised urgent concerns across Europe, with the Internet Watch Foundation reporting record levels of harmful content online.

Findings of the IWF report indicate that AI is accelerating both the scale and severity of abuse, transforming how offenders create and distribute illicit material.

Data from 2025 reveals a sharp increase in AI-generated imagery and video, with over 8,000 cases identified and a dramatic rise in highly severe content.

Synthetic videos have grown at an unprecedented rate, reflecting a shift away from traditional still imagery as emerging tools are used to produce increasingly realistic and extreme scenarios.

Analysis of offender behaviour highlights a disturbing trend toward automation and accessibility.

Discussions on dark web forums suggest that future agentic AI systems may enable the creation of fully produced abusive content with minimal technical skill. The integration of audio and image manipulation further deepens risks, particularly where real children’s likenesses are involved.

Calls for regulatory action are intensifying as policymakers in the EU debate reforms to the Child Sexual Abuse Directive.

Advocacy groups emphasise the need for comprehensive criminalisation, alongside stronger safety-by-design requirements, arguing that technological innovation must not outpace child protection frameworks.
