New South Wales criminalises AI sexual deepfakes

Australia’s New South Wales state has clarified that creating, sharing, or threatening to share sexually explicit images, videos, or audio of a person without consent is a criminal offence, including where the material has been digitally altered or generated using AI.

The state government strengthened protections in 2025 by amending the Crimes Act 1900 to cover digitally generated deepfakes. The law already applied to sexually explicit image material, but now also covers content created or altered by AI to place someone in a sexual situation they were never in.

The reforms mean that non-consensual sexual images or audio are covered regardless of how they were made. Threatening to create or share such material is also a criminal offence in New South Wales, with penalties of up to three years in prison, a fine of up to A$11,000, or both.

Courts can also order offenders to remove or delete the material. Failure to comply with such an order can result in up to two years’ imprisonment, a fine of up to A$5,500, or both.

The law operates alongside existing child abuse material offences. Under criminal law, any material depicting a person under 18 in a sexually explicit way can be treated as child abuse material, including AI-generated content.

Criminal proceedings against people under 16 can begin only with the approval of the Director of Public Prosecutions, which is intended to ensure that only the most serious matters involving young people enter the criminal justice system.

Limited exemptions apply for proper purposes, including genuine medical, scientific, law enforcement, or legal proceedings-related purposes. A review of the law will take place 12 months after it comes into effect to assess how it is working and whether changes are needed.

The changes are intended to address the misuse of AI and deepfake technology to harass, shame, or exploit people through fake digital content. New South Wales says its criminal law works alongside national online safety frameworks, including the work of Australia’s eSafety Commissioner, as it seeks to keep privacy and consent protections aligned with emerging technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quotes a statement by Netflix co-founder and chairman Reed Hastings, who allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the USA.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

European Ombudsman criticises Commission over X risk report access

The European Ombudswoman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudswoman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudswoman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS, it seems, wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Dutch court backs Solvinity DigiD contract despite US data access fears

The District Court of The Hague has rejected an attempt by three Dutch citizens to block the government from renewing its contract with Solvinity, the company responsible for hosting and technically managing systems linked to DigiD.

The plaintiffs argued that Solvinity’s planned acquisition by US-based IT provider Kyndryl could place sensitive data from more than 16 million DigiD users under US jurisdiction, potentially exposing it to US authorities and creating risks to critical public services such as healthcare, pensions, taxes, and unemployment systems.

Despite these concerns, the court ruled in favour of the Dutch State, allowing the agreement to proceed. Judges did not accept arguments that the deal would immediately threaten data security or justify halting the contract.

The decision leaves further scrutiny to the Investment Assessment Office, which is reviewing national security risks linked to the acquisition. The case highlights ongoing tensions around digital sovereignty and data protection in the Netherlands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

French CNIL hosts global privacy talks in Paris

The French Commission Nationale de l’Informatique et des Libertés will host the G7 roundtable of data protection and privacy authorities in June 2026. The meeting aims to strengthen international cooperation amid rapid digital and AI developments.

The roundtable, created in 2021, brings together data protection authorities from G7 countries and the EU. It focuses on sharing legal and technological developments and encouraging coordinated approaches to common challenges.

Key areas of work for 2026 include emerging technologies, enforcement cooperation and the free flow of data. The discussions are expected to address growing concerns about data protection amid expanding AI use.

The CNIL stated that the French presidency will prioritise dialogue and practical cooperation, aiming to support global governance that respects fundamental rights. The event will take place in Paris.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Major publishers sue Meta over Llama AI training

Meta and Mark Zuckerberg are facing a new copyright lawsuit from five major publishers, Hachette, Macmillan, McGraw-Hill, Elsevier, and Cengage, along with author Scott Turow. The plaintiffs accuse the company of using millions of copyrighted books, journal articles, textbooks, and scholarly works to train its Llama AI models without permission. Filed in the US District Court for the Southern District of New York (Manhattan federal court), the complaint seeks monetary compensation, an injunction, and the destruction of allegedly infringing copies held by Meta.

The complaint argues that Meta’s AI strategy relied on protected works from trade, education, and academic publishing, including content allegedly taken from pirate libraries such as LibGen and Anna’s Archive, as well as broad web scrapes containing subscription-only material. The publishers also claim Zuckerberg personally directed or authorised the conduct, a charge Meta is expected to contest vigorously.

At the centre of the lawsuit is a policy question now shaping AI governance worldwide: whether large-scale copying for model training can be justified as fair use, or instead requires permission, transparency, and compensation. Meta and other AI developers argue that training enables transformative innovation, while rights holders say commercial models are being built from creative and scholarly labour without licensing. A previous Meta win in a case brought by authors showed that courts may accept fair-use arguments, but only where plaintiffs fail to prove clear market harm.

Either way, the publishers are trying to make that market-harm argument harder to dismiss. Their filing describes Llama as an ‘infinite substitution machine’, capable of generating long-form books, educational materials, and scholarly-style outputs that may compete with human-authored works. The case also points to the alleged erosion of licensing markets, arguing that harm occurs not only when AI outputs imitate books, but also when copyrighted works are copied into commercial training pipelines without consent.

The US Copyright Office’s 2025 report said that fair use in generative AI training requires case-by-case analysis, with market effects and the source of the training material playing central roles. In the EU, the AI Act has shifted the debate toward transparency by requiring general-purpose AI providers to publish summaries of their training data and to comply with the EU copyright rules, including rights reservations for text and data mining.

Why does it matter?

The Meta case is the manifestation of a global shift in digital governance: AI copyright disputes are no longer isolated lawsuits, but part of a broader effort to define lawful data supply chains. Anthropic’s $1.5 billion settlement over pirated books, the EU’s training-data transparency regulation, and continuing legal disputes in the US all point in the same direction: courts and regulators are asking whether AI innovation can remain competitive while respecting the rights, labour, and markets that make high-quality knowledge possible.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO and Oxford University launch global AI course for courts

A free online course preparing judicial systems for the growing role of AI in legal decision-making has been launched by UNESCO in partnership with the University of Oxford.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet, many legal professionals lack structured guidance to evaluate such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN prepares first Global Dialogue on AI governance ahead of Geneva meeting

The United Nations is advancing preparations for the first Global Dialogue on Artificial Intelligence Governance, set to take place in Geneva on 6–7 July 2026 alongside the AI for Good Summit.

Speaking at a UN Geneva press briefing, Egriselda López, Permanent Representative of El Salvador and co-chair of the Dialogue, said the initiative was established by UN member states as a universal forum to discuss AI governance. The process is intended to bring together governments and stakeholders with the aim of producing tangible outcomes.

López said the initial meeting will be structured around thematic clusters, including one focusing on AI opportunities and implications and another addressing the digital divide. She added that consultations with member states and stakeholders are ongoing to ensure an inclusive format for the discussions.

Rein Tammsaar, Permanent Representative of Estonia and co-chair of the Dialogue, said the forum aims to connect existing AI initiatives and best practices from around the world. He stressed the importance of interoperability and coordination, noting that the Dialogue seeks to create synergies rather than duplicate existing efforts.

According to Tammsaar, additional thematic areas will include interoperability, safety, and human rights. While human rights are expected to be a cross-cutting issue, stakeholders have also called for the topic to be addressed as a standalone theme.

Amandeep Gill, UN Secretary-General’s Envoy on Technology, described the initiative as part of a broader approach to ensuring that AI benefits humanity as a whole. He said the Dialogue is designed as a ‘dialogue of dialogues’, enabling governments, experts and other stakeholders to exchange knowledge in a rapidly evolving technological environment.

Gill also highlighted the role of the Independent International Scientific Panel on AI, which is expected to present its findings at the Geneva meeting. He noted that global capacity to both use and govern AI remains uneven, underlining the need to address disparities between countries.

Officials emphasised that the Dialogue is intended to complement existing initiatives rather than centralise governance efforts. It will focus on issues such as safety and human rights, while discussions on military uses of AI fall outside its mandate.

A second Global Dialogue on AI Governance meeting is planned for May 2027 in New York, as part of ongoing efforts to develop a more coordinated and inclusive global approach to AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot