UN invites leaders for AI governance dialogue

The co-chairs of the first Global Dialogue on AI Governance have invited member states and stakeholders to express interest in co-chairing thematic discussions at the meeting, which will take place in Geneva on 6–7 July 2026, alongside the ITU AI for Good Global Summit, as mandated by UN General Assembly resolution 79/325.

The discussions will be organised around four themes: the social, economic, ethical, cultural, linguistic, and technical implications of AI; bridging AI divides through capacity-building and digital access; safe, secure, and trustworthy AI, including interoperability between governance approaches; and human rights issues such as transparency, accountability, and human oversight.

Each thematic session will be jointly chaired by one member state and one stakeholder representative, with the aim of fostering multistakeholder exchanges on experiences, best practices, and policy cooperation. Governments are asked to nominate high-level representatives, while stakeholders are encouraged to nominate senior experts relevant to the selected theme.

Selected co-chairs will support dialogue design, facilitate exchanges, and contribute to inclusive and balanced participation.

According to the UN, the initiative aims to bring together diverse perspectives from governments, industry, academia and civil society. The process is intended to strengthen collaboration and inform future AI governance approaches.

G7 working group advances cybersecurity approach for AI systems

The German Federal Office for Information Security (BSI) has published guidance, developed by the G7 Cybersecurity Working Group, outlining the elements of a Software Bill of Materials (SBOM) for AI. The document aims to help both public and private sector stakeholders improve transparency in AI systems.

The guidance builds on a shared G7 vision introduced in 2025 and focuses on strengthening cybersecurity throughout the AI supply chain. It sets out baseline components that should be included in an AI SBOM to better track and understand system dependencies.

The document outlines seven baseline building blocks that should form part of an SBOM for AI, designed to improve visibility into how AI systems are built and how their components interact across the supply chain; a schematic sketch of the clusters follows their descriptions below.

At the foundation is a Metadata cluster, which records information about the SBOM itself, including who created it, which tools and formats were used, when it was generated, and how software dependencies relate to one another.

The framework then moves to System Level Properties, covering the AI system as a whole. This includes the system’s components, producers, data flows, intended application areas, and the processing of information between internal and external services.

A dedicated Models cluster focuses on the AI models embedded within the system, documenting details such as model identifiers, versions, architectures, training methods, limitations, licences, and dependencies. The goal is to make the origins and characteristics of models easier to trace and assess.

The document also introduces a Dataset Properties cluster to improve transparency into the data used throughout the AI lifecycle. It captures dataset provenance, content, statistical properties, sensitivity levels, licensing, and the tools used to create or modify datasets.

Beyond software and data, the framework includes an Infrastructure cluster that maps the software and hardware dependencies required to run AI systems, including links to hardware bills of materials where relevant.

Cybersecurity considerations are grouped under Security Properties, which document implemented safeguards such as encryption, access controls, adversarial robustness measures, compliance frameworks, and vulnerability references.

Finally, the framework proposes a Key Performance Indicators cluster that includes metrics related to both security and operational performance, including robustness, uptime, latency, and incident response indicators.
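To make the cluster structure concrete, the sketch below models a minimal SBOM-for-AI record as Python dataclasses. The seven cluster names follow the guidance as summarised above, but every field name and type is an illustrative assumption, not the official G7/BSI schema.

```python
# Illustrative sketch of the seven SBOM-for-AI clusters described above.
# Field names are assumptions inferred from the guidance summary,
# not the official G7/BSI schema.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Metadata:
    """Information about the SBOM document itself."""
    author: str                      # who created the SBOM
    tool: str                        # tool used to generate it
    sbom_format: str                 # exchange format identifier
    created: str                     # generation timestamp (ISO 8601)
    dependency_relationships: list[str] = field(default_factory=list)


@dataclass
class SystemLevelProperties:
    """The AI system as a whole: components, producers, data flows."""
    components: list[str]
    producers: list[str]
    data_flows: list[str]            # information passed between services
    intended_applications: list[str]


@dataclass
class ModelProperties:
    """One AI model embedded in the system."""
    identifier: str
    version: str
    architecture: str
    training_method: str
    limitations: list[str] = field(default_factory=list)
    licence: Optional[str] = None
    dependencies: list[str] = field(default_factory=list)


@dataclass
class DatasetProperties:
    """Data used across the AI lifecycle."""
    provenance: str
    content_description: str
    statistical_properties: dict[str, float] = field(default_factory=dict)
    sensitivity_level: Optional[str] = None
    licence: Optional[str] = None
    tooling: list[str] = field(default_factory=list)  # creation/modification tools


@dataclass
class Infrastructure:
    """Software and hardware dependencies needed to run the system."""
    software_dependencies: list[str] = field(default_factory=list)
    hardware_dependencies: list[str] = field(default_factory=list)
    hardware_bom_refs: list[str] = field(default_factory=list)  # links to hardware BOMs


@dataclass
class SecurityProperties:
    """Implemented safeguards and vulnerability references."""
    encryption: list[str] = field(default_factory=list)
    access_controls: list[str] = field(default_factory=list)
    adversarial_robustness: list[str] = field(default_factory=list)
    compliance_frameworks: list[str] = field(default_factory=list)
    vulnerability_refs: list[str] = field(default_factory=list)  # e.g. CVE IDs


@dataclass
class KeyPerformanceIndicators:
    """Security and operational metrics."""
    robustness_score: Optional[float] = None
    uptime_percent: Optional[float] = None
    latency_ms: Optional[float] = None
    incident_response_time_h: Optional[float] = None


@dataclass
class AISBOM:
    """The seven baseline clusters combined into one record."""
    metadata: Metadata
    system: SystemLevelProperties
    models: list[ModelProperties]
    datasets: list[DatasetProperties]
    infrastructure: Infrastructure
    security: SecurityProperties
    kpis: KeyPerformanceIndicators
```

Serialised to JSON (for instance via dataclasses.asdict), such a record could travel alongside a conventional software SBOM, letting downstream consumers query model provenance or dataset licensing much as they query package dependencies today.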

According to the paper, the objective is to provide practical direction that organisations can adopt to enhance visibility and manage risks linked to AI technologies. The framework is intended to support more secure development and deployment practices.

New South Wales criminalises AI sexual deepfakes

Australia’s New South Wales state has clarified that creating, sharing, or threatening to share sexually explicit images, videos, or audio of a person without consent is a criminal offence, including where the material has been digitally altered or generated using AI.

The state government strengthened protections in 2025 by amending the Crimes Act 1900 to cover digitally generated deepfakes. The law already applied to sexually explicit image material, but now also covers content created or altered by AI to place someone in a sexual situation they were never in.

The reforms mean that non-consensual sexual images or audio are covered regardless of how they were made. Threatening to create or share such material is also a criminal offence in New South Wales, with penalties of up to three years in prison, a fine of up to A$11,000, or both.

Courts can also order offenders to remove or delete the material. Failure to comply with such an order can result in up to two years’ imprisonment, a fine of up to A$5,500, or both.

The law operates alongside existing child abuse material offences. Under criminal law, any material depicting a person under 18 in a sexually explicit way can be treated as child abuse material, including AI-generated content.

Criminal proceedings against people under 16 can begin only with the approval of the Director of Public Prosecutions, a safeguard intended to ensure that only the most serious matters involving young people enter the criminal justice system.

Limited exemptions apply for proper purposes, including genuine medical, scientific, law enforcement, or legal proceedings-related purposes. A review of the law will take place 12 months after it comes into effect to assess how it is working and whether changes are needed.

The changes are intended to address the misuse of AI and deepfake technology to harass, shame, or exploit people through fake digital content. New South Wales says its criminal law works alongside national online safety frameworks, including the work of Australia’s eSafety Commissioner, as it seeks to keep privacy and consent protections aligned with emerging technologies.

Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

Texas’s complaint quoted a statement by Netflix co-founder and chairman Reed Hastings, in which he allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach from that of other major technology platforms.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing US scrutiny of major technology platforms’ data practices.

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than assessing the report individually. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest possible access, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The post suggests that the EDPS wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, implying that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

Dutch court backs Solvinity DigiD contract despite US data access fears

The District Court of The Hague has rejected an attempt by three Dutch citizens to block the government from renewing its contract with Solvinity, the company responsible for hosting and technically managing systems linked to DigiD.

The plaintiffs argued that Solvinity’s planned acquisition by US-based IT provider Kyndryl could place sensitive data from more than 16 million DigiD users under US jurisdiction, potentially exposing it to US authorities and creating risks to critical public services such as healthcare, pensions, taxes, and unemployment systems.

Despite these concerns, the court ruled in favour of the Dutch State, allowing the agreement to proceed. Judges did not accept arguments that the deal would immediately threaten data security or justify halting the contract.

The decision leaves further scrutiny to the Investment Assessment Office, which is reviewing national security risks linked to the acquisition. The case highlights ongoing tensions around digital sovereignty and data protection in the Netherlands.

French CNIL hosts global privacy talks in Paris

The French Commission Nationale de l’Informatique et des Libertés will host the G7 roundtable of data protection and privacy authorities in June 2026. The meeting aims to strengthen international cooperation amid rapid digital and AI developments.

The roundtable, created in 2021, brings together data protection authorities from G7 countries and the EU. It focuses on sharing legal and technological developments and encouraging coordinated approaches to common challenges.

Key areas of work for 2026 include emerging technologies, enforcement cooperation and the free flow of data. The discussions are expected to address growing concerns about data protection amid expanding AI use.

The CNIL said the French presidency will prioritise dialogue and practical cooperation, aiming to support global governance that respects fundamental rights. The roundtable will take place in Paris.

Major publishers sue Meta over Llama AI training

Meta and Mark Zuckerberg are facing a new copyright lawsuit from five major publishers: Hachette, Macmillan, McGraw-Hill, Elsevier, and Cengage, joined by author Scott Turow. The plaintiffs accuse the company of using millions of copyrighted books, journal articles, textbooks, and scholarly works to train its Llama AI models without permission. Filed in the US District Court for the Southern District of New York (Manhattan federal court), the complaint seeks monetary compensation, an injunction, and the destruction of allegedly infringing copies held by Meta.

The complaint argues that Meta’s AI strategy relied on protected works from trade, education, and academic publishing, including content allegedly taken from pirate libraries such as LibGen and Anna’s Archive, as well as broad web scrapes containing subscription-only material. The publishers also claim Zuckerberg personally directed or authorised the conduct, a charge Meta is expected to contest vigorously.

At the centre of the lawsuit is a policy question now shaping AI governance worldwide: whether large-scale copying for model training can be justified as fair use, or whether it requires permission, transparency, and compensation. Meta and other AI developers argue that training enables transformative innovation, while rights holders say commercial models are being built from creative and scholarly labour without licensing. A previous Meta win in a case brought by authors showed that courts may accept fair-use arguments, but only where plaintiffs fail to prove clear market harm.

Either way, the publishers are trying to make that market-harm argument harder to dismiss. Their filing describes Llama as an ‘infinite substitution machine’, capable of generating long-form books, educational materials, and scholarly-style outputs that may compete with human-authored works. The case also points to the alleged erosion of licensing markets, arguing that harm occurs not only when AI outputs imitate books, but also when copyrighted works are copied into commercial training pipelines without consent.

The US Copyright Office’s 2025 report said that fair use in generative AI training requires case-by-case analysis, with market effects and the source of the training material playing central roles. In the EU, the AI Act has shifted the debate toward transparency by requiring general-purpose AI providers to publish summaries of their training data and to comply with EU copyright rules, including rights reservations for text and data mining.

Why does it matter?

The Meta case is a manifestation of a global shift in digital governance: AI copyright disputes are no longer isolated lawsuits, but part of a broader effort to define lawful data supply chains. Anthropic’s $1.5 billion settlement over pirated books, the EU’s training-data transparency requirements, and continuing legal disputes in the US all point in the same direction: courts and regulators are asking whether AI innovation can remain competitive while respecting the rights, labour, and markets that make high-quality knowledge possible.

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.
