
10-14 November 2025
HIGHLIGHT OF THE WEEK
Strengthening democracy in the digital age: The EU’s new ‘Democracy Shield’
The European Commission has unveiled the European Democracy Shield, a new initiative aimed at strengthening and protecting democracy across the European Union. The Shield lays out a wide range of measures designed to empower citizens, safeguard institutions, and promote resilience against emerging threats to democratic processes.
The announcement is accompanied by a detailed 30-page document, but a one-page factsheet is also available for a quick overview. We’ll endeavour to keep our breakdown somewhere in the middle.
As ever, the focus of our newsletter is on the digital dimension, so we will highlight the aspects of the initiative that intersect with technology, online information, and the digital space.

The new measures that make up the Democracy Shield fall into three priority areas:
- reinforcing situational awareness and response capacity to safeguard the integrity of the information space,
- strengthening democratic institutions, free and fair elections and free and independent media,
- boosting societal resilience and citizens’ engagement.
Measures in the first priority area include:
- The Commission and the European Board for Digital Services will prepare a Digital Services Act (DSA) incidents and crisis protocol. The protocol will help authorities coordinate and respond quickly to major disruptions or information operations, including those that span multiple countries. The protocol will work alongside existing EU crisis tools, such as the Cybersecurity Blueprint, to keep actions aligned and consistent. The European Centre for Democratic Resilience and its stakeholder platform may help support cooperation between these mechanisms.
- The Commission will continue its engagement with signatories of the Code of Conduct on Disinformation to strengthen measures against manipulative techniques online. This may include working with platforms to make recommender systems more transparent, to demonetise disinformation (for example, by removing financial incentives such as advertising revenue), and to develop new indicators to track platforms’ progress. The Commission will also explore additional steps with signatories, such as improving the detection and labelling of AI-generated or manipulated content and developing voluntary user-verification tools. These efforts would complement the AI Act and other EU rules. The upcoming EU Digital Identity Wallets, due by the end of 2026, could support these measures by enabling secure online identification.
- An independent European Network of Fact-Checkers will be set up with the Commission’s support. The network will strengthen fact-checking across all EU languages and in neighbouring countries, especially during crises, and will create a repository of fact-checks from trusted organisations, making it easier for journalists, platforms, researchers, and civil society to access verified information. It will also support cross-border cooperation and offer legal and psychological protection for fact-checkers, helping boost Europe’s overall resilience to disinformation.
- The mandate of the European Digital Media Observatory will be expanded. It will develop stronger monitoring tools, especially for elections and crises, covering all EU member states and candidate countries.
- The Commission will support setting up a common research support framework to strengthen monitoring of information manipulation and disinformation campaigns. This will include giving academics and researchers better access to data and advanced technology, including data available through the DSA and the Political Advertising Regulation. This work will also help develop tools to detect AI-generated or manipulated content—like deepfake audio, images, and videos—and track new forms of coordinated online manipulation, including bots, cross-platform tactics, and algorithm-driven amplification.
Measures in the second priority area include:
- The Commission, supported by the European Cooperation Network on Elections (ECNE) and the European AI Office, will work with member states and stakeholders to create guidance on the responsible use of AI in elections. It will encourage political actors to adopt voluntary commitments on the responsible use of new technologies (notably AI) in political activities and to share best practices.
- The Commission will update the DSA Elections Toolkit, with support from national Digital Services Coordinators and ECNE. The aim is to incorporate lessons from recent elections and help very large online platforms and very large online search engines (VLOPs and VLOSEs) manage risks to civic discourse and elections, as the DSA obliges them to do.
- The Commission will support the setting up of a voluntary EU network of influencers to raise awareness of the EU rules relevant to political campaigning and to promote the exchange of best practices. The Commission will also encourage ethical standards and voluntary commitments, including on information integrity, and support influencers’ work to promote digital literacy.
- The EU will reinforce cooperation with international election observers to strengthen capacities to monitor disinformation on social media during election observation outside the EU.
- The Commission will review the Audiovisual Media Services Directive (AVMSD). It will assess ways to support media sustainability, including boosting the prominence of media of general interest and modernising advertising rules. The role of influencers will be considered in the AVMSD review and in the forthcoming Digital Fairness Act, to complement existing rules.
- The Commission will address challenges to the media ecosystem in the context of the review of the Directive on copyright in the Digital Single Market. The aim is to address challenges from generative AI, including online piracy and the unauthorised use of copyrighted material to train AI models, which threaten media revenues, quality, and diversity.
- The Commission will provide guidance to maintain competition and foster media plurality and diversity in the context of the revision of the Merger Guidelines. Under the Digital Markets Act, the Commission will prioritise promoting independent and diverse media, including through greater transparency in online advertising.
- The Commission will support the digital transformation of media. Efforts will focus on pan-European platforms for real-time news in multiple languages, innovative content formats, and future pathways for the EU’s tech environment, with an initial focus on the future of social networking platforms to strengthen EU digital sovereignty.
- The Commission will support quality independent media and journalism beyond the EU’s borders. It will scale up rapid-response measures with partners to provide tools that counter digital censorship, surveillance, and shutdowns, ensuring that citizens, civil society, and journalists under authoritarian regimes can access reliable information.
Measures in the third priority area include:
- The Commission will develop the 2026 Basic Skills Support Scheme for Schools, which will include citizenship and digital skills. Part of this package will be a 2030 Roadmap on the future of digital education and skills, building on the review of the Digital Education Action Plan and focusing on digital skills, AI literacy, critical thinking, and boosting democratic resilience in the digital world.
- The Commission will update the Guidelines for teachers and educators on disinformation and digital literacy, addressing issues such as generative AI, misinformation, and social media. The Commission will also develop an EU citizenship competence framework and guidelines to strengthen citizenship education, support media literacy, and inform curricula and training programmes.
- The Commission will set up a European civic tech hub to support the civic tech sector. The hub will stimulate innovation in online platforms that enable democratic participation, including through the use of AI, and will organise a civic tech hackathon to showcase innovative projects.
The bottom line. This is the EU’s most direct attempt yet to hard-code democratic values into its digital infrastructure. By targeting everything from recommender algorithms to AI deepfakes and influencer networks, the Shield aims to change the very environment where opinion is formed. The goal is clear: to ensure that in the battle for attention, integrity has a fighting chance.
IN OTHER NEWS LAST WEEK
Recalibrating the digital agenda: Highlights from the WSIS+20 Rev 1 document
A revised version of the WSIS+20 outcome document – Revision 1 – was published on 7 November by the co-facilitators of the intergovernmental process. The document will serve as the basis for continued negotiations among UN member states ahead of the high-level meeting of the General Assembly on 16–17 December 2025.
While maintaining the overall structure of the Zero Draft released in August, Revision 1 introduces several changes and new elements.
The new text includes revised – and in several places stronger – language emphasising the need to close rather than bridge digital divides, presented as multidimensional challenges that must be addressed to achieve the WSIS vision. At the same time, some issues were deprioritised: for instance, references to e-waste and the call for global reporting standards on environmental impacts were removed from the environment section.
Several new elements also appear. In the enabling environments section, states are urged to take steps towards avoiding or refraining from unilateral measures inconsistent with international law. There is also a new recognition of the importance of inclusive participation in standard-setting. The financial mechanisms section introduces an invitation for the Secretary-General to consider establishing a task force on future financial mechanisms for digital development, with outcomes to be reported to the UNGA at its 81st session. The internet governance section now includes a reference to the NetMundial+10 Guidelines.
Language on the Internet Governance Forum (IGF) remains largely consistent with the Zero Draft, including with regard to making the forum a permanent one and requesting the Secretary-General to make proposals concerning the IGF’s future funding. New text invites the IGF to further strengthen the engagement of governments and other stakeholders from developing countries in discussions on internet governance and emerging technologies.
Several areas saw shifts in tone. Language in the human rights section has been softened in parts (for instance, references to surveillance safeguards and threats to journalists have been removed). There is also a change in how the interplay between the WSIS and the GDC is framed – the emphasis is now on alignment rather than integration – a shift reflected in the revised language on the joint WSIS-GDC implementation roadmap and in updates to the described roles of ECOSOC and the CSTD.
EU’s revised Chat Control: A step back from mass scanning, but privacy worries persist
After more than three years of difficult negotiations, the EU’s Regulation to Prevent and Combat Child Sexual Abuse (CSAR) — also known as the ‘Chat Control’ proposal — has returned in a significantly revised form. In a dramatic shift, the latest version proposed by Denmark abandons mandatory scanning obligations, instead making CSAM detection voluntary for messaging providers.
Under the new text, platforms are no longer legally required to scan every link, image, or video shared by users; instead, they may opt into CSAM scanning. This change is being framed as a compromise: a way to preserve encryption and user privacy while still leaving space for abuse-detection tools.
But the reform comes with caveats. Article 4 introduces risk mitigation obligations targeted at services deemed high risk: such platforms could be required to take ‘all appropriate risk mitigation measures’. The language is vague, and critics argue it is a back door to re-imposing scanning obligations down the line. Indeed, the Danish Presidency’s proposal includes a review clause: the European Commission could, in the future, reassess whether detection duties should be mandatory, potentially proposing new legislation to reintroduce them.
On the targeting front, the European Parliament continues to push for court-ordered scanning. Its position would restrict detection orders to specific individuals or groups suspected of involvement in child sexual abuse – a more targeted and rights-sensitive approach.
The Danish compromise, by contrast, lacks this kind of suspect-based targeting. According to the Council’s draft, detection orders may be issued only for ‘identifiable parts or components’ of a service, such as certain channels or user groups, but the text does not explicitly limit orders to those under suspicion.
A particularly controversial provision strikes at anonymity. Under the new draft, users may no longer be able to create fully anonymous chat or email accounts: to register, they might have to provide identification or even a facial scan, making them more identifiable. Critics warn that this could seriously chill sensitive use cases, such as LGBTQ+ conversations, whistle-blowers talking to journalists, or political activists operating under pseudonyms.
There are also strong obligations on service providers to build age-verification systems to protect minors. But how exactly these systems would work without violating privacy, creating data risks, or excluding users remains unclear. Several digital rights groups have warned that mandatory age checks could undermine anonymity or force providers to collect more data than necessary.
The crux of the matter. While the new proposal marks a step back from mandatory mass scanning, the voluntary framework could evolve over time into something more coercive.
What’s next? Tough negotiations in a short period of time – Denmark will likely be keen to finalise the file before the Cypriot presidency begins in January.
New trilateral alliance to protect kids online
A new cooperation is underway between three major regulators – Australia’s eSafety Commissioner, the UK’s Ofcom, and the European Commission’s DG CNECT – aimed at protecting children’s rights, safety, and privacy online.
The overall goal is to support children and families in using the internet more safely and confidently – by fostering digital literacy and critical thinking, and by making online platforms more accountable.
The bottom line. While growing alignment on child online safety is welcome, and regulatory enforcement broadly supported, the push for age verification remains contested. All eyes will be on the output of the trilateral technical group.
OpenAI pushes back as NYT seeks millions of ChatGPT conversations
OpenAI is pushing back against a major legal request from The New York Times for access to millions of ChatGPT conversation records, citing privacy concerns and the potential erosion of user trust.
What began as a request for 1.4 billion chats has now been narrowed to around 20 million conversations spanning December 2022 through November 2024, but OpenAI maintains that even sharing this reduced set could expose sensitive user information.
While the company previously allowed the automatic deletion of user chats within 30 days, a court preservation order from May 2025 now requires that deleted conversations be retained indefinitely in many cases. OpenAI has said it will attempt to “de-identify” the data before sharing, but privacy experts caution that anonymised chats can often be re-identified, leaving users exposed.
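To illustrate why ‘de-identified’ does not necessarily mean anonymous, below is a minimal, purely hypothetical sketch of the kind of linkage attack privacy experts have in mind: even with usernames stripped, distinctive phrases in a chat can be matched against an auxiliary dataset (such as public posts) to recover an identity. All names, records, and thresholds are invented for illustration and do not reflect the actual data at issue in the case.

```python
# Hypothetical sketch of a linkage (re-identification) attack on
# "de-identified" chat records. All data here is invented.
import re
from collections import Counter

# Chat excerpts with user identifiers removed but text retained.
deidentified_chats = [
    {"chat_id": "c1", "text": "My startup QuietOwl Analytics is pivoting to llama farming."},
    {"chat_id": "c2", "text": "What is the weather like in Paris today?"},
]

# Auxiliary public data an adversary might already hold (e.g. forum posts).
public_posts = [
    {"author": "jane_doe", "text": "Excited to announce QuietOwl Analytics, my new startup!"},
    {"author": "random_user", "text": "Nice weather in Paris today."},
]

COMMON = {"my", "is", "to", "the", "in", "what", "like", "today", "a", "new"}

def rare_tokens(text):
    """Lower-cased word tokens, minus very common words."""
    return {t for t in re.findall(r"[a-z0-9]+", text.lower()) if t not in COMMON}

def link(chats, posts, threshold=3):
    """Match each chat to the public author sharing the most distinctive tokens."""
    matches = {}
    for chat in chats:
        chat_tokens = rare_tokens(chat["text"])
        scores = Counter()
        for post in posts:
            scores[post["author"]] += len(chat_tokens & rare_tokens(post["text"]))
        author, score = scores.most_common(1)[0]
        if score >= threshold:  # require several distinctive tokens in common
            matches[chat["chat_id"]] = author
    return matches

print(link(deidentified_chats, public_posts))
# {'c1': 'jane_doe'} -- the chat containing distinctive details links back to a named person
```

The point of the sketch is that anonymisation which only strips explicit identifiers leaves the content itself as a quasi-identifier; real-world re-identification research applies far more sophisticated versions of this matching.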
The case highlights a growing tension between content owners seeking enforcement or accountability and AI platform providers, which must balance user privacy against legal obligations.
In perspective. This dispute may set precedents for how large-scale AI conversation data is governed, influencing future policies on data retention, deletion, and user privacy. This shifts the paradigm of digital permanence; it’s no longer just about data being stored forever, but about it being used to build the next generation of technology.
LAST WEEK IN GENEVA

CERN unveils AI strategy to advance research and operations
CERN has approved a comprehensive AI strategy to guide its use across research, operations, and administration. The strategy unites initiatives under a coherent framework to promote responsible and impactful AI for science and operational excellence.
It focuses on four main goals: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling AI at scale through strategic partnerships with industry and member states.
Common tools and shared experiences across sectors will strengthen CERN’s community and ensure effective deployment.
Implementation will involve prioritised plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and expand infrastructure. Applications of AI will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.
LOOKING AHEAD

The UN Commission on Science and Technology for Development (CSTD) will hold its 2025–2026 inter-sessional panel on 17 November at the Palais des Nations in Geneva. The agenda focuses on science, technology and innovation in the age of AI, with expert contributions from academia, international organisations, and the private sector. Delegations will also review progress on WSIS implementation ahead of the WSIS+20 process, and receive updates on the Global Digital Compact (GDC) and data governance work within the CSTD.
A day later, the CSTD’s multi-stakeholder working group on data governance at all levels will meet for the fourth time. Over two days (18 and 19 November), delegates will review recent inputs, discuss principles, interoperability, benefit-sharing, and secure data flows, and plan next steps for the working group’s progress report to the UN General Assembly. The meeting will also address leadership and scheduling of future sessions, aiming to strengthen international cooperation on data governance for sustainable development.
The International Telecommunication Union (ITU) will hold the World Telecommunication Development Conference 2025 (WTDC‑25) in Baku, Azerbaijan, from 17–28 November 2025. The conference will bring together governments, regulators, industry, civil society, and other stakeholders to discuss strategies for universal, meaningful, and affordable connectivity. Participants will review policy frameworks, investment priorities, and digital‑development initiatives, and adopt a Declaration, Strategic Plan, and Action Plan to guide global ICT development through 2029.
READING CORNER
What can fiction teach us about diplomacy? Explore how 2024-2025 bestsellers reveal real-world hard power, soft power, and economic diplomacy.