How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026

20 Feb 2026 14:00h - 15:00h


Session at a glance: Summary, key points, and speakers overview

Summary

The panel discussed how to govern artificial intelligence through a Europe-wide code of practice that aims to balance innovation with democratic safeguards [1-2].


Benifei explained that the AI Act will not enumerate every technical mitigation, but will rely on a co-legislative code of practice developed together with civil society, developers, enterprises and academia to reflect the evolving AI landscape [2]. The code is intended to create a culture of restraint that limits misinformation, cyber-bullying and criminal misuse of AI [3]. He stressed that the framework must be clear yet flexible, avoiding vague language while specifying the objectives [4]. Implementation will be entrusted to the European AI Office, which must have sufficient powers to enforce the rules and ensure private actors comply, thereby building public trust [5].


Sean argued that leaders must first create the conditions that allow safe and beneficial AI development, noting that such conditions are currently lacking [12-13]. He warned that CEOs feel constrained by competitive geopolitical pressure and that this should be seen as a red alarm bell [14-16]. His recommendation was to enable global coordination, including Europe, the United States and China, so that companies can adopt additional safety steps, share expertise, and even pause development at critical points [17-19].


Paola highlighted the importance of focusing on specific use cases, arguing that the gap between AI’s perceived power and its trustworthy deployment lies in context-dependent validation [21-23]. She suggested that defining trust controls for domains such as medicine versus customer service will unlock both productivity and confidence in the technology [24].


Benifei reiterated that safety and diffusion must proceed in parallel, because certain AI deployments pose huge cross-border risks without international cooperation [26-28]. He pointed to military applications and loss-of-control scenarios as areas that require public-institutional leadership rather than relying on industry self-regulation [30-34]. He concluded with an urgent call for leaders to stop delaying and to use summit opportunities to advance concrete progress [35-37].


The moderator summed up that innovation and trust can coexist, urging participants to consult the code’s safety chapters as potential standards for other regions and to continue collaborative work [38-44].


Key points

Major discussion points


A co-legislative Code of Practice is essential for AI risk mitigation.


Brando explains that the AI Act will rely on a code of practice developed together with civil society, developers, enterprises and academia to create “a culture of restraint” and prevent democratic and existential risks while remaining flexible enough to adapt to the evolving AI landscape [2-4][7].


Effective enforcement requires strong public-sector capacity.


He stresses that the European AI Office must be equipped to implement the code, ensuring that private actors comply and that the rules become “applicable, effective, and… build trust” [5].


International cooperation is needed to create safe development conditions.


Sean calls for a global effort, spanning Europe, the United States, China and beyond, to give companies the “conditions… to take additional steps, to put additional focus on safety, to share expertise… and potentially even to slow down before critical points” [12-19].


Trust must be anchored in context-specific use-case governance.


Paola highlights the gap between the perceived power of AI and its trustworthy deployment, arguing that “when we start to focus on context, the right use cases… we unlock not only productivity, but trust in the same breath” [21-24].


High-risk domains such as military AI and loss-of-control scenarios demand urgent public-institution action.


Brando warns that without international cooperation “we are facing huge risks” in areas like military use and loss-of-control, and that “it must come from the public institutions… Don’t lose any more time” [28-37].


Overall purpose / goal of the discussion


The panel aimed to shape a pragmatic governance framework for artificial intelligence that balances rapid innovation with the protection of democratic values, human rights, and safety. Participants presented the EU’s forthcoming Code of Practice, highlighted the need for enforceable mechanisms, and offered concrete recommendations for leaders to foster trustworthy, globally coordinated AI development.


Overall tone and its evolution


– The conversation begins with a constructive and collaborative tone, emphasizing the creation of inclusive, flexible rules and the building of trust [2-4].


– It shifts to a cautiously urgent tone as speakers stress the current lack of safe conditions, competitive pressures on CEOs, and the necessity of immediate, coordinated action [12-19][28-37].


– The closing remarks return to a forward-looking, hopeful tone, reaffirming that innovation and trust can coexist and encouraging continued collaboration [38-43].


Thus, the discussion moves from collaborative framing, through urgent calls for action, to a reaffirmation of optimism about achieving trustworthy AI governance.


Speakers

Paola – Secretary General, European Digital Media Observatory; gender advocate. Expertise: media disinformation, trust in AI use-case contexts. [S1][S3]


Speaker 2 – Moderator/Chair of the panel at the AI Impact Summit; responsible for guiding the discussion and posing questions to panelists. [S4][S5][S6]


Brando Benifei – Member of the European Parliament; focuses on AI legislation, the European AI Office, code of practice, and building trust while safeguarding human rights. [S7][S8]


Sean – Director, Future Conflict & Cyber Security, International Institute for Strategic Studies; expertise in AI governance, cybersecurity policy, and safe AI development. [S11]


Additional speakers:


Professor Bengio – Prominent AI researcher (Yoshua Bengio) referenced for explaining the co-legislative process behind the code of practice. (no external source provided)


Professor Banjo – Researcher mentioned regarding systemic risks, loss-of-control AI, and related safety research; likely a transcription variant of Professor Bengio (Yoshua Bengio), referenced above. (no external source provided)


Professor Eiger – Academic referenced (pronunciation difficulty noted) in the context of AI safety and diffusion discussions; possibly a transcription of Seán Ó hÉigeartaigh. (no external source provided)


Full session report: Comprehensive analysis and detailed insights

Opening & Code of Practice – Brando Benifei opened the panel by explaining that the EU’s forthcoming AI Act will be complemented by a Code of Practice, developed through a co-legislative process involving civil society, developers, enterprises of all sizes and academia [2]. The Code is intended to create a “culture of restraint” that can address both existential and systemic risks to democracy and citizens’ freedoms [2-4]. Benifei stressed that the Code must be clear yet flexible, avoiding vague language while articulating concrete objectives [4]. Its purpose is to foster public confidence that innovation can proceed without compromising human rights and fundamental values [7]. To make the Code effective, he called for the European AI Office to be equipped with sufficient powers and resources so that it can enforce the rules, verify compliance of private actors and thereby build trust [5-6].


Moderator’s Prompt – After Benifei’s remarks, the moderator asked Sean to formulate a one-minute recommendation for summit leaders [13].


Sean’s Recommendation – Sean argued that current conditions are inadequate for safe AI development. He noted that CEOs of leading AI firms feel constrained by intense geopolitical competition and therefore cannot take the extra safety steps they would like [13-14]. Describing this as a “red alarm bell,” he urged leaders to create enabling conditions that allow companies to prioritise safety, share expertise and, when necessary, pause development at critical junctures [15-18]. Crucially, Sean called for global coordination that brings together Europe, the United States, China and other regions on an equal footing [18-19].


Paola’s Focus – Paola shifted the discussion to domain-specific trust mechanisms. She highlighted a “gigantic gap” between the public’s perception of AI’s power and organisations’ ability to deploy it responsibly [22-23]. By concentrating on appropriate use cases, recognising that trust controls differ markedly between sectors such as medicine and customer service, she argued that productivity and confidence can be unlocked simultaneously [24].


Benifei’s Follow-up – Benifei reiterated that safety and diffusion must proceed in parallel; they should not be presented as opposing goals [26-28]. He warned that without international cooperation, AI deployments in high-risk areas, particularly military applications and loss-of-control scenarios, pose “huge risks” [30-31]. He asserted that responsibility for addressing these systemic threats lies with public institutions, not with businesses, and issued an urgent plea for political leaders to act without further delay [32-37].


Moderator’s Closing – The moderator concluded by affirming that innovation and trust can coexist and that the Code’s safety chapters could serve as reference standards for other jurisdictions [38-41]. Participants were invited to review the Code, continue collaborative work, and maintain momentum in building trustworthy AI [42-44].


Points of Consensus

* Inclusive co-legislative Code of Practice – Benifei and the moderator agreed that the Code should be drafted with input from civil society, industry and academia, be clear yet adaptable, and act as an international benchmark [2-4][7][40].


* Public-sector leadership on high-risk AI – Benifei emphasised that governments must create the conditions for safe development and lead coordination on high-risk domains such as military AI and loss-of-control scenarios [30-34].


* Parallel pursuit of safety and diffusion – Both Benifei and the moderator stressed that safety measures need not hinder AI diffusion; both can advance together [26-28][38-42].


* Global coordination – Sean and Benifei called for bringing major AI actors from the EU, the US and China to a common platform to enable coordinated safety actions [18-19][28-29].


Points of Nuance

* Governance of high-risk AI – Benifei argued that public institutions, not businesses, must drive cooperation on military and loss-of-control AI [30-34], whereas Sean emphasised that leaders must first create conditions that enable companies to adopt additional safety steps [12-19].


* Scope of the trust framework – Benifei advocated a single, flexible Code of Practice for the whole AI ecosystem [2-4], while Paola insisted that trust must be built through sector-specific use-case controls tailored to domains such as healthcare or customer service [21-24].


* Urgency versus condition-building – Benifei’s call to “not lose any more time” urged immediate political action [35-37]; Sean, by contrast, warned that without first establishing enabling conditions, rapid action may be ineffective [13-19].


Key Takeaways

* The Code of Practice is designed to mitigate systemic and existential AI risks while remaining adaptable to technological change.


* Its inclusive co-legislative development aims to make it a reference model for other countries, with the safety chapters offering potential international standards.


* International cooperation is essential; leaders must devise mechanisms that allow firms worldwide to prioritise safety despite competitive pressures.


* Public institutions, rather than private firms alone, should steer coordination on high-risk domains such as military AI and loss-of-control scenarios.


* Trust is best achieved when context-specific use-case controls are defined, recognising that risk profiles differ across sectors.


* Innovation and trust are not mutually exclusive; they can be pursued together when safety and diffusion are pursued in parallel.


Resolutions and Action Items

1. Empower the European AI Office with the necessary authority, resources and enforcement tools to implement the Code [5-6].


2. Encourage political leaders to convene forums that create conditions for companies to adopt extra safety measures, even under geopolitical competition [13-18].


3. Promote ongoing international dialogue on AI safety, especially concerning military use and loss-of-control risks [30-34].


4. Invite stakeholders to review the Code’s safety chapters and consider adopting them as standards in their own jurisdictions [40-41].


5. Continue multistakeholder collaboration to align innovation with trust-building measures across domains [38-44].


Unresolved Issues

* Concrete mechanisms for achieving effective global coordination and enforcement of the Code across regions.


* Specific policies to alleviate the geopolitical competition that hinders firms from implementing safety steps.


* Detailed governance frameworks for military AI and autonomous-system loss-of-control risks.


* Methods to ensure consistent compliance verification among small, medium and large AI developers.


* Operational guidelines for domain-specific trust controls and their integration into existing workflows.


Suggested Compromises

* Pursue safety and diffusion in parallel, allowing rapid deployment while maintaining robust risk mitigation.


* Adopt flexible, co-legislative rules that are clear in intent yet adaptable to diverse contexts and technological evolutions.


* Combine public-sector leadership with industry participation, ensuring businesses contribute expertise without bearing sole responsibility for high-risk governance.


Thought-Provoking Comments and Their Impact

* Benifei’s introduction of a co-legislative Code of Practice highlighted a novel shift from rigid regulation to a flexible, multi-stakeholder framework aimed at building trust while protecting rights [2]. This set the foundation for the entire discussion, prompting subsequent speakers to consider operationalisation and enforcement.


* Sean’s “red alarm bell” about CEOs’ inability to take extra safety steps under geopolitical pressure expanded the debate from European policy design to the practical constraints faced by industry, underscoring the need for global, equitable coordination [13-18].


* Paola’s emphasis on use-case specificity redirected attention to concrete deployment challenges, arguing that trust must be anchored in sector-specific controls, thereby linking high-level policy to on-the-ground practice [21-24].


* Benifei’s later warning not to contrast safety with diffusion and his call for public-institutional leadership on military AI elevated the stakes, reinforcing the urgency of international cooperation and the moral imperative for swift political action [26-37].


Overall, the discussion progressed from establishing a collaborative, flexible regulatory foundation, through recognising real-world industry constraints and the necessity of global coordination, to affirming that innovation and trust can be jointly pursued when concrete, context-aware safeguards are put in place. The panel’s consensus on inclusive governance, public-sector leadership and parallel safety-diffusion provides a solid basis for future AI policy development, while the identified nuances point to areas where further negotiation and research are required.


Session transcript: Complete transcript of the session
Brando Benifei

So from different places in the world as a possible way of how to deal with the frontier aspects of AI development. Because instead of detailing in the legislative act every aspect of the risk mitigation that we ask now to the big developers, we decided to put in the AI act the provision of having this code of practice that would come, as Professor Bengio explained very clearly, from a co-legislative process involving civil society, developers, small, medium, big enterprises and academia in an exercise that would allow to build a more adherent to the… present situation and evolution of the AI landscape of a set of rules to actually prevent existential risks, but also we call them systemic risks that deal with our democracies, with our freedom as citizens.

I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent risks of damaging our democratic processes by spreading misinformation or contrasting the cyberbullying or the criminal actions through the use of AI. And we… I think we built a very clear framework, because I think it’s very important to be clear, not to have vague proposals that are very loosely interpretable, but maintaining a certain degree of flexibility, we are clear on what we want to pursue. However, I think it will be very important, and so I need to subscribe to what Professor Banjo said at the end of his speech, that this is our effort from the Parliament side that we provide the European AI Office all the means to actually implement this code of practice, because it’s true that, as it was said, many companies are already complying with many of the risk mitigation aspects that are in the code of practice, but we need to be sure that we can, again, be at the same level of these very powerful private actors, to do our part in making the rules that we decided applicable, effective, and so build trust. In the end, to conclude, this is our objective. We want the code of practice to contribute in building trust among our citizens on the fact that we can innovate without sacrificing human rights and protection of our fundamental values. Thank you.

Speaker 2

Thank you very much, Brando. Now, we still have very few minutes left, so I would like to exploit the opportunity of your presence to ask you, maybe if you can say in one minute, Sean, you have already said this, but maybe you can reformulate or come up with one recommendation for the leaders at this summit on the way that we can govern AI in the future? What would you say to them?

Sean

In one minute, I would say the role of our leaders, the role of us as scholars, the role of us as governance experts is to create the conditions for the safe and beneficial development of AI. Right now, I do not believe those conditions entirely exist because exactly of the things that the CEOs of the leading companies say. They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to. We should be hearing that. That should be a red alarm bell for us. And so what we need to do is figure out how do we create the conditions where it is possible for them to take these additional steps, to put additional focus on safety, to share expertise if needed, to coordinate and potentially even to slow down before critical points. And that doesn’t just mean European companies, it doesn’t just mean US companies, it also means our colleagues in China who are making such impressive progress. We need to figure out what is a way in which we can bring everyone to the table as equals and figure out how to cooperate on this challenge of our time.

Speaker 2

Paola?

Paola

I would say focus on the use cases. So right now there’s a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it. And the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service. And so I think when we start to focus on context, the right use cases, what trust controls look like in those domains, in local context, that’s where we unlock not only productivity, but trust in the same breath.

Brando Benifei

Well, in my opinion, we need to, again, as it was said earlier by Professor Eiger, it’s very difficult for me to figure the pronunciation. But anyway, we need to not contrast, not put in contrast safety at the highest terms and the focus on diffusion, on action. On impact, the title of this summit. I think this can go in parallel and it must go in parallel because there are areas of deployment of AI where without international cooperation we are facing huge risks. We hope that the code of practice will be a way to enlarge this discussion and build a reference point as I said, but we need to go even further. We have issues regarding military use of AI.

We have issues regarding the loss of control risks that also Professor Banjo has been looking a lot at with his research that are in need of further cooperation. I don’t think this will come from the business but not because they are bad. It’s not their role. It must come from the public institutions and so we need to send this message to our leaders. Don’t lose any more time. You need to sit down and use these occasions to do progress. We need that, and we do not need to lose any more time on this.

Speaker 2

Thank you very much. So I would like to close this very interesting panel simply to say that what we have tried to discuss and conclude in this session is the fact that innovation and trust can go together and we can find different ways to make sure that trust is ensured or enabled in a particular country and in a particular continent, but we will need to continue working together and we are also happy to have presented to you some elements of the Code of Practice. Please take a look at that Code of Practice, in particular look at the safety chapters, and you will see that these are probably standards to which other countries can sign up to.

And thanks a lot for your participation. We look forward to continuing this discussion with you and with all the colleagues in this summit. Thank you very much, and thanks to our panelists.

Related resources: Knowledge base sources related to the discussion topics (35)
Factual notes: Claims verified against the Diplo knowledge base (11)
Confirmed (high confidence)

“The EU’s forthcoming AI Act will be complemented by a Code of Practice, developed through a co‑legislative process involving civil society, developers, enterprises of all sizes and academia.”

The knowledge base states that the Code of practice should emerge from a co-legislative process involving all stakeholders to address systemic and existential risks [S13].

Confirmed (high confidence)

“The Code is intended to create a ‘culture of restraint’ that can address both existential and systemic risks to democracy and citizens’ freedoms.”

S13 confirms that the Code aims to address systemic and existential risks through a multi‑stakeholder approach.

Confirmed (medium confidence)

“The Code must be clear yet flexible, avoiding vague language while articulating concrete objectives.”

S14 highlights the importance of concentrating on concrete applications and clear definitions, supporting the claim of clarity and flexibility.

Confirmed (high confidence)

“Its purpose is to foster public confidence that innovation can proceed without compromising human rights and fundamental values.”

S105 describes a human‑rights‑based approach that protects fundamental rights while promoting innovation, aligning with the stated purpose.

Confirmed (high confidence)

“The European AI Office should be equipped with sufficient powers and resources so that it can enforce the rules, verify compliance of private actors and thereby build trust.”

S107 reports that the European Commission is establishing a European AI Office with enforcement powers and a separate budget line for the AI Act.

Confirmed (high confidence)

“CEOs of leading AI firms feel constrained by intense geopolitical competition and therefore cannot take the extra safety steps they would like.”

S117 references Dario Amodei’s observation that companies are competing intensely with China, limiting their ability to devote resources to safety.

Additional context (medium confidence)

“Leaders should create enabling conditions that allow companies to prioritise safety, share expertise and, when necessary, pause development at critical junctures.”

S116 notes that international cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficult, underscoring the need for enabling conditions.

Additional context (medium confidence)

“Global coordination that brings together Europe, the United States, China and other regions on an equal footing is essential.”

S116 calls for international cooperation on AI safety standards, highlighting the challenge of achieving equal footing among major regions.

Confirmed (high confidence)

“Safety and diffusion must proceed in parallel; they should not be presented as opposing goals.”

S105 emphasizes that protecting fundamental rights and promoting innovation are complementary objectives, confirming the parallel‑track view.

Additional context (medium confidence)

“Without international cooperation, AI deployments in high‑risk areas—particularly military applications and loss‑of‑control scenarios—pose huge risks.”

S116 mentions the need for cooperation on high‑risk AI applications and the dangers of loss‑of‑control, providing additional nuance to the risk claim.

Additional context (low confidence)

“The Code’s safety chapters could serve as reference standards for other jurisdictions.”

S104 discusses frameworks that foster innovation while protecting rights and can act as reference models for other jurisdictions, supporting this assertion.

External Sources (117)
S1
How prevent external interferences to EU Election 2024 – v.2 | IGF 2023 Town Hall #162 — Paula Gori:Thank you very much. Spoiler, I’m not the Minister of Truth, and I’ll tell you why. Hello, everybody. I’m Pao…
S2
Day 0 Event #236 EU Rules on Disinformation Who Are Friends or Foes — – **Thora** – PhD researcher from Iceland examining how large platforms and search engines undermine democracy; research…
S4
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S5
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S6
S8
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Brado Benefai- (Appears to be the same person as Brando Benifei, mentioned in introduction) -Brando Benifei- Member of…
S9
Open Forum #72 European Parliament Delegation to the IGF & the Youth IGF — – Brando Benifei: Member of European Parliament (mentioned but not in speakers list)
S10
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Moderator – Massimo Marioni:AI’s role in securing the future. Dr. Helmut Reisinger, Chief Executive Officer, EMEA and LA…
S11
Governing the Future of the Internet — Sean KanuckDirector, Future Conflict & Cyber Security, International Institute for Strategic Studies
S12
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S13
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — – Brando Benifei- Paula Goldman – Brando Benifei- Sean O’Heigeartaigh Benifei advocates for a comprehensive legislativ…
S14
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Yes, well, a lot of different things have been asked, I tried to answer a few. In fact, on the issue o…
S15
Internet Governance Forum 2024 — During theMain Session on Policy Network on Artificial Intelligence, Brando Benifei and others acknowledged “the challen…
S16
Emerging Markets: Resilience, Innovation, and the Future of Global Development — My response to this one, governments and sovereign nations have their own self-interests. I think in this day and age, p…
S17
Opening remarks — Ruminating upon the genesis of governance principles since 2009, the speaker singles out the set of guidelines, or decal…
S18
Lightning Talk #29 Multistakeholder Engagement in Africa’s WSIS+20 Review — – **Speaker 2** – Honorable Adjara from Benin, government official involved in the Cotonou Declaration Speaker 2: Okay….
S19
IGF Parliamentary track – Session 2 — Audience: My name is Catherine Mumma. I’m a senator from Kenya. I’m just wondering, the issue of legislation is varied w…
S20
WS #155 Digital Leap- Enhancing Connectivity in the Offline World — Omar Ansari: Thank you very much, Mahesh. Just to quickly answer your question, the colleague from Vietnam. I am fro…
S21
Finnovation — In conclusion, the analysis revealed various insights and perspectives on the development of financial sector tools, the…
S22
Conversational AI in low income & resource settings | IGF 2023 — Digital patient engagement is crucial for maintaining relationships with patients even after they leave the hospital. Pl…
S23
Opening of the session — Greater international cooperation is necessary in the context of threats. In summary, the analysis distils into a narra…
S24
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — Economic | Legal and regulatory Hidary argues that to build a truly global company, businesses must establish partnersh…
S25
Practical Toolkits for AI Risk Mitigation for Businesses — In healthcare, risks involve threats to life, privacy, equality, and individual autonomy. Similarly, the retail sector a…
S26
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mit…
S27
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S28
WS #64 Designing Digital Future for Cyber Peace & Global Prosperity — Audience: Thank you, Dr. Subi, for the great question. And it’s a difficult one and one that I’ve been kind of graspin…
S29
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — However, concerns are raised about weak enforcement cultures in developing countries if they were to adopt ex-ante regul…
S30
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Another significant aspect highlighted is the role of multi-stakeholder engagement in the Internet Governance Forum (IGF…
S31
C O N T E N T S — The successful implementation of the Policy requires robust collaboration between the private and public sectors. …
S32
Regional cooperation for safer online consumer markets (UNCTAD) — In conclusion, the rise of online shopping brings concerns about the safety of products and the lack of information avai…
S33
Launch of the Joint Report “Digital Trade for Development” — Embracing a holistic approach is deemed essential for the advancement of digital trade. Policymaking must be comprehensi…
S34
World Economic Forum 2025 Annual Meeting Opening Ceremony: Summary — Development | Economic Hoffmann emphasizes that the World Economic Forum serves as a platform for public-private cooper…
S35
World Economic Forum Town Hall on AI Ethics and Trust — Trust requires context and cannot be evaluated without specific use cases. Botsman argues that asking whether people tru…
S36
AI Governance Dialogue: Steering the future of AI — This comment addresses a fundamental flaw in top-down governance approaches, highlighting that trust cannot be imposed e…
S37
From principles to practice: Governing advanced AI in action — **Systemic Societal Risks**: Broader societal impacts, particularly profound labor market disruption that could create s…
S38
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S39
Military AI: Operational dangers and the regulatory void — Meanwhile, less technologically advanced states fear the use of military AI against them when they cannot develop this t…
S40
AI governance needs urgent international coordination — AGIS Reports analysisemphasises that as AI systems become pervasive, they create significant global challenges, includin…
S41
HIGH LEVEL LEADERS SESSION IV — This indicates the recognition that companies have a role to play in shaping policies and providing examples of good pra…
S42
Dynamic Coalition Collaborative Session — This IGF session, moderated by Wout de Natris van der Borght and organized by three Dynamic Coalitions (CRIOT, IoT, and …
S43
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Both speakers, despite their different professional backgrounds (historian/philosopher vs. neuroscientist/educator), une…
S44
How Trust and Safety Drive Innovation and Sustainable Growth — “So on issues that are very clear, where there are clear harms, we have stepped in to regulate.”[53]”For the rest of it,…
S45
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Focusing on context‑specific use cases to foster trust Goldman argues that trust controls must be tailored to the speci…
S46
I hereby declare that this dissertation is my own original work. — With such a premium placed on trustworthiness, how do successful information sharing mechanisms build trust among member…
S47
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S48
WS #187 Bridging Internet AI Governance From Theory to Practice — – **Risk-based approaches**: Multiple speakers supported prioritizing governance based on risk levels and application co…
S49
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S50
AI adoption vs governance: A contradiction in Australian businesses — A study conducted by Datacom and engaged 318 business decision-makers working in Australian organisationshas unveiled a …
S51
AI and international peace and security: Key issues and relevance for Geneva — Enhancing international cooperation on the responsible use of AI in the military domain is crucial for ensuring that AI …
S52
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S53
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Legal and regulatory | Economic The role of policy researchers is crucial, and encouraging private sector engagement in…
S54
Successes & challenges: cyber capacity building coordination | IGF 2023 — Claire Stoffels:Thank you, Enrico. Hello, everyone. My name is Claire Stoffels. I’m the Digital for Development focal po…
S55
Open Forum #26 High-level review of AI governance from Inter-governmental P — 1. Balancing Innovation and Security: Governments face the task of fostering innovation while addressing potential risks…
S56
AI for Social Empowerment: Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S57
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: And guys, I know this seems daunting. It is not. I promise, I did it myself last week. It’s actually k…
S58
Building Sovereign and Responsible AI Beyond Proof of Concepts — -Government Role vs. Private Sector Challenges: Discussion of the tension between waiting for government regulation/guid…
S59
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Qian Xiao:OK, well, I’m doing a lot of research on the international governance of AI. And from our perspective, we thin…
S60
Laying the foundations for AI governance — This comment introduced a different geopolitical perspective that complicated the discussion in important ways. While it…
S61
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S62
Global AI Policy Framework: International Cooperation and Historical Perspectives — So global coordination will always require an inclusive participation from all stakeholders across all regions. especial…
S63
How to make AI governance fit for purpose? — – Jennifer Bachus- Anne Bouverot- Shan Zhongde- Chuen Hong Lew Given that AI technologies are inherently global, effect…
S64
Artificial General Intelligence and the Future of Responsible Governance — So we need to be in close collaboration in order to mitigate these risks.
S65
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Benifei advocates for a comprehensive legislative framework through co-legislative processes and codes of practice, whil…
S66
WS #98 Towards a global, risk-adaptive AI governance framework — Melinda Claybaugh: I mostly echo what other people said, but just on the point about the EU AI Act, I think that it’s a…
S67
Artificial Intelligence & Emerging Tech — Umut Pajaro Velasquez:Hello everyone, well as Jennifer will say I will be presenting mainly the outputs from the youth l…
S68
WS #64 Designing Digital Future for Cyber Peace & Global Prosperity — Audience: Thank you, Dr. Subi, for the great question. And it’s a difficult one and one that I’ve been kind of graspin…
S69
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — There is strong consensus that traditional approaches are inadequate and that effective cybersecurity requires collabora…
S70
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — Law enforcement agencies need significant capacity building and development to effectively address cyber threats. Collab…
S71
Media Hub — Need law enforcement, judiciary, court system, judges to understand cyber space and offenses, lawyers to be trained, pol…
S72
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — However, concerns are raised about weak enforcement cultures in developing countries if they were to adopt ex-ante regul…
S73
Regional cooperation for safer online consumer markets (UNCTAD) — In conclusion, the rise of online shopping brings concerns about the safety of products and the lack of information avai…
S74
(Day 4) General Debate – General Assembly, 79th session: afternoon session — Kamina Johnson Smith – Jamaica: Thank you, Mr. President. I extend Jamaica’s congratulations on your election to the l…
S75
World Economic Forum 2025 Annual Meeting Opening Ceremony: Summary — Development | Economic Hoffmann emphasizes that the World Economic Forum serves as a platform for public-private cooper…
S76
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — Pakistan:Thank you, Chair, Distinguished Chair, Excellencies, Distinguished Delegates. At the outset, I would like to ex…
S77
World Economic Forum Town Hall on AI Ethics and Trust — Trust requires context and cannot be evaluated without specific use cases. Botsman argues that asking whether people tru…
S78
AI Governance Dialogue: Steering the future of AI — This comment addresses a fundamental flaw in top-down governance approaches, highlighting that trust cannot be imposed e…
S79
From principles to practice: Governing advanced AI in action — **Systemic Societal Risks**: Broader societal impacts, particularly profound labor market disruption that could create s…
S80
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S81
AI governance needs urgent international coordination — AGIS Reports analysisemphasises that as AI systems become pervasive, they create significant global challenges, includin…
S82
Military AI: Operational dangers and the regulatory void — Meanwhile, less technologically advanced states fear the use of military AI against them when they cannot develop this t…
S83
Main Session 2: The governance of artificial intelligence — Mashologu advocates for human-in-the-loop learning in AI system development from design through deployment, where humans…
S84
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — The tone was optimistic and forward-looking throughout the discussion. Speakers emphasized the potential for growth and …
S85
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S86
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S87
Open Forum #12 Ensuring an Inclusive and Rights-Respecting Digital Future — The tone was largely constructive and collaborative, with speakers building on each other’s points. There was a sense of…
S88
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S89
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S90
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and calls for action, with many speakers emphasizing the need for immediate reforms …
S91
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S92
Dynamic Coalition Collaborative Session — The discussion maintained a serious, urgent tone throughout, characterized by technical expertise and policy-focused ana…
S93
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S94
Any other business /Adoption of the report/ Closure of the session — In closing, the speaker reiterated steadfast support for the Chairperson, the Secretariat, and the diligent team, emphas…
S95
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S96
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S97
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S98
Building Trusted AI at Scale – Keynote Anne Bouverot — The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and appreciation, m…
S99
Global Standards for a Sustainable Digital Future — Dimitrios Kalogeropoulos: Thank you, Karen. That’s an excellent question. Some of it is immediate, some of it is longer …
S100
Human Rights Council — Foreign – Discrimination and article 20 of the International Covenant on Civil and Political Rights require S…
S101
Contents — The digital transformation of economy and society will only succeed if people are convinced that new business models and…
S102
AI Transformation in Practice: Insights from India’s Consulting Leaders — The tone was pragmatically optimistic and refreshingly candid. Both speakers were honest about challenges and uncertaint…
S103
UNSC meeting: Strengthening UN peacekeeping — 4. The speaker stressed the importance of clear, targeted, and flexible mandates for peacekeeping missions to adjust to …
S104
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — Frameworks should foster innovation while protecting rights. In summary, ISO acknowledges the critical role human right…
S105
Closing Ceremony — This argument advocates for a human rights-based approach to data governance and artificial intelligence development. It…
S106
Keynotes — Legal and regulatory | Human rights O’Flaherty calls for the EU to maintain its commitment to enforcing the Digital Ser…
S107
European Commission to establish European AI Office for EU AI Act enforcement — TheEuropean Commissionis preparing to establish the European Artificial Intelligence Office, which will be crucial in en…
S108
European Council gives final approval to EU AI Act — Today, on 21 May, the European Councilgave its final approvalto the Artificial Intelligence (AI) Act, a pioneering legis…
S109
Introduction to cyber diplomacy — The moderator, observing the steady inflow of participants, suggests a considerate delay, favouring inclusivity and ensu…
S110
ABOUT THIS PROGRAM — In Summit preparation, the respective roles of the host country and the SIRG in preparing the Summit texts require clari…
S111
Summit Opening Session — Thought provoking comments
S112
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-leap-policy-to-practice-with-aip2 — Well, I think we can learn a lot from what we are seeing here in these days, and I’m convinced that we need to be determ…
S113
AI Development Beyond Scaling: Panel Discussion Report — And there will be people who want to make them even look like us. So it’s going to be video first, eventually maybe phys…
S114
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Ramadori criticizes the current approach of trying to fix AI problems after they manifest, arguing that this patching me…
S115
Multi-stakeholder Discussion on issues about Generative AI — He said that their current hardware technology is too energy consuming and expensive. This signifies the significance o…
S116
Comprehensive Discussion Report: The Future of Artificial General Intelligence — International cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficu…
S117
Are we creating alien beings? — Legal and regulatory | Cybersecurity References Dario Amodei’s paper ‘the urgency of interoperability’ arguing companie…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
B
Brando Benifei
2 arguments · 113 words per minute · 601 words · 319 seconds
Argument 1
Inclusive co‑legislative process creates flexible yet clear rules (Brando Benifei)
EXPLANATION
Brando explains that the AI Code of Practice will be drafted through a co‑legislative process that involves civil society, developers, enterprises of all sizes and academia. This approach aims to produce rules that are both clear in their objectives and flexible enough to adapt to the rapidly evolving AI landscape.
EVIDENCE
He states that the code of practice will emerge from a co-legislative process with diverse stakeholders, allowing the creation of rules that reflect the current state of AI while remaining flexible rather than vague [2-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Benifei’s advocacy for a co-legislative drafting process involving diverse stakeholders is highlighted in the India AI Impact Summit summary, which notes his push for inclusive legislative frameworks and codes of practice [S13].
MAJOR DISCUSSION POINT
Co‑legislative drafting for clear yet adaptable AI rules
AGREED WITH
Speaker 2
DISAGREED WITH
Paola
Argument 2
Public institutions must drive cooperation on high‑risk areas like military use and loss‑of‑control, not rely on businesses (Brando Benifei)
EXPLANATION
Brando argues that issues such as military applications of AI and loss‑of‑control risks require coordination led by public institutions rather than being left to private companies. He stresses that governments need to act promptly to address these systemic risks.
EVIDENCE
He cites specific high-risk domains, namely military use of AI and loss-of-control risks, and asserts that solutions must come from public institutions, not businesses, urging leaders to act without further delay [30-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same source notes Benifei’s stance that cooperation on high-risk AI domains should be led by public institutions rather than businesses [S13].
MAJOR DISCUSSION POINT
Government‑led cooperation on high‑risk AI applications
AGREED WITH
Sean
DISAGREED WITH
Sean
S
Speaker 2
2 arguments · 75 words per minute · 239 words · 190 seconds
Argument 1
Serves as a reference for other countries; safety chapters act as standards (Speaker 2)
EXPLANATION
Speaker 2 highlights that the AI Code of Practice, especially its safety chapters, can function as a benchmark for other nations seeking comparable standards. By adopting these chapters, countries can align on common safety expectations.
EVIDENCE
In the closing remarks, Speaker 2 invites participants to review the safety chapters of the Code of Practice, noting that they are likely to become standards that other countries can sign up to [40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for minimum standards and the potential of safety chapters to become international benchmarks are discussed in the IGF parliamentary track remarks about the need for common standards [S19] and in the discussion on aligning global governance while respecting local nuances [S15].
MAJOR DISCUSSION POINT
Code of Practice as an international safety standard
AGREED WITH
Brando Benifei
Argument 2
Ongoing collaboration is essential; innovation and trust can coexist (Speaker 2)
EXPLANATION
Speaker 2 asserts that innovation does not have to conflict with trust, emphasizing the need for continuous joint efforts among stakeholders. He calls for sustained dialogue to ensure both progress and confidence in AI systems.
EVIDENCE
He summarizes the panel’s conclusion that innovation and trust can go together and stresses the necessity of continued cooperation, inviting participants to keep the discussion alive [38-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder engagement and the importance of continued joint effort are emphasized in the WSIS+20 review where Speaker 2 stresses accountability across sectors [S18], and broader calls for cooperation are noted in the IGF session on international cooperation [S23].
MAJOR DISCUSSION POINT
Continued joint effort to balance innovation and trust
AGREED WITH
Brando Benifei
S
Sean
1 argument · 173 words per minute · 210 words · 72 seconds
Argument 1
Leaders must create conditions for safe development, coordinate across regions, and mitigate geopolitical pressure (Sean)
EXPLANATION
Sean calls on political leaders, scholars and governance experts to establish an environment in which AI developers can prioritize safety without being constrained by competitive geopolitical pressures. He stresses the need for global coordination, including Europe, the United States and China, to enable shared safety measures and a possible slowdown at critical points.
EVIDENCE
He notes that current conditions are insufficient because CEOs feel unable to take extra safety steps due to geopolitical competition, and argues that leaders must create conditions for additional safety actions, sharing expertise and coordinating internationally, including with China [12-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sean O’Heigeartaigh’s focus on creating conditions for companies to adopt safety measures is recorded in the India AI Impact Summit summary [S13]; concerns about geopolitical competition affecting safety steps are echoed in analyses of geopolitical material supply chains [S24] and calls for greater international cooperation in the face of threats [S23].
MAJOR DISCUSSION POINT
Creating global conditions for safe AI development
DISAGREED WITH
Brando Benifei
P
Paola
1 argument · 136 words per minute · 103 words · 45 seconds
Argument 1
Focusing on domain‑specific use cases and appropriate trust controls unlocks productivity and confidence (Paola)
EXPLANATION
Paola emphasizes that trust in AI depends on applying the technology to the right contexts, as requirements differ between sectors such as medicine and customer service. By concentrating on specific use cases and tailoring trust controls, organisations can both boost productivity and build confidence in AI systems.
EVIDENCE
She points out the gap between the perception of AI’s power and its actual deployment, explaining that the bottleneck is understanding how to trust AI in the correct context, which varies by domain, and that addressing this unlocks productivity and trust simultaneously [21-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to define concrete applications and focus on specific use cases is mentioned in Brando’s session notes [S14] and reinforced by Goldman’s emphasis on domain-specific trust requirements in the same summit report [S13].
MAJOR DISCUSSION POINT
Context‑specific AI deployment to build trust
DISAGREED WITH
Brando Benifei
Agreements
Agreement Points
The AI Code of Practice should be created through an inclusive co‑legislative process, be clear yet flexible, and serve as an international safety benchmark.
Speakers: Brando Benifei, Speaker 2
Inclusive co‑legislative process creates flexible yet clear rules (Brando Benifei)
Serves as a reference for other countries; safety chapters act as standards (Speaker 2)
Both speakers stress that the Code of Practice will be drafted with civil society, developers, enterprises and academia to produce clear, adaptable rules and that its safety chapters can become standards for other nations [2-4][7][40].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for an inclusive, flexible framework mirrors the UNESCO AI ethics recommendation that stresses global safety standards [S61] and the EU GPAI Code’s emphasis on context-specific governance to build trust [S45]. Recent IGF discussions also highlight the need for broad stakeholder participation and adaptable policy design [S59][S62].
Public institutions and political leaders must lead cooperation on high‑risk AI domains (e.g., military use, loss‑of‑control) rather than leaving it to private companies.
Speakers: Brando Benifei, Sean
Public institutions must drive cooperation on high‑risk areas like military use and loss‑of‑control, not rely on businesses (Brando Benifei)
Leaders must create conditions for safe development, coordinate across regions and mitigate geopolitical pressure (Sean)
Both argue that governments need to take the lead in addressing systemic AI risks and to create conditions that enable companies to adopt safety measures [30-34][12-19].
POLICY CONTEXT (KNOWLEDGE BASE)
International security forums have repeatedly urged state-led coordination on military AI to ensure compliance with international law [S51] and to focus sovereign resources on critical control points rather than full replication of AI stacks [S52].
Innovation and trust can be pursued in parallel; continuous multistakeholder collaboration is essential.
Speakers: Brando Benifei, Speaker 2
Ongoing collaboration is essential; innovation and trust can coexist (Speaker 2)
Do not contrast safety with diffusion; both can go in parallel (Brando Benifei)
Both emphasize that safety measures and AI diffusion can run side-by-side and require ongoing joint effort among stakeholders [26-28][38-42].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level sessions stress that companies must cooperate with policymakers to avoid paralysis while fostering innovation [S41], and Dynamic Coalition meetings demonstrate the effectiveness of technical-policy-user collaboration for AI safety [S42]. Policy research also calls for private-sector engagement in evidence-based governance [S53] and balancing innovation with security concerns [S55].
Global coordination across regions (EU, US, China) is required to create conditions for safe AI development.
Speakers: Sean, Brando Benifei
Leaders must create conditions for safe development, coordinate across regions, and potentially slow down at critical points (Sean)
International cooperation is needed to address deployment risks (Brando Benifei)
Both highlight the necessity of bringing major AI actors to a common platform to enable coordinated safety actions [18-19][28-29].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple IGF reports underline that AI governance must be globally inclusive, accommodating differing regional approaches while maintaining common safety goals [S63][S62]. Geopolitical analyses note that misunderstandings, not fundamental conflicts, often drive coordination challenges [S60].
Similar Viewpoints
Both see the Code of Practice as a concrete, stakeholder‑driven instrument that will provide clear, adaptable rules and act as an international safety reference [2-4][7][40].
Speakers: Brando Benifei, Speaker 2
Inclusive co‑legislative process creates flexible yet clear rules (Brando Benifei)
Serves as a reference for other countries; safety chapters act as standards (Speaker 2)
Both stress that governmental leadership is essential to establish conditions that allow safe AI development and to manage systemic risks [30-34][12-19].
Speakers: Brando Benifei, Sean
Public institutions must drive cooperation on high‑risk areas like military use and loss‑of‑control, not rely on businesses (Brando Benifei)
Leaders must create conditions for safe development, coordinate across regions and mitigate geopolitical pressure (Sean)
Both call for a global, multilateral approach that brings all major AI actors to the table to ensure safety and coordination [18-19][28-29].
Speakers: Sean, Brando Benifei
Leaders must create conditions for safe development, coordinate across regions and mitigate geopolitical pressure (Sean)
International cooperation is needed to address deployment risks (Brando Benifei)
Unexpected Consensus
Practical, domain‑specific focus as a pathway to trust
Speakers: Paola, Brando Benifei
Focusing on the use cases. … trust controls in the right context unlock productivity and trust (Paola)
We need to build a culture of restraint … prevent risks … (Brando Benifei)
While Paola emphasizes use-case-level trust controls and Brando discusses broader policy instruments, both converge on the idea that concrete, context-specific safeguards are key to unlocking both productivity and public confidence, an alignment that is not obvious given their different starting points [21-24][3].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from the EU GPAI Code and sector-focused trust studies shows that tailoring controls to specific domains (e.g., medicine vs. customer service) narrows the trust gap and boosts productivity [S45]. Complementary viewpoints argue for sectoral regulation where harms are clear, leaving broader issues to existing frameworks [S44], and risk-based, application-specific governance is widely endorsed [S48].
Overall Assessment

The panel shows strong convergence on four main fronts: (1) the Code of Practice should be co‑legislatively drafted, clear yet adaptable, and serve as an international safety benchmark; (2) governments must lead on high‑risk AI areas and create conditions for safe development; (3) innovation and trust are not mutually exclusive and require ongoing multistakeholder collaboration; (4) global coordination across regions is essential. An unexpected but notable consensus links practical, use‑case‑driven trust measures with high‑level policy goals.

High – the speakers largely reinforce each other’s positions, indicating a shared understanding that a mixed approach of inclusive policy design, governmental leadership, and international cooperation is necessary to balance AI innovation with safety and public trust. This consensus strengthens the prospect of coordinated action on AI governance at both regional and global levels.

Differences
Different Viewpoints
Leadership and responsibility for high‑risk AI governance (public institutions vs industry)
Speakers: Brando Benifei, Sean
Public institutions must drive cooperation on high‑risk areas like military use and loss‑of‑control, not rely on businesses (Brando Benifei)
Leaders must create conditions for safe development, coordinate across regions, and mitigate geopolitical pressure (Sean)
Brando argues that governance of high-risk AI domains such as military use and loss-of-control must be led by public institutions and not left to private actors [30-34], while Sean stresses that political leaders need to create conditions that enable companies to take additional safety steps despite geopolitical competition [12-19].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on government versus private sector roles highlight a tension between rapid innovation and regulatory oversight, as discussed in panels on sovereign AI and private-sector challenges [S58][S47].
Approach to building trust – broad code of practice vs domain‑specific use‑case focus
Speakers: Brando Benifei, Paola
Inclusive co‑legislative process creates flexible yet clear rules (Brando Benifei)
Focusing on domain‑specific use cases and appropriate trust controls unlocks productivity and confidence (Paola)
Brando promotes a Europe-wide AI Code of Practice drafted through an inclusive co-legislative process to produce clear yet adaptable rules for the whole AI landscape [2-4], whereas Paola argues that trust is best achieved by concentrating on specific contexts and tailoring trust controls to each sector, such as medicine or customer service [21-24].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses contrast system-wide codes with sector-specific controls, noting that clear-harm areas are often regulated directly while other domains rely on existing sectoral rules [S44], and the EU GPAI initiative advocates for context-specific trust mechanisms [S45].
Urgency of action versus need for coordinated condition‑building
Speakers: Brando Benifei, Sean
Public institutions must drive cooperation on high‑risk areas like military use and loss‑of‑control, not rely on businesses (Brando Benifei)
Leaders must create conditions for safe development, coordinate across regions, and mitigate geopolitical pressure (Sean)
Brando calls for immediate political action, urging leaders not to lose any more time and to use summit occasions for progress [35-37], while Sean warns that current geopolitical pressures prevent CEOs from taking safety steps and that leaders must first create enabling conditions before rapid action can be taken [13-19].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level leader sessions stress immediate action to avoid a policy window closing, warning against analysis paralysis [S41], while comprehensive reports also flag the same urgency across disciplines [S43].
Unexpected Differences
Role of businesses in governing high‑risk AI applications
Speakers: Brando Benifei, Sean
Public institutions must drive cooperation on high‑risk areas like military use and loss‑of‑control, not rely on businesses (Brando Benifei)
Leaders must create conditions for safe development, coordinate across regions, and mitigate geopolitical pressure (Sean)
It is surprising that Brando rules out a leading role for businesses in high-risk AI governance, while Sean emphasizes that leaders must create conditions that enable companies to adopt safety measures, indicating a divergence on the private sector’s responsibility [30-34][12-19].
POLICY CONTEXT (KNOWLEDGE BASE)
Australian business surveys reveal a gap between AI adoption and governance expectations, highlighting the challenge of aligning corporate practices with regulatory demands [S50]. Similar tensions are noted in discussions of government-private dynamics for AI risk management [S58][S53].
Granularity of trust‑building measures – system‑wide code vs sector‑specific controls
Speakers: Brando Benifei, Paola
Inclusive co‑legislative process creates flexible yet clear rules (Brando Benifei)
Focusing on domain‑specific use cases and appropriate trust controls unlocks productivity and confidence (Paola)
While Brando advocates a single, flexible code of practice for the whole AI ecosystem, Paola argues that trust must be built through detailed, sector-specific use-case controls, a contrast that was not anticipated given the shared emphasis on trust [2-4][21-24].
Overall Assessment

The panel shows strong consensus on the need for trustworthy, safe AI, but diverges on who should lead high‑risk governance, whether a broad code of practice or sector‑specific controls is preferable, and how quickly action should be taken. These disagreements are moderate and revolve around implementation pathways rather than the underlying goal.

Moderate disagreement – while all speakers share the same overarching objective (trustworthy AI), they differ on leadership, methodology, and timing, which could affect the speed and coherence of future AI governance initiatives.

Partial Agreements
All participants agree that AI systems must be trustworthy and safe, and that some form of coordinated effort—whether through a code of practice, sector‑specific controls, or continued multistakeholder dialogue—is necessary to achieve that goal [7][12-17][21-24][38-42].
Speakers: Brando Benifei, Sean, Paola, Speaker 2
Inclusive co‑legislative process creates flexible yet clear rules (Brando Benifei)
Leaders must create conditions for safe development, coordinate across regions, and mitigate geopolitical pressure (Sean)
Focusing on domain‑specific use cases and appropriate trust controls unlocks productivity and confidence (Paola)
Ongoing collaboration is essential; innovation and trust can coexist (Speaker 2)
Takeaways
Key takeaways
The AI Code of Practice is intended to build trust and mitigate systemic and existential AI risks through a flexible yet clear set of rules.
The Code is developed via an inclusive co‑legislative process involving civil society, industry of all sizes, and academia, making it adaptable to the evolving AI landscape.
It is positioned as a reference model for other countries, with the safety chapters serving as potential international standards.
International cooperation is essential; leaders must create conditions that allow companies worldwide—including Europe, the US, and China—to prioritize safety over competitive pressure.
Public institutions, not private firms alone, should lead coordination on high‑risk areas such as military AI and loss‑of‑control scenarios.
Innovation and trust are not mutually exclusive; they can be pursued together when trust controls are tailored to specific domains.
Focusing on context‑specific use cases (e.g., medicine vs. customer service) is key to unlocking productivity while maintaining confidence in AI systems.
Resolutions and action items
Empower the European AI Office with the necessary resources to implement and enforce the Code of Practice.
Encourage political leaders to convene and establish conditions that enable AI developers to adopt additional safety measures, even under geopolitical pressure.
Promote ongoing international dialogue and cooperation on AI safety, especially concerning military applications and loss‑of‑control risks.
Invite stakeholders to review the Code of Practice, particularly the safety chapters, and consider adopting them as standards in their jurisdictions.
Continue collaborative work among summit participants to align innovation with trust‑building measures.
Unresolved issues
Specific mechanisms for achieving effective international coordination and enforcement of the Code across different regions.
Concrete steps to alleviate geopolitical competition that hinders companies from implementing safety measures.
Detailed governance frameworks for high‑risk AI applications such as military use and autonomous systems.
How to ensure consistent compliance and verification across small, medium, and large AI developers.
Operational guidelines for domain‑specific trust controls and their integration into existing workflows.
Suggested compromises
Pursue safety and diffusion of AI in parallel, allowing rapid deployment while maintaining robust risk mitigation.
Adopt flexible, co‑legislative rules that are clear in intent but adaptable to various contexts and technological evolutions.
Combine public‑sector leadership with industry participation, ensuring that businesses contribute expertise without bearing sole responsibility for high‑risk governance.
Thought Provoking Comments
We decided to put in the AI Act the provision of having this code of practice that would come, as Professor Bengio explained very clearly, from a co‑legislative process involving civil society, developers, small, medium, big enterprises and academia… to build a culture of restraint… and to build trust among our citizens that we can innovate without sacrificing human rights and fundamental values.
This comment introduced the novel idea of a co‑legislative, multi‑stakeholder code of practice rather than a rigid, prescriptive regulation, highlighting flexibility, inclusivity, and the goal of fostering trust while protecting rights.
It set the foundational framework for the whole discussion, prompting other speakers to address how such a code could be operationalised (e.g., Sean’s call for conditions that enable compliance) and framing the subsequent debate around trust, flexibility, and stakeholder involvement.
Speaker: Brando Benifei
We should be hearing that [CEOs feel unable to take extra safety steps] as a red alarm bell… we need to create conditions where it is possible for them to take additional steps, to focus on safety, to share expertise, to coordinate and potentially even to slow down before critical points… we need to bring everyone to the table as equals – Europe, US, China.
Sean highlighted the systemic pressure from geopolitical competition that hampers safety measures, turning the conversation from regulatory design to the practical reality of industry constraints and the necessity of global, cross‑regional cooperation.
His point shifted the tone from a European‑centric policy discussion to a broader, urgent call for international coordination, influencing Brando’s later emphasis on parallel safety and diffusion, and prompting the panel to consider global governance mechanisms.
Speaker: Sean
Focus on the use cases… the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service… when we start to focus on context, the right use cases, what trust controls look like in those domains, that’s where we unlock not only productivity, but trust.
Paola introduced a pragmatic, domain‑specific perspective, moving the debate from abstract governance to concrete implementation challenges, emphasizing that trust mechanisms must be tailored to distinct sectors.
Her comment redirected the discussion toward practical deployment considerations, prompting Brando to acknowledge the need for parallel safety and diffusion, and reinforcing the idea that a one‑size‑fits‑all code may need contextual adaptation.
Speaker: Paola
We need to not contrast safety at the highest terms and the focus on diffusion, on action, on impact… there are areas of deployment of AI where without international cooperation we are facing huge risks… we have issues regarding military use of AI and loss of control… it must come from public institutions, not from business… Don’t lose any more time.
This remark deepened the conversation by explicitly linking safety to high‑impact domains such as military AI and loss‑of‑control scenarios, and by asserting that public institutions—not private firms—must lead, adding urgency and a moral imperative.
It served as a turning point that heightened the stakes of the discussion, reinforcing Sean’s global‑cooperation call and Paola’s use‑case focus, and culminating in the moderator’s summary that emphasized the need for continued collaboration and concrete standards.
Speaker: Brando Benifei
Overall Assessment

The discussion was shaped by a progression from establishing a collaborative, flexible regulatory foundation (Brando’s opening) to confronting real‑world constraints and the need for global coordination (Sean), then to grounding trust in sector‑specific use cases (Paola), and finally to stressing urgent, high‑risk applications and the primacy of public‑sector leadership (Brando’s closing). Each of these pivotal comments introduced new dimensions—process design, geopolitical pressure, practical deployment, and security‑critical risks—that redirected the conversation, deepened analysis, and built consensus around the central theme that innovation and trust must evolve together through inclusive, context‑aware, and internationally coordinated governance.

Follow-up Questions
How can the European AI Office be equipped with sufficient authority and tools to effectively implement and enforce the AI Code of Practice, ensuring private actors comply?
Effective enforcement is crucial for the Code of Practice to translate into real‑world safety measures and to build public trust in AI innovation.
Speaker: Brando Benifei
What mechanisms can be created to allow CEOs and companies to take additional safety steps despite competitive geopolitical pressures?
Without supportive conditions, firms may prioritize market competition over safety, undermining responsible AI development.
Speaker: Sean
How can the international community bring AI leaders from the EU, the US, China and other regions to the table as equals to cooperate on AI safety and governance?
AI risks are global; coordinated, equitable cooperation is needed to prevent fragmented standards and to manage systemic threats.
Speaker: Sean
What domain‑specific trust controls and use‑case frameworks are needed for different sectors (e.g., medicine versus customer service) to ensure safe AI deployment?
Different applications have distinct risk profiles; tailored controls are essential for both productivity gains and public confidence.
Speaker: Paola
What governance structures should address the military use of AI and the associated loss‑of‑control risks?
Military AI poses existential and security threats that require oversight beyond the private sector, demanding clear public‑institution leadership.
Speaker: Brando Benifei
What further research is required on loss‑of‑control risks and military AI, as highlighted by Professor Bengio’s work?
Understanding these high‑impact risks is necessary to design effective safeguards and inform policy decisions.
Speaker: Brando Benifei
Which safety chapters and standards in the Code of Practice can be adopted internationally as reference points for other countries?
Identifying harmonised standards facilitates global alignment and mutual recognition, strengthening worldwide AI safety governance.
Speaker: Speaker 2

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.