How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance – India AI Impact Summit 2026
20 Feb 2026 14:00h - 15:00h
Summary
The panel discussed how to govern artificial intelligence through a European-wide code of practice that aims to balance innovation with democratic safeguards [1-2].
Benifei explained that the AI Act will not enumerate every technical mitigation, but will rely on a co-legislative code of practice developed together with civil society, developers, enterprises and academia to reflect the evolving AI landscape [2]. The code is intended to create a culture of restraint that limits misinformation, cyber-bullying and criminal misuse of AI [3]. He stressed that the framework must be clear yet flexible, avoiding vague language while specifying the objectives [4]. Implementation will be entrusted to the European AI Office, which must have sufficient powers to enforce the rules and ensure private actors comply, thereby building public trust [5].
Sean argued that leaders must first create the conditions that allow safe and beneficial AI development, noting that such conditions are currently lacking [12-13]. He warned that CEOs feel constrained by competitive geopolitical pressure and that this should be seen as a red alarm bell [14-16]. His recommendation was to enable global coordination, including Europe, the United States and China, so that companies can adopt additional safety steps, share expertise, and even pause development at critical points [17-19].
Paola highlighted the importance of focusing on specific use cases, arguing that the gap between AI’s perceived power and its trustworthy deployment lies in context-dependent validation [21-23]. She suggested that defining trust controls for domains such as medicine versus customer service will unlock both productivity and confidence in the technology [24].
Benifei reiterated that safety and diffusion must proceed in parallel, because certain AI deployments pose huge cross-border risks without international cooperation [26-28]. He pointed to military applications and loss-of-control scenarios as areas that require public-institutional leadership rather than relying on industry self-regulation [30-34]. He concluded with an urgent call for leaders to stop delaying and to use summit opportunities to advance concrete progress [35-37].
The moderator summed up that innovation and trust can coexist, urging participants to consult the code’s safety chapters as potential standards for other regions and to continue collaborative work [38-44].
Key points
Major discussion points
– A co-legislative Code of Practice is essential for AI risk mitigation.
Benifei explains that the AI Act will rely on a code of practice developed together with civil society, developers, enterprises and academia to create “a culture of restraint” and prevent democratic and existential risks while remaining flexible enough to adapt to the evolving AI landscape [2-4][7].
– Effective enforcement requires strong public-sector capacity.
He stresses that the European AI Office must be equipped to implement the code, ensuring that private actors comply and that the rules become “applicable, effective, and… build trust” [5].
– International cooperation is needed to create safe development conditions.
Sean calls for a global effort, spanning Europe, the United States, China and beyond, to give companies the “conditions… to take additional steps, to put additional focus on safety, to share expertise… and potentially even to slow down before critical points” [12-19].
– Trust must be anchored in context-specific use-case governance.
Paola highlights the gap between the perceived power of AI and its trustworthy deployment, arguing that “when we start to focus on context, the right use cases… we unlock not only productivity, but trust in the same breath” [21-24].
– High-risk domains such as military AI and loss-of-control scenarios demand urgent public-institution action.
Benifei warns that without international cooperation “we are facing huge risks” in areas like military use and loss-of-control, and that “it must come from the public institutions… Don’t lose any more time” [28-37].
Overall purpose / goal of the discussion
The panel aimed to shape a pragmatic governance framework for artificial intelligence that balances rapid innovation with the protection of democratic values, human rights, and safety. Participants presented the EU’s forthcoming Code of Practice, highlighted the need for enforceable mechanisms, and offered concrete recommendations for leaders to foster trustworthy, globally coordinated AI development.
Overall tone and its evolution
– The conversation begins with a constructive and collaborative tone, emphasizing the creation of inclusive, flexible rules and the building of trust [2-4].
– It shifts to a cautiously urgent tone as speakers stress the current lack of safe conditions, competitive pressures on CEOs, and the necessity of immediate, coordinated action [12-19][28-37].
– The closing remarks return to a forward-looking, hopeful tone, reaffirming that innovation and trust can coexist and encouraging continued collaboration [38-43].
Thus, the discussion moves from collaborative framing, through urgent calls for action, to a reaffirmation of optimism about achieving trustworthy AI governance.
Speakers
– Paola – Secretary General, European Digital Media Observatory; gender advocate. Expertise: media disinformation, trust in AI use-case contexts. [S1][S3]
– Speaker 2 – Moderator/Chair of the panel at the AI Impact Summit; responsible for guiding the discussion and posing questions to panelists. [S4][S5][S6]
– Brando Benifei – Member of the European Parliament; focuses on AI legislation, the European AI Office, code of practice, and building trust while safeguarding human rights. [S7][S8]
– Sean – Director, Future Conflict & Cyber Security, International Institute for Strategic Studies; expertise in AI governance, cybersecurity policy, and safe AI development. [S11]
Additional speakers:
– Professor Yoshua Bengio – Prominent AI researcher, referenced both for explaining the co-legislative process behind the code of practice and for his research on systemic risks and loss-of-control AI; the transcript’s “Professor Banjo” appears to be a mis-transcription of his name. (no external source provided)
– Professor Eiger – Academic referenced (pronunciation difficulty noted) in the context of AI safety and diffusion discussions. (no external source provided)
Opening & Code of Practice – Brando Benifei opened the panel by explaining that the EU’s forthcoming AI Act will be complemented by a Code of Practice, developed through a co-legislative process involving civil society, developers, enterprises of all sizes and academia [2]. The Code is intended to create a “culture of restraint” that can address both existential and systemic risks to democracy and citizens’ freedoms [2-4]. Benifei stressed that the Code must be clear yet flexible, avoiding vague language while articulating concrete objectives [4]. Its purpose is to foster public confidence that innovation can proceed without compromising human rights and fundamental values [7]. To make the Code effective, he called for the European AI Office to be equipped with sufficient powers and resources so that it can enforce the rules, verify compliance of private actors and thereby build trust [5-6].
Moderator’s Prompt – After Benifei’s remarks, the moderator asked Sean to formulate a one-minute recommendation for summit leaders [13].
Sean’s Recommendation – Sean argued that current conditions are inadequate for safe AI development. He noted that CEOs of leading AI firms feel constrained by intense geopolitical competition and therefore cannot take the extra safety steps they would like [13-14]. Describing this as a “red alarm bell,” he urged leaders to create enabling conditions that allow companies to prioritise safety, share expertise and, when necessary, pause development at critical junctures [15-18]. Crucially, Sean called for global coordination that brings together Europe, the United States, China and other regions on an equal footing [18-19].
Paola’s Focus – Paola shifted the discussion to domain-specific trust mechanisms. She highlighted a “gigantic gap” between the public’s perception of AI’s power and organisations’ ability to deploy it responsibly [22-23]. By concentrating on appropriate use cases, recognising that trust controls differ markedly between sectors such as medicine and customer service, she argued that productivity and confidence can be unlocked simultaneously [24].
Benifei’s Follow-up – Benifei reiterated that safety and diffusion must proceed in parallel; they should not be presented as opposing goals [26-28]. He warned that without international cooperation, AI deployments in high-risk areas, particularly military applications and loss-of-control scenarios, pose “huge risks” [30-31]. He asserted that responsibility for addressing these systemic threats lies with public institutions, not with businesses, and issued an urgent plea for political leaders to act without further delay [32-37].
Moderator’s Closing – The moderator concluded by affirming that innovation and trust can coexist and that the Code’s safety chapters could serve as reference standards for other jurisdictions [38-41]. Participants were invited to review the Code, continue collaborative work, and maintain momentum in building trustworthy AI [42-44].
Points of Consensus
* Inclusive co-legislative Code of Practice – Benifei and the moderator agreed that the Code should be drafted with input from civil society, industry and academia, be clear yet adaptable, and act as an international benchmark [2-4][7][40].
* Public-sector leadership on high-risk AI – Benifei emphasised that governments must create the conditions for safe development and lead coordination on high-risk domains such as military AI and loss-of-control scenarios [30-34].
* Parallel pursuit of safety and diffusion – Both Benifei and the moderator stressed that safety measures need not hinder AI diffusion; both can advance together [26-28][38-42].
* Global coordination – Sean and Benifei called for bringing major AI actors from the EU, the US and China to a common platform to enable coordinated safety actions [18-19][28-29].
Points of Nuance
* Governance of high-risk AI – Benifei argued that public institutions, not businesses, must drive cooperation on military and loss-of-control AI [30-34], whereas Sean emphasised that leaders must first create conditions that enable companies to adopt additional safety steps [12-19].
* Scope of the trust framework – Benifei advocated a single, flexible Code of Practice for the whole AI ecosystem [2-4], while Paola insisted that trust must be built through sector-specific use-case controls tailored to domains such as healthcare or customer service [21-24].
* Urgency versus condition-building – Benifei’s call to “not lose any more time” urged immediate political action [35-37]; Sean, by contrast, warned that without first establishing enabling conditions, rapid action may be ineffective [13-19].
Key takeaways
* The Code of Practice is designed to mitigate systemic and existential AI risks while remaining adaptable to technological change.
* Its inclusive co-legislative development aims to make it a reference model for other countries, with the safety chapters offering potential international standards.
* International cooperation is essential; leaders must devise mechanisms that allow firms worldwide to prioritise safety despite competitive pressures.
* Public institutions, rather than private firms alone, should steer coordination on high-risk domains such as military AI and loss-of-control scenarios.
* Trust is best achieved when context-specific use-case controls are defined, recognising that risk profiles differ across sectors.
* Innovation and trust are not mutually exclusive; they can be pursued together when safety and diffusion are pursued in parallel.
Resolutions and Action Items
1. Empower the European AI Office with the necessary authority, resources and enforcement tools to implement the Code [5-6].
2. Encourage political leaders to convene forums that create conditions for companies to adopt extra safety measures, even under geopolitical competition [13-18].
3. Promote ongoing international dialogue on AI safety, especially concerning military use and loss-of-control risks [30-34].
4. Invite stakeholders to review the Code’s safety chapters and consider adopting them as standards in their own jurisdictions [40-41].
5. Continue multistakeholder collaboration to align innovation with trust-building measures across domains [38-44].
Unresolved Issues
* Concrete mechanisms for achieving effective global coordination and enforcement of the Code across regions.
* Specific policies to alleviate the geopolitical competition that hinders firms from implementing safety steps.
* Detailed governance frameworks for military AI and autonomous-system loss-of-control risks.
* Methods to ensure consistent compliance verification among small, medium and large AI developers.
* Operational guidelines for domain-specific trust controls and their integration into existing workflows.
Suggested Compromises
* Pursue safety and diffusion in parallel, allowing rapid deployment while maintaining robust risk mitigation.
* Adopt flexible, co-legislative rules that are clear in intent yet adaptable to diverse contexts and technological evolutions.
* Combine public-sector leadership with industry participation, ensuring businesses contribute expertise without bearing sole responsibility for high-risk governance.
Thought-Provoking Comments and Their Impact
* Benifei’s introduction of a co-legislative Code of Practice highlighted a novel shift from rigid regulation to a flexible, multi-stakeholder framework aimed at building trust while protecting rights [2]. This set the foundation for the entire discussion, prompting subsequent speakers to consider operationalisation and enforcement.
* Sean’s “red alarm bell” about CEOs’ inability to take extra safety steps under geopolitical pressure expanded the debate from European policy design to the practical constraints faced by industry, underscoring the need for global, equitable coordination [13-18].
* Paola’s emphasis on use-case specificity redirected attention to concrete deployment challenges, arguing that trust must be anchored in sector-specific controls, thereby linking high-level policy to on-the-ground practice [21-24].
* Benifei’s later warning not to contrast safety with diffusion and his call for public-institutional leadership on military AI elevated the stakes, reinforcing the urgency of international cooperation and the moral imperative for swift political action [26-37].
Overall, the discussion progressed from establishing a collaborative, flexible regulatory foundation, through recognising real-world industry constraints and the necessity of global coordination, to affirming that innovation and trust can be jointly pursued when concrete, context-aware safeguards are put in place. The panel’s consensus on inclusive governance, public-sector leadership and parallel safety-diffusion provides a solid basis for future AI policy development, while the identified nuances point to areas where further negotiation and research are required.
Transcript
So from different places in the world as a possible way of how to deal with the frontier aspects of AI development. Because instead of detailing in the legislative act every aspect of the risk mitigation that we ask now to the big developers, we decided to put in the AI Act the provision of having this code of practice that would come, as Professor Bengio explained very clearly, from a co-legislative process involving civil society, developers, small, medium and big enterprises, and academia, in an exercise that would allow to build a set of rules more adherent to the present situation and evolution of the AI landscape, to actually prevent existential risks, but also what we call systemic risks that deal with our democracies, with our freedom as citizens.
I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent risks of damaging our democratic processes by spreading misinformation or contrasting the cyberbullying or the criminal actions through the use of AI. And we… I think we built a very clear framework, because I think it’s very important to be clear, not to have vague proposals that are very loosely interpretable, but maintaining a certain degree of flexibility, we are clear on what we want to pursue. However, I think it will be very important, and so I need to subscribe to what Professor Bengio said at the end of his speech, that this is our effort from the Parliament side that we provide the European AI Office all the means to actually implement this code of practice, because it’s true that, as it was said, many companies are already complying with many of the risk mitigation aspects that are in the code of practice, but we need to be sure that we can, again, be at the same level of these very powerful private actors to do our part in making the rules that we decided applicable, effective, and so build trust. In the end, to conclude, this is our objective. We want the code of practice to contribute in building trust among our citizens on the fact that we can innovate without sacrificing human rights and protection of our fundamental values. Thank you.
Thank you very much, Brando. Now, we still have very few minutes left, so I would like to exploit the opportunity of your presence to ask you, maybe if you can say in one minute, Sean, you have already said this, but maybe you can reformulate or come up with one recommendation for the leaders at this summit on the way that we can govern AI in the future? What would you say to them?
In one minute, I would say the role of our leaders, the role of us as scholars, the role of us as governance experts is to create the conditions for the safe and beneficial development of AI. Right now, I do not believe those conditions entirely exist, exactly because of the things that the CEOs of the leading companies say. They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to. We should be hearing that. That should be a red alarm bell for us. And so what we need to do is figure out how do we create the conditions where it is possible for them to take these additional steps, to put additional focus on safety, to share expertise if needed, to coordinate and potentially even to slow down before critical points. And that doesn’t just mean European companies, it doesn’t just mean US companies, it also means our colleagues in China who are making such impressive progress. We need to figure out what is a way in which we can bring everyone to the table as equals and figure out how to cooperate on this challenge of our time.
Paola?
I would say focus on the use cases. So right now there’s a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it. And the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service. And so I think when we start to focus on context, the right use cases, what trust controls look like in those domains, in local context, that’s where we unlock not only productivity, but trust in the same breath.
Well, in my opinion, we need to, again, as it was said earlier by Professor Eiger, it’s very difficult for me to figure the pronunciation. But anyway, we need to not contrast, not put in contrast safety at the highest terms and the focus on diffusion, on action, on impact, the title of this summit. I think this can go in parallel and it must go in parallel, because there are areas of deployment of AI where without international cooperation we are facing huge risks. We hope that the code of practice will be a way to enlarge this discussion and build a reference point, as I said, but we need to go even further. We have issues regarding military use of AI.
We have issues regarding the loss of control risks, which Professor Bengio has also been looking at a lot in his research, that are in need of further cooperation. I don’t think this will come from the business, but not because they are bad. It’s not their role. It must come from the public institutions, and so we need to send this message to our leaders. Don’t lose any more time. You need to sit down and use these occasions to make progress. We need that, and we do not need to lose any more time on this.
Thank you very much. So I would like to close this very interesting panel simply to say that what we have tried to discuss and conclude in this session is the fact that innovation and trust can go together, and we can find different ways to make sure that trust is ensured or enabled in a particular country and in a particular continent, but we will need to continue working together. We are also happy to have presented to you some elements of the Code of Practice. Please take a look at that Code of Practice, in particular look at the safety chapters, and you will see that these are probably standards to which other countries can sign up.
And thanks a lot for your participation. We look forward to continuing this discussion with you and with all the colleagues in this summit. Thank you very much, and thanks to our panelists.
Claims and supporting evidence
“The EU’s forthcoming AI Act will be complemented by a Code of Practice, developed through a co‑legislative process involving civil society, developers, enterprises of all sizes and academia.”
The knowledge base states that the Code of practice should emerge from a co-legislative process involving all stakeholders to address systemic and existential risks [S13].
“The Code is intended to create a ‘culture of restraint’ that can address both existential and systemic risks to democracy and citizens’ freedoms.”
S13 confirms that the Code aims to address systemic and existential risks through a multi‑stakeholder approach.
“The Code must be clear yet flexible, avoiding vague language while articulating concrete objectives.”
S14 highlights the importance of concentrating on concrete applications and clear definitions, supporting the claim of clarity and flexibility.
“Its purpose is to foster public confidence that innovation can proceed without compromising human rights and fundamental values.”
S105 describes a human‑rights‑based approach that protects fundamental rights while promoting innovation, aligning with the stated purpose.
“The European AI Office should be equipped with sufficient powers and resources so that it can enforce the rules, verify compliance of private actors and thereby build trust.”
S107 reports that the European Commission is establishing a European AI Office with enforcement powers and a separate budget line for the AI Act.
“CEOs of leading AI firms feel constrained by intense geopolitical competition and therefore cannot take the extra safety steps they would like.”
S117 references Dario Amodei’s observation that companies are competing intensely with China, limiting their ability to devote resources to safety.
“Leaders should create enabling conditions that allow companies to prioritise safety, share expertise and, when necessary, pause development at critical junctures.”
S116 notes that international cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficult, underscoring the need for enabling conditions.
“Global coordination that brings together Europe, the United States, China and other regions on an equal footing is essential.”
S116 calls for international cooperation on AI safety standards, highlighting the challenge of achieving equal footing among major regions.
“Safety and diffusion must proceed in parallel; they should not be presented as opposing goals.”
S105 emphasizes that protecting fundamental rights and promoting innovation are complementary objectives, confirming the parallel‑track view.
“Without international cooperation, AI deployments in high‑risk areas—particularly military applications and loss‑of‑control scenarios—pose huge risks.”
S116 mentions the need for cooperation on high‑risk AI applications and the dangers of loss‑of‑control, providing additional nuance to the risk claim.
“The Code’s safety chapters could serve as reference standards for other jurisdictions.”
S104 discusses frameworks that foster innovation while protecting rights and can act as reference models for other jurisdictions, supporting this assertion.
The panel shows strong convergence on four main fronts: (1) the Code of Practice should be co‑legislatively drafted, clear yet adaptable, and serve as an international safety benchmark; (2) governments must lead on high‑risk AI areas and create conditions for safe development; (3) innovation and trust are not mutually exclusive and require ongoing multistakeholder collaboration; (4) global coordination across regions is essential. An unexpected but notable consensus links practical, use‑case‑driven trust measures with high‑level policy goals.
High – the speakers largely reinforce each other’s positions, indicating a shared understanding that a mixed approach of inclusive policy design, governmental leadership, and international cooperation is necessary to balance AI innovation with safety and public trust. This consensus strengthens the prospect of coordinated action on AI governance at both regional and global levels.
The panel shows strong consensus on the need for trustworthy, safe AI, but diverges on who should lead high‑risk governance, whether a broad code of practice or sector‑specific controls is preferable, and how quickly action should be taken. These disagreements are moderate and revolve around implementation pathways rather than the underlying goal.
Moderate disagreement – while all speakers share the same overarching objective (trustworthy AI), they differ on leadership, methodology, and timing, which could affect the speed and coherence of future AI governance initiatives.
The discussion was shaped by a progression from establishing a collaborative, flexible regulatory foundation (Benifei’s opening) to confronting real‑world constraints and the need for global coordination (Sean), then to grounding trust in sector‑specific use cases (Paola), and finally to stressing urgent, high‑risk applications and the primacy of public‑sector leadership (Benifei’s closing). Each of these pivotal comments introduced new dimensions—process design, geopolitical pressure, practical deployment, and security‑critical risks—that redirected the conversation, deepened analysis, and built consensus around the central theme that innovation and trust must evolve together through inclusive, context‑aware, and internationally coordinated governance.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.