How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance | India AI Impact Summit 2026

20 Feb 2026 14:00h - 15:00h


Session at a glance

Summary

This discussion focused on governing AI development and ensuring safe, beneficial deployment while maintaining innovation and building public trust. Brando Benifei explained how the European AI Act addresses frontier AI development through a code of practice rather than detailed legislative requirements. This code emerges from a co-legislative process involving civil society, developers of various sizes, and academia, producing rules that address both existential risks and systemic risks to democracy and citizens’ freedoms. The framework aims to build a culture of restraint while addressing threats such as misinformation, cyberbullying, and criminal uses of AI.


Benifei emphasized the importance of providing the European AI Office with adequate resources to implement the code of practice effectively, noting that while many companies already comply with risk mitigation aspects, regulatory bodies must match the power of private actors. When asked for recommendations to leaders, Sean O’Heigeartaigh stressed the need to create conditions for safe AI development, noting that leading companies express desire to take additional safety steps but feel constrained by competitive and geopolitical pressures. He advocated for international cooperation that brings all stakeholders, including Chinese developers, to the table as equals.


Paula Goldman recommended focusing on specific use cases, highlighting the gap between AI’s perceived power and organizations’ ability to deploy it in a trusted way across contexts such as medicine versus customer service. Benifei concluded that safety and widespread deployment should not be viewed as opposing forces but must proceed in parallel, particularly regarding military AI use and loss-of-control risks. The panel closed on the note that innovation and trust can coexist through continued international cooperation and adherence to shared standards like the Code of Practice.


Keypoints

Major Discussion Points:


Code of Practice Development: The discussion centers on implementing a co-legislative process involving civil society, developers, enterprises, and academia to create flexible yet clear rules for AI risk mitigation, particularly addressing systemic risks to democracy and fundamental rights.


Creating Conditions for Safe AI Development: The need to address competitive and geopolitical pressures that prevent AI companies from taking additional safety steps, despite their expressed willingness to do so under the right conditions.


Focus on Context-Specific Use Cases: Emphasizing the importance of understanding AI applications within specific domains (medicine vs. customer service) to build appropriate trust controls and unlock both productivity and public confidence.


International Cooperation and Public Institution Leadership: The urgent need for global collaboration on AI governance, particularly regarding military AI applications and loss of control risks, with public institutions taking the lead rather than relying on private companies.


Balancing Safety and Innovation: Ensuring that safety measures and widespread AI deployment can proceed in parallel rather than being viewed as competing priorities.


Overall Purpose:


The discussion aims to explore how to govern AI development effectively while maintaining innovation, with particular focus on the European AI Act’s Code of Practice as a potential model for international cooperation and trust-building in AI governance.


Overall Tone:


The tone is constructive and collaborative throughout, with speakers building on each other’s points rather than disagreeing. There’s an underlying sense of urgency about the need for action, but it remains professional and solution-oriented. The moderator maintains a positive framing by emphasizing that “innovation and trust can go together,” and the discussion concludes on an optimistic note about continued cooperation and the potential for the Code of Practice to serve as a standard for other countries.


Speakers

Paula Goldman: Area of expertise, role, and title not mentioned in the transcript


Brando Benifei: Parliament member (referenced as “from the Parliament side”), involved in AI Act legislation


Sean O’Heigeartaigh: Scholar and governance expert (self-described as “us as scholars, the role of us as governance experts”)


Moderator: Session moderator/facilitator


Additional speakers:


Professor Bengio: Referenced by other speakers but did not speak directly in this transcript portion. Mentioned as having given a speech and conducting research on AI risks including “loss of control risks”


Full session report

This panel discussion explored the challenges of governing artificial intelligence development while maintaining innovation and building public trust, with particular focus on the European Union’s AI Act and its Code of Practice as a potential model for international cooperation.


The European AI Act’s Code of Practice Approach


Brando Benifei explained the European Parliament’s strategic approach to AI governance, describing how the AI Act establishes a framework for a Code of Practice that involves stakeholders including civil society, AI developers, academic institutions, and enterprises. This approach aims to address what Benifei termed “systemic risks” – threats to democratic processes through misinformation campaigns, cyberbullying facilitated by AI systems, and criminal exploitation of AI technologies.


Rather than creating prescriptive legislative requirements, the framework seeks to “build a culture of restraint” among AI system operators while maintaining flexibility to address emerging challenges. Benifei emphasized the goal of being “very clear, not to have vague proposals that are very loosely interpretable, but maintaining a certain degree of flexibility.”


Importantly, Benifei noted that “many companies are already complying with many of the risk mitigation aspects that are in the code of practice.” However, he stressed that success depends on providing the European AI Office with adequate resources and authority to ensure regulatory bodies can operate “at the same level of this very power[ful] private actors.”


One-Minute Policy Recommendations


When asked for brief recommendations, the three speakers offered distinct perspectives:


O’Heigeartaigh’s Focus on Competitive Pressures: Sean O’Heigeartaigh highlighted a critical challenge where “the CEOs of the leading companies say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to.” He argued this should serve as “a red alarm bell” for policymakers, suggesting that governance frameworks should create conditions enabling responsible behavior rather than simply mandating it. He emphasized the need for genuinely international cooperation, bringing “everyone to the table as equals” including Chinese colleagues “making such impressive progress.”


Goldman’s Deployment and Trust Perspective: Paula Goldman identified “a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it.” She emphasized that trust requirements vary dramatically across applications – “the answer is very different in medicine than it is for customer service.” Goldman argued that by developing appropriate trust controls for specific use cases, organizations can “unlock not only productivity, but trust in the same breath.”


Benifei’s Call for Institutional Leadership: Benifei argued that meaningful progress on challenging aspects of AI governance – particularly military applications and loss of control risks – requires leadership from public institutions rather than private companies, not because companies “are bad. It’s not their role,” but because these challenges require coordination beyond commercial interests. He expressed particular urgency: “Don’t lose any more time. You need to sit down and use these occasions to do progress. We need that, and we do not need to lose any more time on this.”


Building Trust Through Governance


Throughout the discussion, speakers emphasized that building public trust is fundamental to successful AI governance. Benifei articulated this as the objective of the Code of Practice: “We want the code of practice to contribute in building trust among our citizens on the fact that we can innovate without sacrificing human rights and protection of our fundamental values.”


This trust-building connects to Goldman’s practical deployment challenges, suggesting that effective governance must address both high-level policy coordination and specific implementation details across different sectors and use cases.


International Cooperation and Future Directions


The moderator concluded by noting how the discussion demonstrated that “innovation and trust can go together” through appropriate governance frameworks, while acknowledging that continued international cooperation will be essential. The panel positioned the EU’s Code of Practice as a potential reference point for international standards, particularly encouraging examination of “the safety chapters” as elements that “other countries can sign up to.”


The discussion revealed both convergence and differences in approach among speakers. All agreed that safety and innovation can proceed in parallel, that international cooperation is essential, and that trust-building is central to effective AI governance. However, they offered different emphases: Benifei advocating for comprehensive legislative frameworks and institutional leadership, O’Heigeartaigh focusing on addressing competitive pressures through international coordination, and Goldman emphasizing practical, context-specific approaches to deployment challenges.


The urgency expressed by all speakers underscored that while approaches may vary, the need for coordinated action on AI governance is both immediate and critical for ensuring technological advancement serves broader societal interests.


Session transcript

Brando Benifei

So from different places in the world as a possible way of how to deal with the frontier aspects of AI development. Because instead of detailing in the legislative act every aspect of the risk mitigation that we ask now to the big developers, we decided to put in the AI act the provision of having this code of practice that would come, as Professor Bengio explained very clearly, from a co-legislative process involving civil society, developers, small, medium, big enterprises and academia in an exercise that would allow to build, more adherent to the present situation and evolution of the AI landscape, a set of rules to actually prevent existential risks, but also we call them systemic risks that deal with our democracies, with our freedom as citizens.

I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent risks of damaging our democratic processes by spreading misinformation or contrasting the cyberbullying or the criminal actions through the use of AI. And we… I think we built a very clear framework, because I think it’s very important to be clear, not to have vague proposals that are very loosely interpretable, but maintaining a certain degree of flexibility, we are clear on what we want to pursue. However, I think it will be very important, and so I need to subscribe to what Professor Bengio said at the end of his speech, that this is our effort from the Parliament side that we provide the European AI Office all the means to actually implement this code of practice, because it’s true that, as it was said, many companies are already complying with many of the risk mitigation aspects that are in the code of practice, but we need to be sure that we can, again, be at the same level of these very powerful private actors to do our part in making the rules that we decided applicable, effective, and so build trust. In the end, to conclude, this is our objective. We want the code of practice to contribute in building trust among our citizens on the fact that we can innovate without sacrificing human rights and protection of our fundamental values. Thank you.

Moderator

Thank you very much, Brando. Now, we still have very few minutes left, so I would like to exploit the opportunity of your presence to ask you, maybe if you can say in one minute, Sean, you have already said this, but maybe you can reformulate or come up with one recommendation for the leaders at this summit on the way that we can govern AI in the future? What would you say to them?

Sean O’Heigeartaigh

In one minute, I would say the role of our leaders, the role of us as scholars, the role of us as governance experts is to create the conditions for the safe and beneficial development of AI. Right now, I do not believe those conditions entirely exist because exactly of the things that the CEOs of the leading companies say. They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to. We should be hearing that. That should be a red alarm bell for us. And so what we need to do is figure out how do we create the conditions where it is possible for them.

to take these additional steps, to put additional focus on safety, to share expertise if needed, to coordinate and potentially even to slow down before critical points. And that doesn’t just mean European companies, it doesn’t just mean US companies, it also means our colleagues in China who are making such impressive progress. We need to figure out what is a way in which we can bring everyone to the table as equals and figure out how to cooperate on this challenge of our time.

Moderator

Paola?

Paula Goldman

I would say focus on the use cases. So right now there’s a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it. And the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service. And so I think when we start to focus on context, the right use cases, what trust controls look like in those domains, in local context, that’s where we unlock not only productivity, but trust in the same breath.

Brando Benifei

Well, in my opinion, we need to, again, as it was said earlier by Professor Eiger, it’s very difficult for me to figure the pronunciation. But anyway, we need to not contrast, not put in contrast safety at the highest terms and the focus on diffusion, on action, on impact, the title of this summit. I think this can go in parallel and it must go in parallel because there are areas of deployment of AI where without international cooperation we are facing huge risks. We hope that the code of practice will be a way to enlarge this discussion and build a reference point as I said but we need to go even further. We have issues regarding military use of AI.

We have issues regarding the loss of control risks that also Professor Bengio has been looking a lot at with his research that are in need of further cooperation. I don’t think this will come from the business but not because they are bad. It’s not their role. It must come from the public institutions and so we need to send this message to our leaders. Don’t lose any more time. You need to sit down and use these occasions to do progress. We need that, and we do not need to lose any more time on this.

Moderator

Thank you very much. So I would like to close this very interesting panel simply to say that what we have tried to discuss and conclude in this session is the fact that innovation and trust can go together and we can find different ways to make sure that trust is ensured or enabled in a particular country and in a particular continent, but we will need to continue working together and we are also happy to have presented to you some elements of the Code of Practice. Please take a look at that Code of Practice, in particular look at the safety chapters, and you will see that these are probably standards to which other countries can sign up to.

And thanks a lot for your participation. We look forward to continuing this discussion with you and with all the colleagues in this summit. Thank you very much and thanks to our panelists.


Brando Benifei

Speech speed

113 words per minute

Speech length

601 words

Speech time

319 seconds

Collaborative governance through a Code of Practice (AI Act)

Explanation

Benifei argues that the AI Act should rely on a co‑legislative, multi‑stakeholder Code of Practice that sets clear yet flexible risk‑mitigation rules. He also stresses that the European AI Office must be given the tools to enforce the code and build citizen trust.


Evidence

“Because instead of detailing in the legislative act every aspect of the risk mitigation that we ask now to the big developers, we decided to put in the AI act the provision of having this code of practice that would come, as Professor Bengio explained very clearly, from a co-legislative process involving civil society, developers, small, medium, big enterprises and academia in an exercise that would allow to build a more adherent to the… present situation and evolution.” “I think it will be very important, and so I need to subscribe to what Professor Bengio said at the end of his speech, that this is our effort from the Parliament side that we provide the European AI Office all the means to actually implement this code of practice, because it’s true that, as it was said, many companies are already complying with many of the risk mitigation aspects that are in the code of practice, but we need to be sure that we can, again, be at the same level of these very powerful private actors.” “Because I think it’s very important to be clear, not to have vague proposals that are very loosely interpretable, but maintaining a certain degree of flexibility, we are clear on what we want to pursue.”


Major discussion point

Collaborative governance through a Code of Practice (AI Act)


Topics

Artificial intelligence | The enabling environment for digital development


Urgent action on high‑risk AI applications and public‑institution leadership

Explanation

Benifei warns that military uses of AI and loss‑of‑control scenarios pose immediate existential risks, and calls for public institutions to act decisively without delay.


Evidence

“We have issues regarding military use of AI.” “We have issues regarding the loss of control risks that also Professor Bengio has been looking a lot at with his research that are in need of further cooperation.” “Don’t lose any more time.” “It must come from the public institutions and so we need to send this message to our leaders.”


Major discussion point

Urgent action on high‑risk AI applications and public‑institution leadership


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society



Sean O’Heigeartaigh

Speech speed

173 words per minute

Speech length

210 words

Speech time

72 seconds

Creating conditions for safe and beneficial AI development, with global cooperation

Explanation

O’Heigeartaigh stresses the need to alleviate competitive geopolitical pressure on AI CEOs so they can adopt extra safety measures, and calls for an equal‑footing partnership among the EU, US and China to manage AI risks collectively.


Evidence

“They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to.” “And so what we need to do is figure out how do we create the conditions where it is possible for them.” “Right now, I do not believe those conditions entirely exist because exactly of the things that the CEOs of the leading companies say.” “And that doesn’t just mean European companies, it doesn’t just mean US companies, it also means our colleagues in China who are making such impressive progress.” “We need to figure out what is a way in which we can bring everyone to the table as equals and figure out how to cooperate on this challenge of our time.”


Major discussion point

Creating conditions for safe and beneficial AI development, with global cooperation


Topics

Artificial intelligence | The enabling environment for digital development



Paula Goldman

Speech speed

136 words per minute

Speech length

103 words

Speech time

45 seconds

Focusing on context‑specific use cases to foster trust

Explanation

Goldman argues that trust controls must be tailored to the specific domain—such as medicine versus customer service—to close the gap between public perception and actual deployment, thereby unlocking both productivity and confidence.


Evidence

“And so I think when we start to focus on context, the right use cases, what trust controls look like in those domains, in local context, that’s where we unlock not only productivity, but trust in the same breath.” “And the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service.” “So right now there’s a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it.”


Major discussion point

Focusing on context‑specific use cases to foster trust


Topics

Artificial intelligence | Building confidence and security in the use of ICTs



Moderator

Speech speed

75 words per minute

Speech length

239 words

Speech time

190 seconds

Code of Practice as an international reference standard

Explanation

The moderator highlights that the safety chapters of the Code of Practice can serve as standards that other countries may adopt, reinforcing a common baseline for AI governance.


Evidence

“Please take a look at that Code of Practice, in particular look at the safety chapters, and you will see that these are probably standards to which other countries can sign up to.”


Major discussion point

Collaborative governance through a Code of Practice (AI Act)


Topics

Artificial intelligence | The enabling environment for digital development


Agreements

Agreement points

Innovation and safety can coexist and should proceed in parallel

Speakers

– Brando Benifei
– Paula Goldman
– Moderator

Arguments

Safety and widespread AI deployment can and must proceed in parallel rather than being in opposition


Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously


Innovation and trust can go together and different approaches can ensure trust in different countries and continents


Summary

All speakers agree that AI innovation and safety measures are not mutually exclusive but can and should be pursued simultaneously through appropriate frameworks and context-specific approaches


Topics

Artificial intelligence | The enabling environment for digital development


International cooperation is essential for effective AI governance

Speakers

– Brando Benifei
– Sean O’Heigeartaigh
– Moderator

Arguments

International cooperation is essential for addressing risks in military AI use and loss of control scenarios


Global cooperation must include all major players including China as equals at the negotiating table


Continued international collaboration is necessary for AI governance progress


Summary

All speakers emphasize the critical need for international cooperation and collaboration among all major stakeholders, including global powers like China, to address AI governance challenges effectively


Topics

Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs


Trust-building is fundamental to AI deployment and governance

Speakers

– Brando Benifei
– Paula Goldman
– Moderator

Arguments

Code of practice aims to build citizen trust that innovation can proceed without sacrificing human rights and fundamental values


Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously


Innovation and trust can go together and different approaches can ensure trust in different countries and continents


Summary

All speakers recognize that building trust among citizens and stakeholders is essential for successful AI deployment and governance, whether through codes of practice, context-specific approaches, or international frameworks


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


Similar viewpoints

Both speakers recognize that regulatory bodies and governance structures need adequate resources and frameworks to match the power and capabilities of private AI developers to ensure effective implementation of safety measures

Speakers

– Brando Benifei
– Sean O’Heigeartaigh

Arguments

European AI Office must be provided with adequate resources to effectively implement the code of practice


Need to create conditions for safe AI development by addressing competitive pressures that prevent companies from taking additional safety steps


Topics

Artificial intelligence | The enabling environment for digital development


Both speakers acknowledge that there are systemic barriers (competitive pressures for O’Heigeartaigh, deployment challenges for Goldman) that prevent optimal AI implementation and that solutions require addressing these specific contextual challenges

Speakers

– Sean O’Heigeartaigh
– Paula Goldman

Arguments

Current competitive and geopolitical pressures prevent companies from taking necessary safety measures despite their willingness


Focus on specific use cases and context-dependent trust controls rather than broad generalizations


Topics

Artificial intelligence | The enabling environment for digital development


Unexpected consensus

Companies want to implement safety measures but are constrained by external pressures

Speakers

– Sean O’Heigeartaigh
– Brando Benifei

Arguments

Current competitive and geopolitical pressures prevent companies from taking necessary safety measures despite their willingness


European AI Office must be provided with adequate resources to effectively implement the code of practice


Explanation

There is unexpected consensus that private companies are not inherently opposed to safety measures but are constrained by competitive pressures and regulatory gaps. This shifts the focus from adversarial regulation to creating enabling conditions for voluntary compliance


Topics

Artificial intelligence | The enabling environment for digital development


Context-specific approaches are more effective than broad generalizations

Speakers

– Paula Goldman
– Brando Benifei

Arguments

Focus on specific use cases and context-dependent trust controls rather than broad generalizations


Code of practice should emerge from co-legislative process involving all stakeholders to address systemic and existential risks


Explanation

Both speakers, despite different backgrounds, agree that tailored, stakeholder-inclusive approaches work better than one-size-fits-all solutions, suggesting a convergence toward more nuanced governance models


Topics

Artificial intelligence | The enabling environment for digital development


Overall assessment

Summary

The speakers demonstrate strong consensus on key principles: innovation and safety can coexist, international cooperation is essential, trust-building is fundamental, and context-specific approaches are more effective than broad generalizations. There is also agreement that current barriers to AI safety implementation stem from systemic issues rather than company unwillingness.


Consensus level

High level of consensus with significant implications for AI governance – suggests that stakeholders across different sectors and regions are converging on similar principles for AI governance, emphasizing collaborative, flexible, and trust-based approaches rather than adversarial regulatory frameworks. This consensus could facilitate more effective international cooperation and implementation of AI governance measures.


Differences

Different viewpoints

Approach to AI governance – regulatory framework vs. use case focus

Speakers

– Brando Benifei
– Paula Goldman

Arguments

Code of practice should emerge from co-legislative process involving all stakeholders to address systemic and existential risks


Focus on specific use cases and context-dependent trust controls rather than broad generalizations


Summary

Benifei advocates for a comprehensive legislative framework through co-legislative processes and codes of practice, while Goldman emphasizes focusing on specific use cases and context-dependent approaches rather than broad regulatory frameworks


Topics

Artificial intelligence | The enabling environment for digital development


Source of international cooperation leadership

Speakers

– Brando Benifei
– Sean O’Heigeartaigh

Arguments

International cooperation is essential for addressing risks in military AI use and loss of control scenarios


Need to create conditions for safe AI development by addressing competitive pressures that prevent companies from taking additional safety steps


Summary

Benifei argues that international cooperation must come from public institutions rather than businesses, while O’Heigeartaigh focuses on creating conditions that enable companies to take safety steps they want to take


Topics

Artificial intelligence | The enabling environment for digital development


Unexpected differences

Role of business vs. public institutions in AI safety

Speakers

– Brando Benifei
– Sean O’Heigeartaigh

Arguments

International cooperation is essential for addressing risks in military AI use and loss of control scenarios


Current competitive and geopolitical pressures prevent companies from taking necessary safety measures despite their willingness


Explanation

Unexpectedly, while both speakers want international cooperation, Benifei dismisses business role (‘not because they are bad. It’s not their role’) while O’Heigeartaigh sees companies as willing partners constrained by systemic pressures


Topics

Artificial intelligence | The enabling environment for digital development


Overall assessment

Summary

The main disagreements center on governance approaches (comprehensive regulatory frameworks vs. use-case specific approaches) and the role of different actors (public institutions vs. enabling company cooperation) in AI safety


Disagreement level

Low to moderate disagreement level – speakers share common goals of safe AI development and international cooperation but differ on implementation strategies. These disagreements are constructive and complementary rather than fundamental, suggesting different but potentially compatible approaches to achieving shared objectives


Partial agreements


Both speakers agree on the need for international cooperation in AI governance, but disagree on the mechanism – Benifei emphasizes public institution leadership while O’Heigeartaigh focuses on bringing all players including companies to the table as equals

Speakers

– Brando Benifei
– Sean O’Heigeartaigh

Arguments

International cooperation is essential for addressing risks in military AI use and loss of control scenarios


Global cooperation must include all major players including China as equals at the negotiating table


Topics

Artificial intelligence | The enabling environment for digital development


Both speakers agree that safety and innovation/productivity can go together, but disagree on approach – Benifei advocates for broad parallel development while Goldman emphasizes context-specific deployment

Speakers

– Brando Benifei
– Paula Goldman

Arguments

Safety and widespread AI deployment can and must proceed in parallel rather than being in opposition


Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously


Topics

Artificial intelligence | The enabling environment for digital development


Similar viewpoints

Both speakers recognize that regulatory bodies and governance structures need adequate resources and frameworks to match the power and capabilities of private AI developers to ensure effective implementation of safety measures

Speakers

– Brando Benifei
– Sean O’Heigeartaigh

Arguments

European AI Office must be provided with adequate resources to effectively implement the code of practice


Need to create conditions for safe AI development by addressing competitive pressures that prevent companies from taking additional safety steps


Topics

Artificial intelligence | The enabling environment for digital development


Both speakers acknowledge that there are systemic barriers (competitive pressures for O’Heigeartaigh, deployment challenges for Goldman) that prevent optimal AI implementation and that solutions require addressing these specific contextual challenges

Speakers

– Sean O’Heigeartaigh
– Paula Goldman

Arguments

Current competitive and geopolitical pressures prevent companies from taking necessary safety measures despite their willingness


Focus on specific use cases and context-dependent trust controls rather than broad generalizations


Topics

Artificial intelligence | The enabling environment for digital development


Takeaways

Key takeaways

Innovation and trust in AI development can and must proceed in parallel rather than being viewed as opposing forces


A code of practice should emerge from a co-legislative process involving all stakeholders (civil society, developers, academia) to address both existential and systemic risks to democracy


Current competitive and geopolitical pressures prevent AI companies from taking additional safety steps they claim they want to take


AI governance should focus on specific use cases and context-dependent trust controls rather than broad generalizations, as requirements differ significantly across domains like medicine versus customer service


International cooperation including all major players (US, Europe, China) as equals is essential for addressing global AI risks


The European AI Office must be adequately resourced to effectively implement the code of practice and ensure compliance


Resolutions and action items

Provide the European AI Office with all necessary means to implement the code of practice effectively


Leaders should use summit opportunities to make concrete progress on international AI cooperation without further delay


Stakeholders should examine the Code of Practice, particularly the safety chapters, as potential standards for international adoption


Create conditions that enable AI companies to take additional safety steps by addressing competitive pressures


Unresolved issues

How to practically bring China and other major AI players to the negotiating table as equals for global cooperation


Specific mechanisms for addressing military use of AI through international cooperation


How to resolve the competitive pressures that currently prevent companies from implementing desired safety measures


Detailed implementation strategies for context-specific trust controls across different AI use cases


How to ensure the code of practice remains flexible yet clear enough to be effectively enforceable


Suggested compromises

Balance safety requirements with innovation needs by allowing parallel development rather than sequential prioritization


Use a co-legislative approach for the code of practice that involves all stakeholders rather than top-down regulation


Focus on context-specific governance rather than one-size-fits-all approaches to accommodate different use cases and domains


Thought provoking comments

Right now, I do not believe those conditions entirely exist because exactly of the things that the CEOs of the leading companies say. They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to. We should be hearing that. That should be a red alarm bell for us.

Speaker

Sean O’Heigeartaigh


Reason

This comment is deeply insightful because it reframes the AI governance challenge from a regulatory compliance issue to a structural market failure problem. Rather than viewing companies as resistant to safety measures, O’Heigeartaigh identifies that companies themselves are signaling they want to do more but are constrained by competitive pressures. This shifts the focus from punitive regulation to creating enabling conditions for responsible behavior.


Impact

This comment fundamentally redirected the conversation from discussing what rules to impose on companies to how to create systemic conditions that allow companies to act responsibly. It introduced the concept that the problem isn’t corporate resistance but rather structural barriers, which influenced subsequent speakers to focus on cooperation and institutional solutions rather than enforcement mechanisms.


I would say focus on the use cases. So right now there’s a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it. And the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service.

Speaker

Paula Goldman


Reason

This comment is thought-provoking because it identifies a critical disconnect between AI’s perceived capabilities and practical deployment challenges. Goldman shifts the discussion from abstract governance principles to concrete implementation realities, highlighting that trust isn’t a universal concept but must be contextually defined based on specific use cases and domains.


Impact

Goldman’s intervention moved the conversation from high-level policy frameworks to practical implementation challenges. Her emphasis on context-specific trust controls provided a more granular perspective that complemented the broader institutional discussions, adding a layer of operational realism to the governance debate.


We need to not contrast, not put in contrast safety at the highest terms and the focus on diffusion, on action… I think this can go in parallel and it must go in parallel… I don’t think this will come from the business but not because they are bad. It’s not their role. It must come from the public institutions.

Speaker

Brando Benifei


Reason

This comment is insightful because it challenges the false dichotomy between safety and innovation that often dominates AI policy discussions. Benifei argues for a both/and approach rather than either/or, while also clearly delineating institutional responsibilities – acknowledging that expecting businesses to lead on global coordination is unrealistic without being judgmental about their motivations.


Impact

Benifei’s comment synthesized the previous speakers’ points while adding urgency about institutional responsibility. It reinforced O’Heigeartaigh’s point about structural conditions while emphasizing that creating these conditions is fundamentally a public sector responsibility, not a market-driven solution. This helped crystallize the discussion around the need for proactive government leadership.


Overall assessment

These key comments collectively transformed the discussion from a traditional regulatory compliance framework to a more sophisticated understanding of AI governance as a complex coordination problem. O’Heigeartaigh’s insight about competitive pressures reframed companies as potentially willing partners rather than resistant actors, Goldman’s focus on use cases grounded the abstract governance discussion in practical implementation realities, and Benifei’s synthesis emphasized both the compatibility of safety and innovation and the crucial role of public institutions. Together, these comments elevated the conversation from ‘how do we control AI companies’ to ‘how do we create systems that enable responsible AI development across all stakeholders,’ representing a more mature and nuanced approach to AI governance that acknowledges both the complexity of the challenge and the need for collaborative solutions.


Follow-up questions

How can we create conditions where AI companies can take additional safety steps under competitive geopolitical pressure?

Speaker

Sean O’Heigeartaigh


Explanation

This addresses the fundamental challenge that AI company CEOs say they want to take more safety measures but feel unable to do so under competitive pressure, which represents a critical governance gap


How can we bring all global AI stakeholders, including Chinese colleagues, to the table as equals for cooperation?

Speaker

Sean O’Heigeartaigh


Explanation

This highlights the need for inclusive international cooperation that goes beyond just European and US companies to include all major AI development regions


How do we develop context-specific trust controls for different AI use cases and domains?

Speaker

Paula Goldman


Explanation

This addresses the gap between AI’s perceived power and organizations’ ability to deploy it safely, recognizing that trust requirements vary significantly across different applications like medicine versus customer service


How can we advance international cooperation on military use of AI?

Speaker

Brando Benifei


Explanation

This represents a critical area where international coordination is needed but currently lacking, with significant security implications


How do we address loss of control risks in AI systems through international cooperation?

Speaker

Brando Benifei


Explanation

This refers to fundamental safety research areas that require coordinated effort beyond what individual companies can address


How can the EU Code of Practice serve as a reference point for broader international AI governance?

Speaker

Brando Benifei


Explanation

This explores whether the EU’s regulatory approach can be scaled or adapted for global AI governance frameworks


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.