WS #438 Digital Dilemma: AI Ethical Foresight vs Regulatory Roulette
27 Jun 2025 09:00h - 10:15h
Session at a glance
Summary
This discussion at IGF 2025 focused on the challenge of regulating artificial intelligence in ways that are both ethical and innovation-enabling, examining whether current AI governance frameworks adequately balance these competing demands. The session brought together speakers from multiple regions to explore the tension between AI ethics as surface-level principles versus regulation grounded in ethical foresight that creates real accountability.
Alexandra Krastins Lopes emphasized that ethical foresight must be operationalized rather than theoretical, requiring proactive mechanisms built into organizational governance structures throughout AI systems’ lifecycles. She highlighted the need for global AI governance platforms that can harmonize minimum principles while respecting local specificities, noting that regulatory asymmetry creates concerning scenarios of unaddressed risks and international fragmentation. Moritz von Knebel identified key regulatory gaps, including a lack of technical expertise among regulators, reactive rather than anticipatory frameworks, and jurisdictional conflicts that create races to the bottom. He advocated for adaptive regulatory architectures and the establishment of a shared language around fundamental concepts like AI risk and systemic risk.
Vance Lockton stressed the importance of regulatory cooperation focused on influencing AI design and development rather than just enforcement, noting the vast differences in regulatory capacity between countries. Phumzile van Damme called for binding international law on AI governance and highlighted the exclusion of Global South voices, advocating for democratization across AI’s entire lifecycle including development, profits, and governance. Yasmin Alduri proposed moving from risk-based to use-case frameworks and emphasized the critical need to include youth voices in multi-stakeholder approaches.
The speakers collectively agreed that overcoming simple dichotomies between innovation and regulation is essential, as safe and trustworthy frameworks actually enable rather than hinder technological advancement.
Key points
## Major Discussion Points:
– **Ethical Foresight vs. Surface-Level Ethics**: The distinction between treating AI ethics as a superficial add-on versus embedding ethical foresight as a foundational, operational process that anticipates risks before harm occurs and involves all stakeholders from design to deployment.
– **Regulatory Fragmentation and the Need for Shared Language**: The challenge of different countries and regions developing incompatible AI governance frameworks, creating regulatory gaps, jurisdictional conflicts, and the absence of consensus on basic definitions like “AI systems,” “systemic risk,” and regulatory approaches.
– **Inclusion and Global South Participation**: The exclusion of marginalized communities, particularly from the Global South and Africa, from AI governance discussions due to policy absence, resource constraints, and the dominance of Western perspectives in shaping AI regulatory frameworks.
– **Cross-Border Regulatory Cooperation**: The mismatch between globally operating AI platforms and locally enforced laws, requiring new approaches to international regulatory cooperation that focus on influencing design and development rather than just enforcement actions.
– **Innovation vs. Regulation False Dichotomy**: Challenging the narrative that regulation stifles innovation, with speakers arguing that safe, trustworthy frameworks actually enable innovation by providing predictability and building consumer trust, similar to other regulated industries like aviation and nuclear power.
## Overall Purpose:
The discussion aimed to explore how to develop AI governance frameworks that are both ethically grounded and innovation-friendly, moving beyond reactive approaches to create proactive, inclusive regulatory systems that can address the global, borderless nature of AI technology while respecting local contexts and values.
## Overall Tone:
The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving rather than adversarial debate. Speakers acknowledged significant challenges while remaining optimistic about finding solutions through multi-stakeholder cooperation. The tone was academic yet practical, with participants building on each other’s points and seeking common ground. There was a sense of urgency about addressing AI governance gaps, but this was balanced with realistic assessments of the complexity involved in international coordination and inclusive policymaking.
Speakers
**Speakers from the provided list:**
– **Vance Lockton** – Senior technology policy advisor for the office of the Privacy Commissioner in Canada
– **Alexandra Krastins Lopes** – Lawyer at VLK Advogados (Brazilian law firm), provides legal counsel on data protection, AI, and cybersecurity, advising multinational companies, including big techs, on juridical matters and government affairs. Previously served at the Brazilian Data Protection Authority and founded a civil society organization called Laboratory of Public Policy and Internet
– **Deloitte consultant** – AI Governance Consultant at Deloitte, founder of Europe’s first youth-led non-profit focusing on responsible technology
– **Phumzile van Damme** – Integrity Tech Ethics and Digital Democracy Consultant, former lawmaker
– **von Knebel Moritz** – Chief of staff at the Institute of AI and Law, United Kingdom
– **Moderator** – Digital Policy and Governance Specialist
– **Online moderator 1** – Mohammed Umair Ali, graduate researcher in the field of AI and public policy, co-founder and coordinator for the Youth IGF Pakistan
– **Online moderator 2** – Haissa Shahid, information security engineer in a private firm in Pakistan, co-organizer for the Youth IGF, ISOC RIT career fellow
**Additional speakers:**
– **Yasmeen Alduri** – AI Governance Consultant, Deloitte (mentioned in the moderator’s introduction but appears to be the same person as “Deloitte consultant” in the speakers list)
Full session report
# AI Governance Discussion at IGF 2025
## Balancing Ethics and Innovation in Artificial Intelligence Regulation
### Executive Summary
This session at the Internet Governance Forum brought together international experts to examine how to develop AI governance frameworks that balance ethical responsibility with technological advancement. The hybrid session featured speakers from Canada, Brazil, the United Kingdom, South Africa, and Pakistan, representing perspectives from government agencies, law firms, consultancies, and civil society organizations.
The discussion challenged the assumption that regulation necessarily stifles innovation, with speakers arguing that well-designed governance frameworks can actually enable technological progress. While participants agreed on the need for international cooperation and multi-stakeholder engagement, they offered different approaches to implementation, from flexible principle-based frameworks to binding international agreements.
### Session Structure and Participants
The session experienced initial technical difficulties with audio setup for online participants. The discussion was moderated by a Digital Policy and Governance Specialist, with online moderation support from Mohammed Umair Ali and Haissa Shahid from Youth IGF Pakistan.
**Key Speakers:**
– **Yasmeen Alduri**: AI Governance Consultant at Deloitte and founder of Europe’s first youth-led non-profit focusing on responsible technology
– **Alexandra Krastins Lopes**: Lawyer at VLK Advogados in Brazil, previously with the Brazilian Data Protection Authority
– **Phumzile van Damme**: Integrity Tech Ethics and Digital Democracy Consultant and former lawmaker from South Africa
– **Vance Lockton**: Senior Technology Policy Advisor for the Office of the Privacy Commissioner in Canada
– **Moritz von Knebel**: Chief of Staff at the Institute of AI and Law in the United Kingdom
### Key Themes and Speaker Perspectives
#### Moving Beyond Theoretical Ethics to Operational Implementation
Alexandra Krastins Lopes emphasized that ethical foresight “cannot be just a theoretical exercise” but must involve “proactive mechanisms” embedded in organizational structures. She argued for the need to operationalize ethics through concrete governance processes rather than treating it as an add-on to existing AI development.
Yasmeen Alduri reinforced this point by highlighting implementation challenges, noting that “if the developers cannot implement the regulations because they simply do not understand them, we need translators.” She advocated for moving from risk-based to use case-based regulatory frameworks that better account for how AI systems are actually deployed.
#### Regulatory Fragmentation and International Cooperation
Moritz von Knebel provided a stark assessment of current AI regulation, describing it as having “more gaps than substance” with “knowledge islands surrounded by oceans” of uncertainty. He identified critical shortcomings including lack of technical expertise among regulators and reactive rather than anticipatory frameworks.
Vance Lockton emphasized the importance of regulatory cooperation that focuses on “influencing design and development rather than just enforcement actions.” He highlighted the vast differences in regulatory capacity between countries as a critical factor in designing effective international cooperation mechanisms.
Alexandra Krastins Lopes proposed addressing fragmentation through “global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities.”
#### Global South and Youth Inclusion
Phumzile van Damme delivered a powerful critique of current AI governance processes, arguing that “there hasn’t been a proper democratization” of AI. She noted that while there may have been “somewhat of a democratization in terms of access,” true democratization requires addressing “the lifecycle” including “democratization of AI development, democratization of AI profits, and democratization of AI governance itself.”
Van Damme called for “binding international law on AI platform governance” and highlighted the systematic exclusion of African and Global South voices from AI governance discussions.
Yasmeen Alduri emphasized the exclusion of youth voices, noting that “multi-stakeholder approaches” often “forget about youth” despite young people having solutions and being disproportionately affected by AI systems.
#### Challenging the Innovation-Regulation Dichotomy
Multiple speakers challenged the narrative that regulation inherently stifles innovation. Moritz von Knebel argued that “you can have innovation through regulation,” citing examples from the nuclear and aviation industries where “they took off after we had safety standards in place.”
This perspective reframes regulation as potentially enabling innovation by providing predictability, building trust, and creating level playing fields that reward responsible development practices.
### Areas of Agreement and Disagreement
**Consensus Areas:**
– Need for multi-stakeholder engagement across civil society, private sector, and public sector
– Importance of international cooperation while respecting local contexts
– Inadequacy of current reactive regulatory approaches
– Need for adaptive rather than static frameworks
**Different Approaches:**
– **Regulatory mechanisms**: Flexible principle-based approaches versus binding international law
– **Framework design**: Risk-based versus use case-based categorization
– **Implementation focus**: Early-stage development influence versus enforcement cooperation
### Questions and Audience Interaction
The session included questions from online participants about practical implementation challenges and the role of friction between tech companies and regulators. Yasmeen Alduri noted that such friction is expected and that “democratic states specifically live from this friction,” while emphasizing the need for constructive dialogue.
Participants also discussed the challenge of balancing innovation incentives with necessary safety regulations without creating regulatory arbitrage where companies seek the most permissive jurisdictions.
### Practical Recommendations
Speakers proposed several concrete steps:
– Establishing global AI governance platforms for harmonizing minimum principles
– Creating independent technical advisory groups and capacity building programs for regulators
– Developing international dialogues to establish consensus on key AI definitions and terminology
– Creating spaces for meaningful co-creation between different stakeholders
– Shifting narratives around AI from inevitability to focus on desired societal outcomes
### Unresolved Challenges
The discussion highlighted several ongoing challenges:
– How to achieve definitional consensus across different regulatory traditions and cultures
– Addressing vast disparities in regulatory capacity and resources between countries
– Developing practical mechanisms for meaningful Global South and youth inclusion
– Implementing effective cross-border enforcement while respecting national sovereignty
### Conclusion
The discussion demonstrated both the potential for collaborative AI governance and the complexity of implementation. While speakers found common ground on fundamental principles, they revealed significant challenges in translating shared values into coordinated action. The session’s emphasis on inclusion and its challenge to the innovation-regulation dichotomy provide important foundations for future AI governance efforts, though substantial work remains to develop practical, implementable frameworks that serve global needs.
Session transcript
Moderator: I hope everyone can hear me. Hello, everyone. Good morning and welcome to this session, Digital Dilemma: AI Ethical Foresight versus Regulatory Roulette. I’m a Digital Policy and Governance Specialist, and this is a conversation that I’ve been looking forward to for many weeks. Today is the final day of IGF 2025, and it’s fitting that we’re closing with one of the most important global governance challenges. How to regulate artificial intelligence in a way that’s both ethical and enabling? Is that even possible? As the awareness of AI’s power has risen, the dominant response has been to turn to AI ethics. Ethics, simply put, is a set of moral principles. We’ve seen governments, companies, and institutions roll out principles, guidelines, and frameworks meant to guide responsible AI development. But here is the problem. When ethics is treated as a surface-level add-on, it often lacks the teeth to create real accountability or change, and that’s where regulation grounded in ethical foresight becomes essential. We’ll be unpacking the challenge today through insights from multiple regions and perspectives. I’m joined in person by three incredible speakers: Yasmeen Alduri, AI Governance Consultant at Deloitte; Alexandra Krastins Lopes, lawyer at VLK Advogados; and Phumzile van Damme, Integrity Tech Ethics and Digital Democracy Consultant, who will be joining us in just a short while. We also have a vibrant virtual component; my colleagues Mohammed Umair Ali and Haissa Shahid are the online moderators. Umair, over to you to introduce yourselves and our online speakers.
Online moderator 1: Hello, everyone. Hello, everyone. Just mic testing. Am I audible? Yes. Great. So welcome, everyone. Welcome to our session. I’m Mohammed Umair Ali. I’ll be serving as the co-host and online moderator for this very session. Briefly about my introduction. I’m a graduate researcher in the field of AI and public policy. I’m also the co-founder and the coordinator for the Youth IGF Pakistan. And by my side is Haissa Shahid. Haissa, can you turn on your camera?
Online moderator 2: Hello, everyone. Sorry, I can’t turn on the camera right now. But hello from my side. I’m Haissa Shahid from Pakistan, and I’m also the co-organizer for the Youth IGF. Along with that, I am serving as an information security engineer in a private firm in Pakistan. And I’m also an ISOC RIT career fellow this year. So, I’ll bring forward the session ahead. Thank you.
Online moderator 1: And moving towards the introduction of our speakers, we are joined by speakers from the United Kingdom and Canada. To start with, Mr. Vance Lockton, who is the senior technology policy advisor for the Office of the Privacy Commissioner in Canada. And Mr. Moritz von Knebel from the United Kingdom, who is the chief of staff at the Institute of AI and Law. Please welcome Mr. Moritz. Thank you so much for having me today. Hello, everyone. Welcome aboard, everyone. Over to you, Taiba. Taiba, am I audible?
Moderator: Thank you so much for letting me know. I’m going to repeat myself. We’ll begin with a round of questions for each of our five speakers. One question each, about five minutes per speaker. Towards the end, we will be taking pictures, speakers with the audience, so please don’t run off too quickly at the end. Alexandra, I’ll start with you. So many of the AI frameworks we see today cite ethics as a very important aspect. But when you look at it a bit closely, ethics often feels like an add-on rather than a foundational principle. From your perspective, is that enough? And more importantly, what does it look like to truly embed ethical foresight into governance structures so that it shapes the direction of AI development from the very start?
Alexandra Krastins Lopes: Great, thanks. It’s an honor to contribute to this important discussion. While I proudly founded a civil society organization called Laboratory of Public Policy and Internet, and served for a few years in the Brazilian Data Protection Authority, today I speak from the private sector perspective. I represent VLK Advogados, a Brazilian law firm where I provide legal counsel on data protection, AI, and cybersecurity, advising multinational companies, including big techs, on juridical matters and government affairs. And when we talk about embedding ethical foresight into AI governance, we’re not talking about simply including a list of ethical principles in a regulation or policy document. We’re talking about building a proactive and ongoing process, one that is operational, not theoretical. Ethical foresight means anticipating risks before harm occurs. So it requires mechanisms that allow organizations to ask the right questions before AI systems are deployed. Are these systems fair? Are they explainable? Do they reinforce historical discrimination? Are they safe across different contexts? And this ethical foresight must be built into organizational governance structures, including AI ethics committees, impact assessments, and internal accountability protocols that operate across the entire lifecycle of AI systems. It also requires clarity about roles and responsibilities within organizations. Therefore, ethical foresight is not just the job of the legal or the compliance team; it needs engagement from product designers, data scientists, and business leaders, and most importantly, it needs board-level commitment. At the same time, foresight must be context-aware. Many global frameworks are still shaped primarily by Global North countries’ perspectives, with assumptions about infrastructure, economic and regulatory capacity, and also risk tolerance that do not necessarily reflect the realities of the global majority. So the regulatory debate should be connected to social context. In Brazil, for example: historical inequalities, specific challenges for entrepreneurship, and limited institutional enforcement power. Besides that, embedding ethical foresight in governance requires flexibility around abstract principles, that is, setting minimum requirements rather than imposing a one-size-fits-all model, with tools that can be adapted to specific conditions, such as voluntary codes of conduct, soft law mechanisms, AI ethics boards, and flexible risk-based approaches. Additionally, national policies may have different strategic goals, such as technology development. This regulatory asymmetry creates a highly concerning scenario of unaddressed ethical and social risks, international regulatory fragmentation, and direct normative conflicts. Embedding AI foresight requires international coordination with minimum safeguards, promotion of technological progress, investment, and global competitiveness. Considering this scenario, the proposal I bring for reflection today is the need to establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities. Besides that, we should incentivize tools such as voluntary codes of practice, which allow businesses to anticipate and mitigate risks without restricting innovation.
Additionally, there should be parameters for cross-border regulatory cooperation for enforcement and accountability, since jurisdictional fragmentation makes enforcement difficult, especially when platforms operate globally, but accountability is local. In my experience advising companies, I’ve seen that while they are willing to act responsibly and ethically, international cooperation and legal certainty are essential for that to happen. Thank you.
Moderator: Thank you so much, Alexandra. And as she mentioned, ethical foresight is not just the job of a legal team, but truly a multi-stakeholder process. Now we’ll turn to our online speaker. Moritz, if you’re there: you’ve been keeping a very close eye on how AI regulatory frameworks are evolving and also where they’re falling short. From where you sit, what are some of the key loopholes or blind spots in current AI regulation, and what would it take in practical terms to close these gaps and build a governance model that builds digital trust and resilience? Over to you.
von Knebel Moritz: Yeah, thank you, and thanks for having me. People have often asked this question: what are the regulatory gaps that we see? And I think that assumption already is a bit flawed and mistaken, because it would assume that around these gaps we have well thought out systems and processes and frameworks. And I don’t think we have. And so I will employ a metaphor used by a former professor of mine, who said that he does not have knowledge gaps, because the gap would be just too large. But he has knowledge islands, and those are surrounded by a lot of ocean that we don’t know about. And then if those islands are well connected enough, then you can travel back and forth and it still kind of works. But there are more gaps than there is actual substance. So with that caveat, let’s look at where those gaps are clearest, or where the ocean is deepest, so to speak. And I think on the domestic level, I would identify two big items. One is a lack of technical expertise. Regulators often lack the deep technical understanding that is needed to effectively oversee those rapidly evolving AI systems. That’s especially true in countries that have historically not had the kind of capacity and resources to build up that infrastructure and those institutions. That then in turn creates a reliance on industry for setting standards, for self-reporting, for self-assessing the safety of their models. And that makes it very difficult to craft meaningful technical standards on the political level. That’s one of these very, very deep, deep sea areas. Another one is that we often see reactive frameworks. So current approaches largely respond to known harms and try to track when a harm occurs. But they do little in terms of anticipating emerging risks. And if you think that that is where a huge amount of the risk in the future is going to come from, then frameworks need to be adaptive to that. And the pace of AI development, which proceeds at a breakneck speed, consistently outstrips the pace at which regulatory systems can adapt, which is a huge challenge. On the international level, I think there’s one big item that then feeds into other problems, which is jurisdictional conflicts, or races to the bottom. And then that turns into sometimes unjustified, sometimes justified fears of overregulation. And so different national approaches, but also international approaches like the one that the EU has taken, create a lot of complexity for compliance and regulation. And China’s governance system, which focuses on algorithms, differs fundamentally from the EU’s, which is more risk-based. The US and the UK have emphasized innovation-friendly approaches, and that creates a regulatory patchwork that is difficult to navigate, but also creates room for regulatory arbitrage. So similar to how you would move your company to a country that has very low tax rates, you might do the same when it comes to regulation. And so that then creates incentives also for countries to water down and weaken their regulation. All of this is amplified by the fact that we just do not know enough. So a lot of the terms and concepts that are used, like risk and systemic risk, are insufficiently defined, and there’s no consensus on what they actually mean. So on that maybe a bit gloomy note, I’ll move on to what we can do to close these gaps, or rather to build bridges between the islands that we already have, with the EU AI Act and some other frameworks that exist around the globe.
One thing is, I think we need adaptive regulatory architectures, which means that rather than having static rules, we need frameworks that can evolve with technology, and rather quickly. And so that could mean that you have principle-based regulations, and then technical implementation standards can be updated through streamlined processes in secondary regulation. You could also have regulatory sandboxes, where you try out new approaches and see what works, and have very quick feedback loops, which maybe governments aren’t historically good at, but are definitely getting better at as we speak. There’s a general need for capacity building. So I’ve talked about how expertise is often lacking in governments, but it’s also lacking elsewhere. So creating independent technical advisory groups that include the perspectives of different people could be useful. And that also means that we need inclusive and novel approaches to engage stakeholders. We need dialogues and diplomatic efforts. So I’ve said that some terms are just not sufficiently defined. And so without a shared language, the landscape will remain fragmented and incentives to cut corners will remain. We see some work on this, with international reports coming out and the international dialogues on AI safety, for instance, but much more of this is needed to establish consensus on the key questions, because only then can we start to think about regulatory frameworks. And one last thing, because I do work on AI, so I have to flag this: there might also be ways to leverage AI for these processes. And I think relatively little work has gone into this, and much more is needed. There are many more gaps and many more solutions that we could talk about, but I’ll leave it at that for now. And I’m happy to receive questions about any or all of these in the Q&A.
Moderator: Thank you, Moritz. I have a question. You mentioned a shared language. Can you elaborate more on that? Because it sounded really interesting.
von Knebel Moritz: Yeah, so this runs the gamut. It basically spans from the very fundamental: what is risk? What is an AI system? Luckily, the OECD provides a definition that most people are happy with. But when it comes to, I’ll give you a concrete example, the definition of systemic risks in the European Union under the AI Act, this will influence a lot of how future AI models are governed and deployed and developed. And since that is the case, it really matters what people think of as systemic risk. And it’s not a field that has centuries of research behind it, but rather decades. And so some people understand very different things under systemic risk than others do. And so unless you have some consensus on this, it’ll be very, very difficult. That’s on the fine-grained regulation and a specific legislative framework. But there are also high-level conversations about what are even the most important risks, and attempts to create consensus on this. Again, different cultures, different countries see AI in a very different way. And if you consider that, that makes it difficult to cooperate and collaborate internationally. And so establishing a shared language around what AI risk means will be needed. And I’m specifically focused on risk here, but the same goes for benefits: what are the benefits that we care most about? And that, again, also touches on the technical expertise, because a lot of this requires a kind of technical knowledge; you need to know what accountability is and what the adversarial robustness of an AI model is. So that further complicates things.
Moderator: Thank you so much, Moritz. Very enlightening. And the main key takeaway I could see was that different countries see AI in very different ways, and I think we really need to explore this further. Vance, I’m going to go to you now. Given how borderless technology is, there’s a real challenge around jurisdiction and enforcement, and I know Moritz also touched upon it: platforms operate globally, but the laws don’t. What opportunities do you see for cross-border regulatory cooperation, and how can regulatory cooperation help tackle jurisdictional conflicts and enforcement barriers, especially for digital platforms that may be lightly regulated in one country but highly impactful globally? Over to you, Vance.
Vance Lockton: Sure, thanks. So, it’s a very interesting question when we get into this idea of regulatory cooperation, because, I mean, as Moritz has been flagging, not only are the understandings of artificial intelligence different across countries, there’s such a difference in what regulatory frameworks actually exist and what can be applied. Once we’re into the realm of enforcement, certainly, there are some countries that have attempted to make shared enforcement efforts, but I don’t think those are necessarily where we need to focus our efforts. They’re not necessarily going to be the most effective way to address some of these issues. I think what really needs to be happening, when we’re talking about regulatory enforcement, rather than shared enforcement actions, is: how can regulators have more influence over the design and development and deployment decisions that are being made when these systems are being created or adopted in the first place? And that’s something that a lot of organizations are starting to try to wrap their heads around. I look at a document like the OECD’s framework for anticipatory governance of emerging technologies. I don’t have any particular connection to it, and the OPC, my organization, hasn’t particularly endorsed this document, but it just kind of creates this useful framework for me. The elements that they set out for what this regulatory governance needs to look like say, you know, it needs to establish guiding values, have strategic intelligence, bring in stakeholder engagement, agile regulation, international cooperation. Again, I don’t want to walk through this framework in particular, but I do think there are useful pieces that come out of it. Because from a regulatory perspective, one of the challenges that we always face is that we don’t really get to set the environment in which we work. Generally speaking, regulators are not going to be the ones who are drafting the laws that they’re overseeing. We may be able to provide input into their drafting, but by and large, we’re going to be handed a piece of legislation over which we have to have oversight. But even within that, there are elements of discretion as far as what kinds of resources or what strategies can be applied to particular problems. And in a lot of statutes, there will be considerations with respect to appropriateness or reasonableness. I think that’s going to be a critical piece for AI governance, particularly for the challenge that you’ve set out, this idea of international impacts of AI systems, where, you know, frankly, as a regulator in a Western nation, we aren’t necessarily going to have that direct visibility, that direct insight, into the impacts of a system on the global majority. So we aren’t necessarily going to have that full visibility into what safeguards we need to be pushing for, or what appropriate purposes or what reasonableness actually means in the context of these AI systems, when we aren’t seeing what the true impacts of these systems are. So having those kinds of dialogues amongst regulators to share that cultural knowledge is going to be a critical piece of this going forward.
And, you know, there’s just the sheer regulatory cooperation of understanding amongst regulators who has what capacities in AI. As Moritz flags, for a lot of regulators, there isn’t necessarily going to be that technical expertise. And that can be quite understandable. You know, for a lot of newer data protection authorities, or some of the less resourced authorities, you might have a dozen staff for the entire regulatory agency. And that’s not a dozen staff on AI; that’s a dozen staff covering privacy or data protection as a whole. And obviously, data protection isn’t the only piece of AI regulation; it’s just the piece I’m most familiar with, which is why I’m focused on it. So, you know, you might have a dozen staff. Canada has about 200 staff, but we have a dedicated tech lab that’s designed to be able to get into these systems, take them apart, and really understand what’s happening under the hood of a lot of these AI systems. Then you look at something like the United Kingdom’s Information Commissioner’s Office, which has well over a thousand staff, and a lot of programs set up for ways that you can have more innovative engagements with regulators, around things like regulatory sandboxes and things along those lines. So, again, being able to understand amongst regulators who has what capacity within AI is going to be a critical piece. And I think having that shared understanding of who can do what, and what the true risks and benefits are going to be, is such a critical piece, because one of the biggest challenges that I’m seeing from a regulatory perspective, again, from a Western regulatory perspective, I’ll say, is that there’s this narrative coming out that AI is such an essential piece of the future economic prosperity, or even the future security, of a nation. There’s this idea that any regulation that creates restrictions on the development of AI is a threat, as opposed to an opportunity, or as opposed to a necessary thing to get to responsible innovation. And so we need to have that real understanding of what the potential risks are, what the potential harms are, and realistically what benefits we’re trying to achieve, that are not simply, again, making stock prices for a handful of companies go up or making the overall GDP of a country rise.
Moderator: Vance, I’m so sorry. You have 30 seconds left.
Vance Lockton: Yeah, no problem. Not a problem. So, again, we need to have that ability to counter that narrative. And, you know, within countries, regulators are getting better at having that cooperation amongst various sectors. So we are working towards having better cooperation internationally and finding those soft mechanisms, finding those ways to have influence over design and development. Again, it’s a work in progress, but my overall message is going to be we just need to reframe regulatory cooperation away from enforcement cooperation and to finding opportunities to influence that design and development. And I’ll stop there.
Moderator: Thank you so much, Vance. Phumzile, thank you so much for joining us. I’d love to talk to you about power and participation, shifting the lens to inclusion, which is, you know, a very important part of artificial intelligence. We see that AI governance conversations are often dominated by a handful of powerful perspectives. In your view, whose values are shaping the way AI is being regulated now, and what would it take to ensure that the voices of marginalized or underrepresented communities are not just included but also influence decision-making?
Phumzile van Damme: Thank you. I would just like to start with what I view as the ideal on AI governance. I think previous speakers have highlighted a major challenge in that there is, one, a lack of shared language. There is the idea that AI, like all technology, knows no jurisdictional borders. It’s global. So what I think the ideal would be is some form of international law around AI governance. And through that process, perhaps run through the UN, there is the ability to get a more inclusive process that includes the language, the ideals, the ethical systems of various countries. So I think that is the direction we need to be going. I think the experience over the last years with social media and the governance around that, and the difficulty with different countries, in some instances not having any laws, regulations or policies in place and others having them, is that there is a lack of proper oversight. So I think it’s an opportunity now for international law to be drafted around that. But as it stands right now, outside of what I think is the utopia, my utopia, what I think would be the great thing to have around AI governance, I think there are two challenges of inclusion right now. One is that in the Global South, particularly in Africa, there is policy absence. I know in many African countries, AI is just not a consideration, and that’s not because it’s viewed as unimportant. There is a conflict with bread and butter issues; most of these countries are more focused on those types of issues. So because of that, there is an exclusion from those discussions, because that’s just a process that hasn’t taken place in those countries. So I’d like to see more African countries, countries in the Global South, including themselves in those conversations by beginning the process of drafting those policies, crafting those laws. And I always say that it may appear incredibly complex, but I think there is a way of looking at what other jurisdictions have done and amending those laws to reflect local situations. I don’t think it’s a situation where there needs to be a reinvention of the wheel. I used to be a lawmaker, and I don’t want to say this in a polite way, but there’s a bit of, I don’t want to say that tech is, I don’t know, tech literacy is not the right phrase, but it’s just not something that’s…
Moderator: You can be candid. It’s okay.
Phumzile van Damme: Yeah. It just seems like a very difficult topic to handle, and it’s just avoided. So just to encourage that it’s not that difficult: there are frameworks in existence that can be amended to fit the requirements in each country. So there’s that exclusion. And I think there’s… the theoretical misnomer around what has been seen as a democratization of AI, and I think that is the wrong phrase to use; there hasn’t been a democratization. While there may have been somewhat of a democratization in terms of access, in terms of access to tools, gen AI tools particularly, like ChatGPT and all of those, there hasn’t been a proper democratization. So if we’re going to talk democratization and AI governance, we need to talk about it through the lifecycle. So there needs to be, I wrote this down, not only democratization of AI use, but democratization of AI development, so that in the creation of AI tools really diverse voices are included in those design choices, and in large language models there is someone at the table who says: you need to look at African languages, you need to look at languages from different countries. There needs to be a democratization of AI profits. You know, it can’t be a situation where countries are merely seen as sources of income and there’s no way to share profits, so there needs to be a democratization of AI profits. And there needs to be a democratization of AI governance itself. So the way I summarize it: I think there needs to be a deliberate effort by the Global South, by African countries, by other countries, to insert themselves forcefully into the discussion, and that requires them beginning those processes where they haven’t begun, and indeed for there to be a deliberate inviting of those voices to say, come take part in the discussions. And the discussion is not only at the stage where the technology is being deployed, but through the entire lifecycle.
Moderator: Thank you. Very insightful. I actually have a question, but we’re running a bit short on time, so I’m probably going to leave it towards the end. But I really liked the way you said that inclusion and diversity of perspectives is not actually very difficult to achieve; it just seems very difficult, in the real world, to make sure that regulations or frameworks actually include them. Thank you so much. Yasmin, we’re going to move on to you to wrap up this round. We’ve talked a lot about the challenges, about what’s not working, but let’s imagine what could work. If you had a blank slate to design an ideal AI regulatory framework, one that’s forward-looking, ethically grounded and innovation-friendly, I know it’s a huge ask, but what are the three most essential features you would include?
Deloitte consultant: Good morning everyone. My name is Yasmin Alduri. I’m an AI governance consultant at Deloitte, and I’m the founder of Europe’s first youth-led non-profit focusing on responsible technology. So we’re basically trying to bring young voices to the responsible tech field and have them share their ideals, their fears and their ideas on how to make this world a better place, let’s just say like that. I’m really happy van Damme actually brought up the point of international law, because yes, in an ideal world we would all get along, and in an ideal world we would have some kind of definition on AI governance, specifically in international law. But there are two aspects that are really, really, I wouldn’t say problematic, but that make this utopia hard to achieve. It’s not impossible to achieve, but it’s hard to achieve. The first part is definitions. We need to get to a point where everyone defines the same aspects the same way. So just the definition of AI systems is a huge, huge issue. We saw this with the European laws already, with the EU AI Act. Then the second part, which is also an issue, is: are countries actually upholding international law already? If we look at the past years, we will see a huge increase in countries actually not upholding international law. So the first part would be to actually question that and to push for more accountability from countries. So this is one aspect. I also wanted to bring up one point, because I love the fact that Saba Tiku Beyene also brought up the aspect of inclusion. We need to set a base, which is infrastructure. If we don’t have infrastructure in the different countries that are lacking access to AI, we’re not able to democratize access to AI. So this would also be a base for me. Okay, let’s come to the actual question: ideal frameworks. As I said, in an ideal world we wouldn’t have an approach that is so reactive, as Moritz called it earlier, but one that is adaptive. What we have right now is a risk-based approach. We basically look at AI systems and we categorize them into different risk classes, or straight up just prohibit them. And while I do see why we’re doing that, the ideal form would actually be a use case framework. So we actually look at AI systems and at how they’re used and in what sectors they’re used. The idea behind it is that the same AI system can be used in different use cases and in different ways, which means it can have different harms for different stakeholders. And the idea here is to make sure that we can actually use these AI systems, but that we really do account for all the different scenarios that could happen, and a use case approach would actually make this easier for us. It would also make it easier to keep in check the impact assessments that you have to do as someone who’s actually implementing those laws. That brings me to the second part, which is: we need frameworks that are actually implementable and understandable. So what exactly do I mean? At the end of the day, those regulations and laws are being implemented by the people who are developing AI systems, because those are the ones who are actually building it and those are the ones who have to uphold it. So if the developers cannot implement the regulations because they simply do not understand them, we need translators.
So people like me or Alex, who go down and try to explain, or be the bridge between tech and law. But in an ideal world we would have developers who already understand this. We would have developers who would be able to implement each aspect, each principle, in a way that is not only clearly defined but understandable. So this is the second part. The third part, and I know that some of you will laugh already because you have heard this so many times over the past days: we need to include all stakeholders, multi-stakeholder approaches. So civil society, the private sector and the public sector need to come together. Now here’s the thing where I disagree with most multi-stakeholder approaches. We talk a lot about including all stakeholders, yet we forget to include the youth. And this is one of the biggest issues I see, because in my work within the Responsible Technology Hub I see not only the potential of youth, but I see that they mostly even understand the technology better and they know how to implement it better. So we need to include them, not only to future-proof our legislation, but to make sure that they’re included in the sense of: these are their fears, these are not only their ideas but these could be possible solutions. And from my work I can tell you that youth, young people, and young professionals have a lot of ideas for solutions to the issues we’re facing; they’re just not heard. So we need to give them platforms. We need an older generation that leads by example but also gives platform. We need a young generation that refreshes this discourse every time. And we need spaces that are spaces of real co-creation. So these kinds of discussions that we’re having right now, they’re a great base. But we need spaces where we actually create, where we have enough space to talk, to discuss, but at the same time to bring different disciplines and different sectors together. In my work specifically, I see this working very well: having a space where academia comes, where the public sector comes, where the private sector comes, and where we have intergenerational solutions for issues, and we’re actually building solutions together. So with that in mind, those would be the three aspects of, I would say, the ideal AI governance framework for me specifically.
Moderator: Right on time. Thank you so much to all our speakers. So many valuable takeaways. Now we open it up to the floor. The mics are on either side of the room. Please introduce yourself. And if you’re joining virtually, feel free to post in the chat or raise your hand. Umair, do we have any questions?
Online moderator 1: Yes, absolutely. We do have questions. I have received questions in the direct message and the chat box, but do we have any questions from the on-site participants before we move on to the online participants?
Moderator: We’re warming up here. So, you know, start with the question.
Online moderator 1: Right, so one of the questions that I received is, and I think it might be for Moritz to, you know, maybe answer this better. The question is that, just like the concept of internet fragmentation, do you see that we are heading towards an era of AI fragmentation due to the absence of consensus-based definitions and the absence of a shared language? And if that is the case, how do you suggest we move forward with it?
von Knebel Moritz: Yes, I do think that we do see fragmentation, but I also think that view is maybe a little bit reductive. This often gets portrayed as a fragmentation along country lines or jurisdictions, like EU, US, China, whereas there are often splits between different factions and their interests and what they see as the ideal governance system, and all these factions have input. And so I see a fragmentation happening on multiple levels; it goes beyond different countries and jurisdictions. I am fairly concerned about fragmentation because, as I said, it creates the wrong incentives, to race to the bottom. It doesn’t build a great base for trust between different partners, again, different countries, but also different sectors: the non-profit sector, industry, academia, government. So yes, I see this as a concern. In terms of what can be done, I think it goes back to identifying the areas where people already agree and then building up from there. There’s never going to be perfect agreement and total unison on these things, but I think sometimes there’s more room and more overlap than people are willing to give credit for. So in the foreign policy or diplomatic domain, Track 1.5 and Track 2 dialogues have been pretty successful, I think, in identifying these areas of consensus. And, I mean, at the end of the day, it’s also events and fora like the IGF, where people get together and hopefully, at some point, establish a shared language, or at least hear how other people see things. I think sometimes people are also too ambitious here, right? They want everybody to speak the same language, and cultural anthropologists and other people will tell us that that is not actually what we should be aiming for. It is okay for people to speak different languages. But if you dial back your ambition and say, no, we don’t have to have complete agreement, the first step is hearing how the other side sees things. Then I think having more of these conferences and, as other people have said, diverse representation of voices at these conferences and events can be very useful in charting a path forward towards a shared language. It’s an iterative, painful process. It’s not going to come overnight, but that doesn’t mean that we shouldn’t invest energy in it.
Online moderator 1: All right. So, there is another question here, and it says: can any of the speakers point towards any exemplary AI policy that balances innovation and ethics? I understand this is quite a broad, open-ended question, but if anyone would like to take that up.
Moderator: Anyone from the onsite speakers would like to take this question?
Deloitte consultant: Perfect. Let me take this one first. So, can I point to any regulation out there that brings in good governance? With the news lately, even there we still have discussions on how to implement specific laws, and there is still a lot of room. So for example, with the Middle East, we barely have any official AI governance there. We see a trend of discussions. We saw this at the last IGF, where a lot of Middle Eastern and North African states came together and started having a discussion, but this discussion relied more on principles rather than actual frameworks that could be implemented. So there is still, for whoever asked this question, there is still a lot of room to actually co-create and to bring regulation into the market.
Moderator: Great, Haissa, I believe you got a question from the chat as well. Do you want to go?
Online moderator 2: Yeah, we have another question. So Ahmed Khan asked: there has been a recent trend of pushback from tech companies against what they call excessive regulation, which restricts innovation and goes against development. So is there a way forward with harmony, or are we going to see a similar friction in the AI space as we have seen in the social media and tech space versus government regulation?
Moderator: Anyone from the on-site speakers want to take this ahead or have a comment to make on this? Going to go online as well. Moritz or Vance, do you want to answer this question? Haissa, can you please repeat the question, though? We didn’t quite understand it, as the connection got a bit fuzzy.
Online moderator 2: So the question is: is there a way forward with harmony, or are we going to see a similar friction in the AI space as we have seen in social media and tech spaces versus government regulations?
Moderator: So what I understand is that, you know, in the past we’ve seen that there has been some friction around platform regulation or platform governance as well. Are we going to be seeing a similar sort of friction in AI frameworks and AI regulation as well, as we move forward in the future? Do you think so? You don’t think so? You know, just a one-word answer would be great as well.
Alexandra Krastins Lopes: Let me answer that one. I hope not. That’s why we’re here, having this multi-stakeholder discussion, so we can achieve some kind of consensus. And also, complementing the last question about different kinds of legislation and which one would provide safeguards and would not hinder innovation: I believe that the UK has been taking a good approach by not regulating with a specific bill of law. There is no bill of law going on right now; they are letting the sectoral authorities handle the issues, and also they did not define the concepts, so it won’t be a closed concept. It can let the technology evolve, and we can evolve the concepts as well. So I hope, and I believe, that a principle-based approach, and not a strict regulation, would be the best approach, and that would prevent the friction we’re talking about.
Moderator: Okay, great, perfectly concise answer and good thing too because we’re running a bit short on time. So we’re going to move towards the final stretch. I’d like to hear one last thought from each speaker. Please restrict your answer to one sentence or two sentences at max. Yasmin, we’re going to start with you. What is the most urgent action we must take today to align AI governance with ethical foresight?
Deloitte consultant: To give a quick answer to the last question about the friction between tech companies and regulators: yes, we see the friction. Yes, there will be more friction, and I would be surprised if we didn’t have it; democratic states specifically live from this friction. So we will see it, and hopefully we will find a common ground to discuss these regulations. But what I do think is critical, and what we need specifically as a society, regardless of government regulation or tech regulation or whatnot: we need to make sure that we’re critically assessing what we’re consuming, and we need to make sure that we critically assess what we say, what we perceive and what we create. Because what we’re seeing right now is very worrisome in terms of how we’re using specific AI systems without questioning their outputs; we’re publicizing them without questioning whether the contexts are right or not. So I believe one of the most important things right now, as citizens, is to make sure that we’re bringing critical thinking back again.
Moderator: Phumzile, would you want to go next?
Phumzile van Damme: Yeah, I’m gonna restrict it to one sentence. I’ve already gone into it. Binding international law on platform governance, AI platform governance.
Moderator: Perfect. Alexandra, would you want to go next?
Alexandra Krastins Lopes: I would like to leave you with a key action point: we must move from ethical intention to institutional implementation. Ethical foresight cannot remain a theoretical aspiration, a paragraph in a regulation, or a corporate statement. We need to build core governance structures within companies and organizations in general.
Moderator: Rightly so; ethical foresight should not be just a corporate statement. Thank you so much, Alexandra. Now we're going to go to our online speakers. Vance, if you want to go ahead with two or three sentences on this question: what is the most urgent action we must take today to align AI governance with ethical foresight?
Vance Lockton: I'd say it's to shift the narrative around AI away from AI being either an inevitability or a wholly necessary piece of future economies, and to think instead about what outcomes we want from future societies and how AI can factor into those.
Moderator: Perfect. Moritz, if you want to go next.
von Knebel Moritz: Yeah, I'm going to add to that: more generally speaking, adding nuance to the dialogue. Overcoming the simple dichotomies of innovation versus regulation; you can have innovation through regulation, and we've had that for decades. Overcoming US versus China, or the human rights-based versus risk-based approach. Breaking away from these dichotomies that are not helpful and that ignore a lot of the nuance embedded within these debates.
Moderator: Moritz, I'm going to ask you to elaborate, because this is such an interesting point and it adds so much value to the discussion. I think we can give you one more minute.
von Knebel Moritz: Yeah, there's much in here, but maybe I'll pick the one thing that also touches on a question raised in the chat: balancing ethics and innovation, or regulation and innovation. Again, this is often pitted as a choice: you can either regulate or you can innovate, the EU is at a crossroads, we can't fall behind, we have to innovate, innovate, innovate. Whereas the reality on the ground is that safe and trustworthy frameworks, which give companies predictability about the future and give customers confidence that products are safe, are integral to innovation. We've seen this in the past: the nuclear industry and the aviation industry took off after we had safety standards in place, because that made it possible to scale operations. For those of you interested, I've previously done work on this, writing up case studies on safety standards in other industries and how they contributed to the development of a technology. So, going against the idea that we have to choose: the UK is an example of a very pro-innovation regulatory approach, and they didn't step back and say they were not going to regulate; they were thinking about how regulation can serve innovation. There are many ways we can do this, and this unfortunately gets ignored, because it is easier to craft a narrative that pits two things, and often two camps, against each other.
Moderator: Thank you so much. What I take away from this is that building safe and trustworthy frameworks enables innovation. Thanks are due to these brilliant speakers. I also want to thank our audience, both here and online, for their presence and for the conversation. Your presence on the final day of IGF 2025 is very important to us, and thank you to the wonderful IGF media team for coordinating the technical aspects. Before we close, I'd love to invite everyone up for a picture. First, we're going to have a picture with the speakers, and then I'll invite the audience to come up on stage for a picture with the speakers and myself. Thank you very much.
Alexandra Krastins Lopes
Speech speed
100 words per minute
Speech length
709 words
Speech time
424 seconds
Ethical foresight requires proactive mechanisms and organizational structures, not just theoretical principles
Explanation
Alexandra argues that embedding ethical foresight into AI governance requires building proactive and ongoing operational processes rather than simply including ethical principles in policy documents. This means creating mechanisms that allow organizations to ask critical questions about fairness, explainability, and safety before AI systems are deployed.
Evidence
She mentions specific mechanisms like AI ethics committees, impact assessments, and internal accountability protocols that operate across the entire lifecycle of AI systems, requiring engagement from product designers, data scientists, business leaders, and board-level commitment.
Major discussion point
Embedding Ethical Foresight in AI Governance
Topics
Legal and regulatory | Human rights
Agreed with
– Deloitte consultant
Agreed on
Need for multi-stakeholder approaches in AI governance
Need for context-aware approaches that consider local realities rather than one-size-fits-all models
Explanation
Alexandra emphasizes that ethical foresight must be context-aware, noting that many global frameworks are shaped primarily by Global North perspectives with assumptions that don’t reflect the realities of the global majority. She argues for regulatory approaches that set minimum requirements while allowing adaptation to specific local conditions.
Evidence
She provides Brazil as an example, citing historical inequalities, specific entrepreneurship challenges, and limited institutional enforcement power as factors that require context-specific approaches. She mentions tools like voluntary codes of conduct, soft law mechanisms, and flexible risk-based approaches.
Major discussion point
Embedding Ethical Foresight in AI Governance
Topics
Legal and regulatory | Development
Agreed with
– Phumzile van Damme
– von Knebel Moritz
Agreed on
International cooperation and coordination is essential
Moving from ethical intention to institutional implementation within companies and organizations
Explanation
Alexandra argues that ethical foresight cannot remain a theoretical aspiration or merely a paragraph in regulation or corporate statement. Instead, there must be concrete institutional structures built within companies and organizations to operationalize ethical principles.
Major discussion point
Embedding Ethical Foresight in AI Governance
Topics
Legal and regulatory | Economic
Agreed with
– von Knebel Moritz
Agreed on
Current regulatory approaches are insufficient and reactive
Principle-based approaches without strict definitions allow technology and concepts to evolve
Explanation
Alexandra supports the UK’s approach of not regulating with specific legislation but letting sectoral authorities handle issues without defining closed concepts. This allows both technology and regulatory concepts to evolve together rather than being constrained by rigid definitions.
Evidence
She specifically mentions the UK’s approach of not having a specific AI bill and allowing sectoral authorities to handle issues, which prevents concepts from being closed and allows for technological evolution.
Major discussion point
Practical Framework Design
Topics
Legal and regulatory
Disagreed with
– Phumzile van Damme
Disagreed on
Regulatory specificity – Principle-based vs Detailed frameworks
von Knebel Moritz
Speech speed
165 words per minute
Speech length
1931 words
Speech time
702 seconds
Current AI regulation has more gaps than substance, with regulators lacking technical expertise
Explanation
Moritz argues that the assumption of regulatory gaps is flawed because it presumes well-thought-out systems exist around these gaps. Instead, he suggests there are more gaps than actual substance, using a metaphor of knowledge islands surrounded by vast oceans of unknown territory. He identifies lack of technical expertise as a major domestic-level challenge.
Evidence
He uses his former professor’s metaphor about having knowledge islands rather than knowledge gaps because the gaps would be too large. He notes that regulators often lack deep technical understanding needed to oversee rapidly evolving AI systems, especially in countries without historical capacity to build such infrastructure.
Major discussion point
Regulatory Gaps and Technical Challenges
Topics
Legal and regulatory | Development
Disagreed with
– Deloitte consultant
Disagreed on
Regulatory approach – Risk-based vs Use case-based frameworks
Reactive frameworks that respond to known harms rather than anticipating emerging risks
Explanation
Moritz criticizes current approaches as largely reactive, responding to known harms rather than anticipating emerging risks. He argues that since much future risk will come from emerging threats, frameworks need to be adaptive to address this challenge, especially given AI development’s breakneck pace.
Evidence
He notes that current approaches track when harm occurs but do little to anticipate emerging risks, and that the pace of AI development consistently outstrips the pace at which regulatory systems can adapt.
Major discussion point
Regulatory Gaps and Technical Challenges
Topics
Legal and regulatory
Agreed with
– Alexandra Krastins Lopes
Agreed on
Current regulatory approaches are insufficient and reactive
Need for adaptive regulatory architectures with principle-based regulations and streamlined processes
Explanation
Moritz advocates for adaptive regulatory architectures with principle-based regulations where technical implementation standards can be updated through streamlined secondary regulation processes. He suggests regulatory sandboxes and quick feedback loops as mechanisms to achieve this adaptability.
Evidence
He mentions specific tools like regulatory sandboxes for trying out new approaches, quick feedback loops, and the ability to update technical implementation standards through streamlined processes in secondary regulation.
Major discussion point
Regulatory Gaps and Technical Challenges
Topics
Legal and regulatory
Agreed with
– Deloitte consultant
Agreed on
Frameworks must be adaptive rather than static
Need for shared language and consensus on key terms like systemic risk and AI definitions
Explanation
Moritz emphasizes that without shared language, the regulatory landscape will remain fragmented with incentives to cut corners. He notes that key terms like systemic risk are insufficiently defined and lack consensus, making international cooperation difficult when different cultures and countries see AI very differently.
Evidence
He provides the specific example of systemic risk definition in the European Union under the AI Act, noting it will influence how future AI models are governed but lacks consensus. He mentions the OECD provides an AI system definition most people accept, but other terms remain problematic.
Major discussion point
International Cooperation and Jurisdictional Issues
Topics
Legal and regulatory
Agreed with
– Alexandra Krastins Lopes
– Phumzile van Damme
Agreed on
International cooperation and coordination is essential
Adding nuance to dialogue and overcoming false dichotomies between innovation and regulation
Explanation
Moritz argues for breaking away from simple dichotomies like innovation versus regulation, noting that innovation can occur through regulation as has happened for decades. He advocates for overcoming false choices between different approaches and adding nuance to debates.
Evidence
He provides examples from nuclear and aviation industries that took off after safety standards were in place, enabling scaled operations. He mentions the UK as an example of pro-innovative regulatory approach that thinks about how regulation can serve innovation.
Major discussion point
Regulatory Gaps and Technical Challenges
Topics
Legal and regulatory | Economic
Disagreed with
– Online moderator 2
Disagreed on
Innovation vs Regulation relationship
Need for inclusive stakeholder engagement and capacity building across different sectors
Explanation
Moritz emphasizes the need for capacity building and creating independent technical advisory groups that include diverse perspectives. He argues for inclusive and novel approaches to engage stakeholders through dialogues and diplomatic efforts.
Evidence
He mentions the need for independent technical advisory groups and references international dialogues on AI safety as examples of work being done, though noting much more is needed.
Major discussion point
Infrastructure and Capacity Building
Topics
Development | Legal and regulatory
Vance Lockton
Speech speed
122 words per minute
Speech length
1120 words
Speech time
550 seconds
Regulatory cooperation should focus on influencing design and development rather than enforcement
Explanation
Vance argues that rather than focusing on shared enforcement actions, regulators need to find ways to have more influence over the design, development, and deployment decisions made when AI systems are created or adopted. He emphasizes the need to reframe regulatory cooperation away from enforcement toward influencing early-stage development.
Evidence
He references the OECD framework for anticipatory governance of emerging technologies, which includes elements like establishing guiding values, strategic intelligence, stakeholder engagement, agile regulation, and international cooperation.
Major discussion point
International Cooperation and Jurisdictional Issues
Topics
Legal and regulatory
Understanding different regulatory capacities across countries is critical for cooperation
Explanation
Vance emphasizes the importance of regulators understanding who has what capacities in AI governance, noting vast differences in staffing and resources between countries. He argues this understanding is critical for effective international cooperation and sharing of cultural knowledge about AI impacts.
Evidence
He provides specific examples: some newer data protection authorities might have only a dozen staff covering all privacy/data protection (not just AI), Canada has about 200 staff with a dedicated tech lab, while the UK’s ICO has over 1,000 staff with innovative engagement programs like regulatory sandboxes.
Major discussion point
International Cooperation and Jurisdictional Issues
Topics
Legal and regulatory | Development
Shifting narrative away from AI as inevitability toward desired societal outcomes
Explanation
Vance argues for changing the narrative that presents AI as essential for future economic prosperity or national security, where any regulation is seen as a threat rather than an opportunity for responsible innovation. He advocates for focusing on what outcomes society wants and how AI can contribute to those goals.
Evidence
He mentions the problematic narrative that AI is essential for future economic prosperity and security, and that restrictions on AI development are threats rather than opportunities for responsible innovation, noting the need to counter this with focus on actual benefits beyond stock prices or GDP.
Major discussion point
International Cooperation and Jurisdictional Issues
Topics
Economic | Legal and regulatory
Phumzile van Damme
Speech speed
134 words per minute
Speech length
813 words
Speech time
361 seconds
International law through UN processes could enable more inclusive AI governance
Explanation
Phumzile advocates for international law around AI governance, ideally run through the UN, as this would enable a more inclusive process that incorporates the language, ideals, and ethical systems of various countries. She sees this as the ideal direction for AI governance given technology’s borderless nature.
Evidence
She references the experience with social media governance and the difficulty with different countries having varying levels of laws, regulations, or policies, leading to lack of proper oversight.
Major discussion point
Global Inclusion and Representation
Topics
Legal and regulatory
Agreed with
– Alexandra Krastins Lopes
– von Knebel Moritz
Agreed on
International cooperation and coordination is essential
Policy absence in Global South countries excludes them from AI governance discussions
Explanation
Phumzile identifies policy absence in many African and Global South countries as a major challenge for inclusion in AI governance discussions. She notes this isn’t because AI isn’t viewed as important, but because these countries are more focused on basic needs and bread-and-butter issues.
Evidence
She mentions that in many African countries, AI is not a consideration due to focus on more immediate concerns, and encourages these countries to begin policy processes by adapting existing frameworks from other jurisdictions rather than reinventing the wheel.
Major discussion point
Global Inclusion and Representation
Topics
Development | Legal and regulatory
Need for democratization across entire AI lifecycle, not just access to tools
Explanation
Phumzile argues that while there may have been some democratization in access to AI tools like ChatGPT, true democratization requires inclusion across the entire AI lifecycle. This includes democratization of AI development, profits, and governance itself, not just usage.
Evidence
She provides specific examples: democratization of AI development should include diverse voices in design choices and large language models that consider African languages; democratization of AI profits means countries shouldn’t just be sources of income without profit sharing; democratization of governance means participation throughout the technology’s lifetime.
Major discussion point
Global Inclusion and Representation
Topics
Development | Economic
Binding international law on AI platform governance is urgently needed
Explanation
Phumzile identifies binding international law on AI platform governance as the most urgent action needed to align AI governance with ethical foresight. This represents her core recommendation for addressing current governance challenges.
Major discussion point
Global Inclusion and Representation
Topics
Legal and regulatory
Disagreed with
– Alexandra Krastins Lopes
Disagreed on
Regulatory specificity – Principle-based vs Detailed frameworks
Deloitte consultant
Speech speed
165 words per minute
Speech length
1349 words
Speech time
490 seconds
Critical thinking and assessment of AI outputs by citizens is essential for ethical AI use
Explanation
The Deloitte consultant emphasizes that citizens need to critically assess what they consume, say, perceive, and create when using AI systems. She expresses concern about people using AI systems without questioning their outputs or publicizing them without verifying context and accuracy.
Evidence
She notes worrisome trends of people using AI systems without questioning outputs and publicizing content without verifying whether contexts are correct.
Major discussion point
Embedding Ethical Foresight in AI Governance
Topics
Sociocultural | Human rights
Use case-based approach would be more effective than current risk-based categorization
Explanation
The consultant argues that instead of the current risk-based approach that categorizes AI systems by risk levels, an ideal framework would use a use case-based approach. This would examine how AI systems are used and in what sectors, recognizing that the same system can have different harms for different stakeholders depending on its application.
Evidence
She explains that the same AI system can be used in different ways and sectors, potentially causing different harms to different stakeholders, making use case analysis more comprehensive than risk categorization.
Major discussion point
Practical Framework Design
Topics
Legal and regulatory
Agreed with
– von Knebel Moritz
Agreed on
Frameworks must be adaptive rather than static
Disagreed with
– von Knebel Moritz
Disagreed on
Regulatory approach – Risk-based vs Use case-based frameworks
Frameworks must be implementable and understandable by developers who build AI systems
Explanation
The consultant emphasizes that regulations and laws are ultimately implemented by people developing AI systems, so if developers cannot understand or implement the regulations, the frameworks fail. She advocates for clear, understandable frameworks that don’t require translators between tech and law.
Evidence
She mentions that people like herself and Alexandra serve as bridges between tech and law, but in an ideal world, developers would already understand regulations and be able to implement each principle clearly.
Major discussion point
Practical Framework Design
Topics
Legal and regulatory | Development
Multi-stakeholder approaches must include youth voices and create spaces for real co-creation
Explanation
The consultant argues that while multi-stakeholder approaches are commonly discussed, they often forget to include youth voices. She emphasizes that young people often understand technology better and have valuable solutions, but they need platforms and spaces for real co-creation with different sectors and generations.
Evidence
She draws from her work with the Responsible Technology Hub, noting that youth not only have potential but often understand technology better and know how to implement it better. She describes successful spaces where academia, the public sector, and the private sector come together for intergenerational solutions.
Major discussion point
Practical Framework Design
Topics
Sociocultural | Development
Agreed with
– Alexandra Krastins Lopes
Agreed on
Need for multi-stakeholder approaches in AI governance
Infrastructure development is essential foundation for democratizing AI access
Explanation
The consultant argues that infrastructure must be established as a base for AI democratization, noting that without infrastructure in countries lacking AI access, true democratization cannot occur. She sees this as a foundational requirement before other governance measures can be effective.
Major discussion point
Infrastructure and Capacity Building
Topics
Infrastructure | Development
Moderator
Speech speed
139 words per minute
Speech length
1452 words
Speech time
622 seconds
Session facilitates important dialogue on global AI governance challenges
Explanation
The moderator frames the session as addressing one of the most important global governance challenges: how to regulate AI in a way that’s both ethical and enabling. The session brings together multiple regional perspectives to unpack this challenge through insights from various experts.
Evidence
The moderator notes this is the final day of IGF 2025 and it’s fitting to close with this important topic. The session includes speakers from multiple regions and both in-person and virtual components to ensure diverse perspectives.
Major discussion point
Infrastructure and Capacity Building
Topics
Legal and regulatory
Online moderator 1
Speech speed
171 words per minute
Speech length
329 words
Speech time
114 seconds
AI fragmentation is occurring due to absence of consensus-based definitions and shared language
Explanation
Online moderator 1 raises the concern that similar to internet fragmentation, we are heading towards an era of AI fragmentation. This fragmentation is attributed to the absence of consensus-based definitions and shared language around AI governance and regulation.
Evidence
The moderator draws parallels to the concept of internet fragmentation and asks about moving forward given the lack of shared language and consensus-based definitions.
Major discussion point
International Cooperation and Jurisdictional Issues
Topics
Legal and regulatory
Need for exemplary AI policies that balance innovation and ethics
Explanation
Online moderator 1 seeks identification of exemplary AI policies that successfully balance innovation with ethical considerations. This reflects the ongoing challenge of finding regulatory approaches that don’t stifle technological advancement while ensuring ethical safeguards.
Evidence
The moderator acknowledges this is a broad, open-ended question but seeks concrete examples of policies that achieve this balance.
Major discussion point
Practical Framework Design
Topics
Legal and regulatory | Economic
Online moderator 2
Speech speed
156 words per minute
Speech length
221 words
Speech time
84 seconds
Tech companies are pushing back against regulation they view as excessive and innovation-restricting
Explanation
Online moderator 2 highlights the recent trend of tech companies resisting what they perceive as excessive regulation that restricts innovation and development. This raises questions about whether harmony is possible or if we’ll see continued friction between industry and government regulation in the AI space, similar to what occurred with social media.
Evidence
The moderator references the recent trend of pushback from tech companies and draws parallels to friction seen in social media and tech space versus government regulation.
Major discussion point
Regulatory Gaps and Technical Challenges
Topics
Legal and regulatory | Economic
Disagreed with
– von Knebel Moritz
Disagreed on
Innovation vs Regulation relationship
Agreements
Agreement points
Need for multi-stakeholder approaches in AI governance
Speakers
– Alexandra Krastins Lopes
– Deloitte consultant
Arguments
Ethical foresight requires proactive mechanisms and organizational structures, not just theoretical principles
Multi-stakeholder approaches must include youth voices and create spaces for real co-creation
Summary
Both speakers emphasize that effective AI governance requires engagement across multiple stakeholders – Alexandra mentions engagement from product designers, data scientists, business leaders, and board-level commitment, while the Deloitte consultant specifically advocates for civil society, private sector, and public sector collaboration, with particular emphasis on including youth voices.
Topics
Legal and regulatory | Development
Frameworks must be adaptive rather than static
Speakers
– von Knebel Moritz
– Deloitte consultant
Arguments
Need for adaptive regulatory architectures with principle-based regulations and streamlined processes
Use case-based approach would be more effective than current risk-based categorization
Summary
Both speakers reject static, rigid regulatory approaches. Moritz advocates for adaptive regulatory architectures that can evolve with technology, while the Deloitte consultant proposes moving from risk-based to use case-based approaches that better account for context and application.
Topics
Legal and regulatory
International cooperation and coordination is essential
Speakers
– Alexandra Krastins Lopes
– Phumzile van Damme
– von Knebel Moritz
Arguments
Need for context-aware approaches that consider local realities rather than one-size-fits-all models
International law through UN processes could enable more inclusive AI governance
Need for shared language and consensus on key terms like systemic risk and AI definitions
Summary
All three speakers recognize the need for international coordination while respecting local contexts. Alexandra calls for global AI governance platforms with minimum safeguards, Phumzile advocates for binding international law through UN processes, and Moritz emphasizes the need for shared language and consensus on key terms.
Topics
Legal and regulatory
Current regulatory approaches are insufficient and reactive
Speakers
– von Knebel Moritz
– Alexandra Krastins Lopes
Arguments
Reactive frameworks that respond to known harms rather than anticipating emerging risks
Moving from ethical intention to institutional implementation within companies and organizations
Summary
Both speakers criticize current approaches as inadequate. Moritz identifies reactive frameworks as a key problem, while Alexandra emphasizes that ethical foresight cannot remain theoretical but must be institutionally implemented with concrete structures.
Topics
Legal and regulatory
Similar viewpoints
Both speakers advocate for flexible, principle-based regulatory approaches that avoid rigid definitions and false dichotomies. They both support the idea that regulation can enable rather than hinder innovation when properly designed.
Speakers
– Alexandra Krastins Lopes
– von Knebel Moritz
Arguments
Principle-based approaches without strict definitions allow technology and concepts to evolve
Adding nuance to dialogue and overcoming false dichotomies between innovation and regulation
Topics
Legal and regulatory | Economic
Both speakers emphasize the importance of understanding and building regulatory capacity, particularly recognizing the vast differences in resources and expertise across different countries and the need for capacity building initiatives.
Speakers
– Vance Lockton
– von Knebel Moritz
Arguments
Understanding different regulatory capacities across countries is critical for cooperation
Need for inclusive stakeholder engagement and capacity building across different sectors
Topics
Legal and regulatory | Development
Both speakers recognize that true democratization of AI requires more than just access to tools – it requires fundamental infrastructure development and participation across the entire AI lifecycle, with particular attention to inclusion of underrepresented voices.
Speakers
– Phumzile van Damme
– Deloitte consultant
Arguments
Need for democratization across entire AI lifecycle, not just access to tools
Infrastructure development is essential foundation for democratizing AI access
Topics
Development | Infrastructure
Unexpected consensus
Friction between tech companies and regulators is inevitable and potentially beneficial
Speakers
– Deloitte consultant
– Alexandra Krastins Lopes
Arguments
Critical thinking and assessment of AI outputs by citizens is essential for ethical AI use
Moving from ethical intention to institutional implementation within companies and organizations
Explanation
Unexpectedly, when asked about friction between tech companies and regulators, the Deloitte consultant stated that friction is inevitable and that democratic states ‘live from this friction,’ suggesting it’s a healthy part of the democratic process. Alexandra hoped to prevent such friction through multi-stakeholder discussions, but both acknowledged the reality of this tension while seeing potential positive outcomes.
Topics
Legal and regulatory | Economic
Youth inclusion is critical but often overlooked in AI governance
Speakers
– Deloitte consultant
– Phumzile van Damme
Arguments
Multi-stakeholder approaches must include youth voices and create spaces for real co-creation
Need for democratization across entire AI lifecycle, not just access to tools
Explanation
Both speakers unexpectedly converged on the critical importance of youth inclusion in AI governance. The Deloitte consultant specifically criticized multi-stakeholder approaches for forgetting youth voices, while Phumzile’s call for democratization across the AI lifecycle implicitly includes youth participation. This consensus was unexpected given their different professional backgrounds and regional perspectives.
Topics
Development | Sociocultural
Overall assessment
Summary
The speakers demonstrated strong consensus on several fundamental principles: the need for adaptive rather than static regulatory frameworks, the importance of international cooperation while respecting local contexts, the inadequacy of current reactive approaches, and the critical role of multi-stakeholder engagement including often-overlooked voices like youth.
Consensus level
High level of consensus on core principles with constructive disagreement on implementation approaches. This suggests a mature understanding of AI governance challenges and creates a solid foundation for collaborative policy development, though significant work remains in translating shared principles into practical, coordinated action across different jurisdictions and stakeholder groups.
Differences
Different viewpoints
Regulatory approach – Risk-based vs Use case-based frameworks
Speakers
– Deloitte consultant
– von Knebel Moritz
Arguments
Use case-based approach would be more effective than current risk-based categorization
Current AI regulation has more gaps than substance, with regulators lacking technical expertise
Summary
The Deloitte consultant advocates for moving from risk-based categorization to use case-based approaches, arguing that the same AI system can have different harms depending on application. Moritz focuses more on the fundamental gaps in current regulation and lack of technical expertise, suggesting the problem is more systemic than just the categorization method.
Topics
Legal and regulatory
Regulatory specificity – Principle-based vs Detailed frameworks
Speakers
– Alexandra Krastins Lopes
– Phumzile van Damme
Arguments
Principle-based approaches without strict definitions allow technology and concepts to evolve
Binding international law on AI platform governance is urgently needed
Summary
Alexandra advocates for flexible, principle-based approaches that avoid strict definitions to allow evolution, while Phumzile calls for binding international law, which would necessarily involve more specific and rigid legal frameworks.
Topics
Legal and regulatory
Innovation vs Regulation relationship
Speakers
– von Knebel Moritz
– Online moderator 2
Arguments
Adding nuance to dialogue and overcoming false dichotomies between innovation and regulation
Tech companies are pushing back against regulation they view as excessive and innovation-restricting
Summary
Moritz argues that the innovation vs regulation dichotomy is false and that regulation can enable innovation, while the online moderator highlights the real-world friction where tech companies view regulation as restrictive to innovation.
Topics
Legal and regulatory | Economic
Unexpected differences
Role of friction in democratic governance
Speakers
– Deloitte consultant
– Alexandra Krastins Lopes
Arguments
Critical thinking and assessment of AI outputs by citizens is essential for ethical AI use
Moving from ethical intention to institutional implementation within companies and organizations
Explanation
When asked about friction between tech companies and regulators, the Deloitte consultant stated that friction is expected and that ‘democratic states specifically live from this friction,’ while Alexandra hoped to avoid such friction through multi-stakeholder discussions. This represents an unexpected philosophical disagreement about whether regulatory friction is beneficial or problematic for democratic governance.
Topics
Legal and regulatory | Sociocultural
Overall assessment
Summary
The speakers showed remarkable consensus on high-level goals (ethical AI governance, international cooperation, inclusion) but significant disagreements on implementation approaches, regulatory specificity, and the role of friction in governance processes.
Disagreement level
Moderate disagreement with high convergence on principles but divergent implementation strategies. This suggests that while there is broad agreement on the need for ethical AI governance, the path forward remains contested, which could slow progress toward unified global AI governance frameworks. The disagreements reflect deeper tensions between flexibility vs. standardization, proactive vs. reactive approaches, and different regional perspectives on governance mechanisms.
Takeaways
Key takeaways
Ethical foresight in AI governance must move beyond theoretical principles to institutional implementation with proactive mechanisms, organizational structures, and board-level commitment
Current AI regulation has significant gaps with regulators lacking technical expertise and frameworks being reactive rather than anticipatory of emerging risks
International cooperation requires establishing shared language and consensus on key AI definitions, focusing on influencing design and development rather than enforcement
AI governance discussions lack global inclusion, particularly from the Global South, requiring democratization across the entire AI lifecycle rather than just access to tools
Effective AI frameworks should be use case-based rather than risk-based, implementable by developers, and include multi-stakeholder approaches with youth representation
Safe and trustworthy regulatory frameworks actually enable innovation rather than hinder it, as demonstrated in other industries like aviation and nuclear power
Citizens must develop critical thinking skills to assess AI outputs and question what they consume and create
Infrastructure development is essential for democratizing AI access globally
Resolutions and action items
Establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities
Create adaptive regulatory architectures with principle-based regulations and streamlined processes for technical implementation standards
Build independent technical advisory groups and capacity building programs for regulators
Develop international dialogues to establish consensus on key AI risk definitions and terminology
Create spaces for real co-creation between academia, public sector, private sector, and intergenerational voices
Implement binding international law on AI platform governance through UN processes
Shift narrative around AI from inevitability to focus on desired societal outcomes
Unresolved issues
How to achieve consensus on fundamental AI definitions and terminology across different countries and cultures
How to balance innovation-friendly approaches with necessary safety regulations without creating regulatory arbitrage
How to address the capacity and resource gaps between different countries’ regulatory authorities
How to effectively include Global South voices in AI governance when many countries lack basic AI policies
How to prevent AI fragmentation while respecting different cultural values and regulatory approaches
How to ensure meaningful youth inclusion in AI governance beyond tokenistic participation
How to implement cross-border enforcement mechanisms for globally operating AI platforms
Suggested compromises
Principle-based regulatory approaches that allow concepts and technology to evolve rather than strict definitions
Voluntary codes of conduct and soft law mechanisms that allow businesses to mitigate risks without restricting innovation
Risk-based approaches with flexible implementation adapted to specific local conditions and contexts
Sectoral regulatory approaches where existing authorities handle AI issues within their domains rather than creating new comprehensive AI laws
Regulatory sandboxes and iterative processes that allow testing of new approaches with quick feedback loops
Minimum safeguards with international coordination while respecting local specificities and strategic goals
Thought provoking comments
I don’t think we have [well thought out systems]. I will employ a metaphor used by a former professor of mine, who said that he does not have knowledge gaps, because they would be just too large; he has knowledge islands, and those are surrounded by a lot of ocean that we don’t know about.
Speaker
von Knebel Moritz
Reason
This metaphor fundamentally reframes the entire premise of AI regulation discussion. Instead of assuming we have solid regulatory frameworks with mere gaps to fill, Moritz suggests we have isolated islands of knowledge in vast oceans of uncertainty. This challenges the conventional approach to regulatory development and highlights the magnitude of what we don’t know about AI governance.
Impact
This comment shifted the discussion from gap-filling to foundational questioning. It influenced subsequent speakers to think more holistically about regulatory frameworks rather than incremental improvements, and established a more humble, realistic tone about the current state of AI governance knowledge.
We need to establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities… international cooperation and legal certainty are essential for that to happen.
Speaker
Alexandra Krastins Lopes
Reason
This comment bridges the tension between global coordination and local autonomy – a central challenge in AI governance. Alexandra’s perspective from the Global South adds crucial nuance by acknowledging that one-size-fits-all approaches don’t work, while still advocating for minimum global standards.
Impact
This set up a recurring theme throughout the discussion about balancing international cooperation with local contexts. It influenced later speakers to consider how regulatory frameworks can be both globally coordinated and locally relevant, particularly affecting Phumzile’s call for international law and Vance’s discussion of regulatory cooperation.
There hasn’t been a proper democratization. While there may have been somewhat of a democratization in terms of access… we need to talk about it through the life cycle. So there needs to be democratization of AI development, democratization of AI profits, and democratization of AI governance itself.
Speaker
Phumzile van Damme
Reason
This comment deconstructs the popular narrative of AI ‘democratization’ and reveals it as incomplete. By breaking down democratization into development, profits, and governance phases, Phumzile exposes how current approaches only address surface-level access while ignoring deeper structural inequalities.
Impact
This reframing influenced the discussion to move beyond simple inclusion rhetoric to examine power structures more critically. It connected to Yasmin’s later emphasis on youth inclusion and reinforced the need for more fundamental changes in how AI governance is approached, not just who gets invited to the table.
We need frameworks that are actually implementable and understandable… if the developers cannot implement the regulations because they simply do not understand them, we need translators… but in an ideal world we would have developers who already understand this.
Speaker
Deloitte consultant (Yasmin)
Reason
This comment identifies a critical practical gap between regulatory intent and implementation reality. It highlights that even well-intentioned regulations fail if the people who must implement them cannot understand or operationalize them, pointing to a fundamental communication and education challenge.
Impact
This shifted the conversation from high-level policy design to practical implementation challenges. It influenced the discussion to consider not just what regulations should say, but how they can be made actionable by the technical communities that must implement them.
We need to shift the narrative around AI away from AI being either an inevitability or a wholly necessary piece of future economies, to think about what outcomes we want from future societies and how AI can build into those.
Speaker
Vance Lockton
Reason
This comment challenges the fundamental framing of AI discussions by questioning the assumption that AI development is inevitable or inherently necessary. It redirects focus from technology-driven to outcome-driven thinking, suggesting we should define desired societal outcomes first, then determine AI’s role.
Impact
This reframing influenced the final discussion segment and connected with Moritz’s closing comments about overcoming false dichotomies. It helped establish that AI governance should be purpose-driven rather than technology-driven, affecting how other speakers framed their concluding thoughts.
Overcoming the simple dichotomies of innovation versus regulation. You can have innovation through regulation. We’ve had that for decades… safe and trustworthy frameworks… are integral to innovation… the nuclear industry, the aviation industry, they took off after we had safety standards in place.
Speaker
von Knebel Moritz
Reason
This comment dismantles one of the most persistent false narratives in tech policy – that regulation inherently stifles innovation. By providing concrete historical examples from other industries, Moritz demonstrates how safety standards actually enabled scaling and growth, offering a compelling counter-narrative.
Impact
This fundamentally challenged the framing used by many in the AI industry and provided a research-backed alternative perspective. It influenced the moderator to ask for elaboration and helped conclude the discussion on a note that regulation and innovation can be mutually reinforcing rather than opposing forces.
Overall assessment
These key comments collectively transformed the discussion from a conventional regulatory gap-analysis into a more fundamental examination of AI governance assumptions and power structures. The conversation evolved through several phases: Moritz’s ‘knowledge islands’ metaphor established intellectual humility about current understanding; Alexandra and Phumzile’s comments highlighted global power imbalances and the need for inclusive approaches; Yasmin’s focus on implementation practicality grounded the discussion in real-world constraints; and the final exchanges by Vance and Moritz reframed the entire innovation-regulation relationship. Together, these interventions created a more nuanced, critical, and globally-aware discussion that moved beyond surface-level policy tweaks to examine foundational assumptions about AI development, governance, and societal impact. The speakers built upon each other’s insights, creating a layered analysis that challenged dominant narratives while offering constructive alternatives.
Follow-up questions
How can we establish global AI governance platforms capable of harmonizing minimum principles on safety, ethics, and accountability while respecting local specificities?
Speaker
Alexandra Krastins Lopes
Explanation
This addresses the critical need for international coordination to prevent regulatory fragmentation while accommodating different cultural and economic contexts
What specific mechanisms can be developed to leverage AI for regulatory processes and governance?
Speaker
von Knebel Moritz
Explanation
Moritz mentioned that relatively little work has gone into using AI to improve regulatory frameworks, suggesting this as an underexplored area with potential
How can we develop adaptive regulatory architectures that can evolve quickly with technology rather than having static rules?
Speaker
von Knebel Moritz
Explanation
This addresses the fundamental challenge of regulatory frameworks being outpaced by rapid AI development
What would binding international law on AI platform governance look like in practice?
Speaker
Phumzile van Damme
Explanation
Van Damme identified this as her most urgent recommendation, but the practical implementation details need further exploration
How can we create effective spaces for real co-creation between different disciplines, sectors, and generations in AI governance?
Speaker
Deloitte consultant (Yasmin Alduri)
Explanation
While mentioning success in her work, the specific methodologies and scalable approaches for such co-creation spaces need further development
How can use case-based frameworks be practically implemented compared to current risk-based approaches?
Speaker
Deloitte consultant (Yasmin Alduri)
Explanation
This represents a fundamental shift in regulatory approach that requires detailed exploration of implementation mechanisms
What specific soft mechanisms can regulators use to influence AI system design and development rather than just enforcement?
Speaker
Vance Lockton
Explanation
Lockton emphasized this as critical but noted it’s a work in progress requiring further development of practical approaches
How can we systematically include youth voices in AI governance beyond tokenistic participation?
Speaker
Deloitte consultant (Yasmin Alduri)
Explanation
While identifying youth as having valuable solutions, the specific mechanisms for meaningful inclusion need further research
What are the specific case studies and methodologies for how safety standards in other industries (nuclear, aviation) contributed to innovation that can be applied to AI?
Speaker
von Knebel Moritz
Explanation
Moritz mentioned having done previous work on this but suggested more research is needed to apply these lessons to AI governance
How can we achieve democratization across the entire AI lifecycle – development, profits, and governance – not just access?
Speaker
Phumzile van Damme
Explanation
This represents a comprehensive framework for AI democratization that needs detailed exploration of implementation mechanisms
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.