Laying the foundations for AI governance

10 Jul 2025 09:40h - 10:10h


Session at a glance

Summary

This discussion focused on the challenge of translating AI governance principles into effective real-world policy and practice, featuring a panel of experts including former Greek Prime Minister George Papandreou, Professor Dawn Song from UC Berkeley, Silicon Valley industry expert Artemis Seaford, and Dean Xue Lan. Moderator Robert Trager asked each panelist to identify the single greatest obstacle preventing the transformation of AI principles into practical governance.


Former Prime Minister Papandreou highlighted several interconnected challenges, including geopolitical tensions that hinder necessary international cooperation, the centralization of power in big tech companies that resist regulation, growing inequality, environmental concerns from energy-intensive AI systems, and societal addiction to technology platforms that undermines democratic discourse. Professor Song emphasized the fragmentation in approaches to AI policy and governance, proposing that science and evidence-based policy could provide common ground for international cooperation. She also discussed the urgent need for “secure-by-design” approaches, noting that AI currently advantages cyber attackers more than defenders.


Artemis Seaford identified a paradigm tension between Silicon Valley’s problem-first, iterative approach and traditional institutions’ principle-based methods, arguing that the optimal solution lies in meeting in the middle at the regulatory layer. She stressed that companies actually want clear regulation but need predictability and consistency across jurisdictions. Dean Xue Lan focused on the fundamental pacing problem, in which technology advances much faster than governance, compounded by a “joint ignorance” of risks that neither regulators nor the companies themselves fully understand.


The discussion concluded by identifying four overarching obstacles: limited time given technology’s pace, uncertainty about technological futures, geopolitical tensions, and dangerous concentrations of power. It also noted that AI itself may supply solutions to some of the very challenges it creates.


Key points

**Major Discussion Points:**


– **Obstacles to translating AI governance principles into practice**: Panelists identified key barriers including geopolitical tensions, centralization of power in big tech companies, inequality and resource concentration, environmental concerns from energy-hungry AI systems, and societal addiction to technology platforms that undermines democratic discourse.


– **The need for science and evidence-based AI policy**: Discussion emphasized developing unified approaches grounded in scientific evidence rather than fragmented opinions, with focus on advancing scientific understanding of AI risks and mitigation strategies as a common foundation for policy discussions across different stakeholders and nations.


– **Technical challenges in AI safety and security**: Exploration of specific technical issues like AI’s dual-use nature in cybersecurity (currently favoring attackers), the lack of proof guarantees for AI system safety, and the need for “secure-by-design” and “safe-by-design” approaches to shift the balance toward helping defenders.


– **Industry perspective on regulation**: Companies, particularly startups, actually want regulation but need clarity and consistency rather than fragmented, unpredictable rules across jurisdictions. The discussion highlighted the complexity of responsibility chains in AI development and deployment.


– **International cooperation and the role of smaller nations**: Emphasis on AI governance as a “safe zone” for international cooperation regardless of geopolitical differences, with smaller countries like Greece potentially serving as testing grounds for AI governance experiments and bringing philosophical/ethical perspectives to the discussion.


**Overall Purpose:**


The discussion aimed to address the critical challenge of moving from AI governance principles to practical implementation, bringing together perspectives from academia, industry, and government to identify obstacles and potential solutions for effective real-world AI policy.


**Overall Tone:**


The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disagreeing. While acknowledging serious challenges and risks, the discussion maintained an optimistic undertone about the possibility of finding solutions through cooperation, scientific approaches, and learning from both technical innovations and historical wisdom. The conversation was academic yet accessible, with a sense of urgency tempered by thoughtful analysis.


Speakers

– **Moderator**: Event moderator who introduced and closed the session (name not provided in transcript)


– **Robert Trager**: Professor at the University of Oxford, co-director of the Oxford Martin AI Governance Initiative, session moderator


– **George Papandreou**: Former Prime Minister of Greece, former Foreign Minister of Greece, works with the Council of Europe


– **Dawn Song**: Professor of computer science at UC Berkeley, affiliated with Berkeley RDI (Center for Responsible Decentralized Intelligence), recognized for fundamental contributions in several areas of computer science, works on AI safety and security


– **Lan Xue**: Dean (referred to in the session as Dean Xue Lan), with expertise in governance and policy


– **Artemis Seaford**: Works in Silicon Valley in industry (previously for larger companies, now for a startup), specializes in internal policy and safety, background in responsible AI


Additional speakers:


None identified beyond the speakers listed above.


Full session report

# Comprehensive Report: Translating AI Governance Principles into Practice


## Executive Summary


This panel discussion brought together leading experts from academia, government, and industry to address the challenge of translating AI governance principles into practical implementation. The session featured former Greek Prime Minister George Papandreou, Professor Dawn Song from UC Berkeley, Silicon Valley industry expert Artemis Seaford, and Dean Xue Lan, moderated by Professor Robert Trager of the University of Oxford.


The discussion revealed consensus on four fundamental obstacles to effective AI governance: time constraints due to rapid technological advancement, uncertainty about future developments, geopolitical tensions, and power concentration in major technology companies. Despite these challenges, participants identified potential pathways forward through international cooperation, experimental governance approaches, and bridging different methodological perspectives.


## Key Obstacles to AI Governance Implementation


### Time Constraints and the Pacing Problem


All participants acknowledged the fundamental challenge of AI’s rapid development pace outstripping traditional governance timelines. Dean Xue Lan noted this “pacing problem,” while Artemis Seaford emphasized that “time is actually another obstacle” and that unlike previous transformative technologies, “time, as much as we need it, is not a luxury we can afford in this case.”


This temporal pressure forces policymakers to make critical decisions about emerging technologies without sufficient time for traditional regulatory development or comprehensive understanding of long-term implications.


### Uncertainty and Mutual Ignorance


Dean Xue Lan described a situation of “joint ignorance” where both regulators and companies lack comprehensive understanding of AI risks. Professor Dawn Song reinforced this concern from a technical perspective, noting that current AI systems “lack proof of guarantees for trustworthiness, safety and security.”


This mutual uncertainty creates particularly challenging governance conditions, as traditional regulatory approaches typically assume either regulators understand the risks or industry possesses superior technical knowledge to inform policy.


### Geopolitical Tensions


Former Prime Minister Papandreou argued that geopolitical tensions “prevent necessary global cooperation and create competitive dynamics that threaten governance and world peace.” However, Dean Xue Lan offered a more optimistic perspective, suggesting that “AI governance and safety should be a safe zone for international cooperation regardless of other differences” and noting that China “does not want to get into this geopolitical competition.”


### Power Concentration


The concentration of AI capabilities in major technology companies emerged as a consistent concern. Papandreou argued that “centralization of power through big tech giants and oligarchs prevents effective regulation and democratic control,” while noting how power has shifted from traditional democratic institutions to corporations operating beyond conventional governance structures.


## Paradigmatic Tensions in Governance Approaches


### Silicon Valley versus Traditional Institutional Approaches


Artemis Seaford identified a fundamental “paradigm tension between Silicon Valley’s problem-first iterative approach and traditional institutions’ principle-based approach.” She explained that Silicon Valley’s methodology involves starting with specific problems and iterating towards solutions, contrasting with traditional governance approaches that begin with broad principles and attempt to derive specific policies.


Seaford suggested this tension might be addressed by “meeting in the middle between top-down principle-based approaches and bottom-up problem-first approaches through the regulatory layer.”


### Two Categories of Problems


Seaford distinguished between two types of AI governance challenges: well-defined problems like scams and deepfakes where “industry needs clear rules and responsibility allocation,” and uncertain problems like existential risks where the approach remains unclear.


## Technical Challenges and Industry Perspectives


### AI’s Dual-Use Nature in Cybersecurity


Professor Song provided concrete examples through her research projects, including Cybergym and BountyBench, demonstrating that “AI is a dual-use technology” in cybersecurity. Her research indicates that “unfortunately, AI is going to help attackers more in the near future,” with AI agents now capable of finding zero-day vulnerabilities in widely distributed open-source software.


To address these challenges, Song advocated for “secure-by-design and safe-by-design approaches” and the development of quantitatively safe AI systems.
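
To make the notion of a quantitative safety guarantee concrete, one illustrative formalisation follows (a sketch of the general idea; the symbols below are assumptions introduced for exposition, not a formula presented in the session). Such a guarantee bounds the probability that a deployed model violates an explicit safety specification:

$$\Pr_{x \sim D}\big[\, M(x) \text{ violates } S \,\big] \le \varepsilon$$

where $M$ is the deployed model, $D$ the distribution of inputs it will face, $S$ a machine-checkable safety specification, and $\varepsilon$ an explicit, auditable risk bound. The open research problem Song describes is producing proofs, or at least strong statistical evidence, for bounds of this kind for modern AI systems.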


### Industry Appetite for Clear Regulation


Contrary to common assumptions, Seaford argued that “companies want clear regulation but need to avoid unpredictability and fragmentation across jurisdictions.” She explained that companies, particularly startups, find themselves “stuck between a rock and a hard place” – facing both unpredictability and regulatory fragmentation.


This perspective suggests that well-designed regulation could support innovation by providing clear guidelines and consistent expectations across jurisdictions.


## International Cooperation and Experimental Approaches


### Science-Based Policy as Common Ground


Professor Song proposed that “science and evidence-based policy could provide common ground for international cooperation,” suggesting that empirical approaches might bridge different governance philosophies. She argued that grounding discussions in scientific evidence could provide a shared foundation for policy development.


### Smaller Nations as Testing Grounds


Former Prime Minister Papandreou suggested that “small to medium-sized countries like Greece can serve as experimental grounds for AI governance approaches.” He referenced Greece’s participation in the Council of Europe AI convention and initiatives like green islands as examples of experimental governance.


### Collecting Best Practices


Dean Xue Lan emphasized the “tremendous potential for collecting best practices from companies and industries into global standards,” suggesting that effective governance might emerge from synthesizing successful approaches rather than developing entirely new frameworks.


## Societal and Democratic Implications


### Technology’s Impact on Democratic Discourse


Papandreou raised concerns about how “societies have become addicted to social platforms and AI rather than being freed by them,” creating challenges for democratic discourse. He advocated for bringing back “critical thinking and ethical considerations about power usage, drawing from ancient philosophical traditions.”


Papandreou also briefly mentioned the possibility of “a fourth branch of government – a deliberative branch using AI to enable citizen participation in policy-making” as one potential innovation in democratic governance.


### Responsibility Allocation Challenges


The complex AI development ecosystem, from chip manufacturers to end-user applications, creates significant challenges for responsibility allocation. Seaford noted the need for clear responsibility frameworks across “the complex AI supply chain.”


## Areas of Consensus and Remaining Tensions


### Strong Agreement


Despite the complexity of challenges, participants demonstrated consensus on:


– The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration


– The critical importance of international cooperation on AI governance


– The need for collaboration between industry and regulators


– The potential for science-based approaches to provide common ground


### Persistent Disagreements


Key tensions remained around:


– Methodological approaches: problem-first iterative versus principle-based ethical frameworks


– Industry attitudes toward regulation: whether companies genuinely want regulation or resist it due to power concentration


– The appropriate balance between national approaches and international coordination


## Potential Pathways Forward


The discussion identified several promising approaches:


**Experimental Governance**: Using smaller countries as testing grounds for governance innovations could provide empirical evidence about effectiveness while reducing risks of large-scale policy experiments.


**Safe Zones for Cooperation**: Treating AI governance as a “safe zone” for international cooperation could maintain collaborative relationships despite broader geopolitical tensions.


**Methodological Synthesis**: Combining systematic principle-based thinking with responsive iterative methodologies could bridge paradigmatic differences.


**Industry-Regulator Collaboration**: Recognizing shared interests in clear, predictable governance frameworks could enable more effective cooperation.


## Conclusion


This discussion revealed both the substantial challenges in translating AI governance principles into practice and the significant potential for progress through collaborative approaches. While obstacles including time constraints, uncertainty, geopolitical tensions, and power concentration are formidable, the level of consensus among diverse stakeholders suggests coordinated action remains possible.


The most significant insight may be that difficulties in translating principles into practice stem from mismatched methodological paradigms as much as from implementation itself. This suggests effective AI governance may require fundamental innovations in governance methodology, combining the strengths of different approaches rather than choosing between them.


The path forward likely requires synthesis: balancing international cooperation with respect for national differences, integrating technical solutions with regulatory frameworks, and bridging the gap between Silicon Valley’s iterative problem-solving and traditional institutions’ principle-based approaches.


Despite the urgency of these challenges, the discussion demonstrated substantial common ground among stakeholders from different sectors and nations, providing a foundation for developing effective AI governance frameworks that can keep pace with technological development while protecting human interests and democratic values.


Session transcript

Moderator: of the Berkeley RDI, that’s the Center for Responsible Decentralized Intelligence. And the session will be moderated by Professor Robert Trager of the University of Oxford, who’s also co-director of the Oxford Martin AI Governance Initiative. So please join me in warmly welcoming all our guests for our next discussion on laying the foundation of AI. I wish you all a very fruitful session.


Robert Trager: Right, I think we’re ready and I would like to welcome the panelists up to the stage now. First we have Dean Xue Lan, Professor Dawn Song, former Prime Minister George Papandreou, and finally Miss Artemis Seaford. So please welcome the panelists with me, yes, and welcome to you and good morning to you. We’re very glad to be here. This morning we’re going to be talking about principle to practice. In the last few years we have had many discussions of principle, many very important discussions of principle and statements of principle, and we have been trying to transform principle into practice, and therein lies the challenge. So that is the exciting topic that we want to discuss today. And we have really an all-star panel to talk about that. I believe they have been introduced to you, and so I won’t do that again, but they’re very exciting and we will just dive right into the discussion. So if I may, I will start with former Prime Minister Papandreou, and I want to ask each of them the same question, which is, from your perspective, what is the single greatest obstacle preventing us from translating AI governance principles into effective real-world policy and practice right now?


George Papandreou: Thank you very much, and it’s an honor to be here with such a distinguished panel. And I would say there’s not one single, but I would try to be very quick in saying what I think are important. In the Council of Europe, where I work, there is now a convention on AI, which, of course, is the institutional proposal of how we could work together, not only in Europe, but globally, and we do ask people to sign up to this. But that’s the institutional side, and as you said, there is the question of what are the obstacles to this. Going a little bit back, in the past, to ancient Greece, the ancient Greeks were very much preoccupied with how we use power. Now, AI, with other technologies, is an immense power that human beings have. The question is, do we use it well, or will we abuse it? And in ancient Greece, if you were abusing power, it was anathema to the gods, and you would be punished for that. Now, I think this is one big issue, how we use power, how we control power, and this is a question of both political will and institutions. Obstacles. First obstacle, geopolitical tensions. Are we in this world where we need to cooperate, because we will not solve the big issues, which could be solved with the help of technology, without cooperation, whether it’s climate, pandemics, or a financial crisis, which I went through. Will we cooperate, or will we compete? And this is where AI, we will have sort of a gain-of-function idea, to see who will be the best and outsmart the other side. That is a threat, not only to governance, but also to world peace. Secondly, AI and the Internet and social platforms were there to decentralize power. However, we are seeing a further centralization of power, and through the centralization of power, it’s also an economic power and a wealth power, and that does not allow for regulation, because the big tech giants and oligarchs do not like it. And this, I think, is another big obstacle. A third obstacle is the inequality, which is part of this previous problem, which exists, and as we are moving forward, we will see with AI and the need for big data centers and a lot of investment and access to these a widening divide in the world, which also may prevent governance. And a third, I would say fourth, is also the question of environment, as these will be energy-hungry industries. I think these are some of the basic issues. Maybe I would add at the end a behavioral issue. We have become, as societies, addicted rather than being freed, which is what these technologies should be doing, liberating us. We have become addicted in our societies to these social platforms and AI, and this could create a behavioral problem, which we as politicians know also creates problems in how we deliberate, how our discourse is, how democratic discourse is, to really be able to solve problems rather than bullying each other. I think these are some of the five basic issues which I see as risks, but also as obstacles to actually moving forward to implementation of some real control, democratic control and governance, both at the national but also at the global level. And at the global level, one more thing: I think the competition can also create this issue of split domains, where we will have separate domains, separate for Europe, separate for the US, separate for China, separate for other countries. This may also create problems for governance. Thank you very much.


Robert Trager: Thank you very much. Okay, I think we’ll turn now on this question of obstacles to Professor Dawn Song.


Dawn Song: Okay, great. I think this works. Yeah, this is a great question, and former Prime Minister George has made really good points. So my answer actually follows very well from George’s points. The one big obstacle we see is that there’s a lot of fragmentation in this space in terms of what is the right approach for AI policy and governance. And these are due to, as George mentioned, potentially different power balances and other geopolitical factors, and even within the AI research and policy community, there are actually different opinions. There’s fragmentation in terms of what is the best approach, how we should go about developing AI policy, AI governance and so on. And because of this challenge, we realized that we actually need to develop an approach to bring everyone together to the table and have essentially a unified way for people to have conversations about AI policy and AI governance to make progress. So recently, together with a number of other leading AI researchers, we actually put up a proposal called a path for science and evidence-based AI policy. The idea is that AI policy should be informed by science and evidence. And essentially, we need to advance a scientific understanding of AI risks and how we identify and mitigate them. And this is how we should inform AI policy and AI governance. So the idea is really, when you have these geopolitical tensions and other different opinions, people can talk about different things. But really, the goal and the hope is to ground all this conversation in science and evidence. This is the common language providing the common ground and foundation to actually have everyone come to the table and speak essentially in the same language. And then we can use this science and evidence to inform the decision-making for AI policy and so on. And of course, there are many challenges. What do we mean by evidence, and how do we collect evidence, and so on? So I think these are actually very important and great questions for the community together to explore and to work on.


Robert Trager: Great. Thanks so much. And I want to come back and ask how we do things on a timeline that is relevant to the pace of change. So I may come back to you on that in just a minute. But first, I will go to Artemis Seaford. Please, from your perspective, greatest obstacle.


Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actually be in the very question itself. So I work in Silicon Valley in industry, previously for larger companies and now for a startup. And I work in internal policy and safety. And there, our approach is to start with a practice, to start with a problem. So we think a lot about problems, harms, and risks. And then we try to figure out solutions to those. And for those types of problems where the solutions are not obvious, then we turn to governance. So what is governance, right? Governance is what are the processes and rules and principles that should determine AI. And policy is what should actually happen. So I think there is a bit, perhaps, of a paradigm tension between a more Silicon Valley, tech startup approach, and I’m sure many of you have heard of the term MVP, Minimum Viable Product, or the 80-20 rule in consulting. So that’s the idea that we start with a problem, we try to find the most efficient solution, we don’t need the perfect solution right away, and we iterate. And that is very much the approach that those of us who also work in the safety space and the responsible AI space are adapting to the problem. So when we talk to international rights lawyers and more traditional institutions that have a more principle-based approach, then the trick there is to meet in the middle, right? And the perfect place to meet in the middle for that is probably the regulatory layer. But I feel that there is this top-down and bottom-up approach, and the optimal state is meeting in the middle, but that takes time. So time is actually another obstacle, and I know I’m fighting your question by providing multiple obstacles like George did, but if you think about it, for all these complex industries of the past, the auto industry, the aviation industry, nuclear even, there was a lot more time between the development of the technology and its effective regulation. But time, as much as we need it, is not a luxury we can afford in this case. And finally, I think, an issue here is complexity. So we’re not talking about one problem, we’re talking about many problems, many different problems that also importantly interact, from alignment risk to adversarial abuse to accidents to ecosystem problems like labor and over-reliance, so it’s really hard to figure out a single governance and policy framework to address all of them. And on top of that, you add uncertainty. We just don’t know what’s going to happen. We don’t know how far our technical solutions will work yet on many of these problems. So all of these are obstacles, though not insurmountable obstacles, to translating governance into practice.


Robert Trager: Thank you. Yes. And now, I think to Dean Xue, and maybe if you want to address this issue of time also. Will this one work better? Yeah.


Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way. But let me also put it in a slightly different angle. I think that on the governance side, we always talk about this pacing problem. That is, technology moves very fast, while governance is moving much slower. And I think that’s probably the greatest challenge in terms of having a very practical and effective governance regime and policies. What is added to that is also the uncertainties involved. As colleagues have already mentioned, particularly when you are in some real applications and so on, what are the problems and risks ahead of the application? Nobody knows. Sometimes regulators feel that the companies know. But actually, we’ve done a lot of case studies and we’ve talked to the companies; they don’t know either. So in a way, there’s a sort of joint ignorance of the risks ahead. And that’s also a challenge. And so when you put those together, that really presents a tremendous challenge for effective governance.


Robert Trager: Maybe I’ll come to Professor Song now. You are a computer scientist who’s recognized as having made fundamental contributions in several areas of the field. A lot of our audience members, I suspect, are from the policy space. So maybe you could tell us about a particular technical innovation that you wish policymakers and the broad policy community knew more about.


Dawn Song: Yeah, that’s a great question. I think in AI safety and security, we are facing huge challenges. The field is moving really fast. AI is advancing at, you know, amazing speeds. But at the same time, we don’t really understand how deep learning, large language models really work and all these different types of risks and so on and how to mitigate them. So I feel that actually the biggest hindrance for deploying AI is how we can actually ensure the safety and security of AI. And in order to address that, as I mentioned, there are many, many challenges. And unfortunately today, we don’t really have very good solutions. So we actually need to develop new technologies. In particular, today as we deploy AI, we know that hallucinase has various vulnerabilities and so on. So there’s actually no proof of guarantees that the deployed AI systems will be trustworthy, will be safe and secure and so on. So this is a huge, actually, open challenge in the space. And to address this, recently we actually have launched an effort together with Yoshua Bengio and a number of other leading AI researchers on what we call quantitatively safe AI, essentially how we can develop AI systems with better proof of guarantees, with better quantitative safety guarantees and so on. And this, I would say, is particularly important as going forward as we deploy AI systems that makes more important decisions and so on. And I would like to just give a very quick example of a particular area, which is also in, for example, cybersecurity. So recently, we also have done some work showing that AI agents’ capabilities in cybersecurity has increased drastically. So our recent work on Cybergym and BountyBench, so this actually showed, for the first time, showed that the AI agents can now find zero days in widely distributed open source software actually relatively easily. And also, AI agents now can solve these bounty tasks, which are bounties that developers put out in the real world to get white hat hackers to help them find vulnerabilities and produce patches with monetary rewards. And these AI agents can now also solve these bounty tasks that’s worth tens of thousands of dollars and so on. So this shows that AI is actually improving really fast in its capabilities in cybersecurity. However, AI is a dual-use technology. It can help both attacker side and defender side. So then who is AI going to help more? So with our recent analysis, it shows that, unfortunately, AI is going to help attackers more in the near future. But however, overall, the state on the internet, the security posture is really not actually very good. And with the increased AI capabilities helping attackers, this can really help attackers to significantly reduce attack costs and increase scale. So then we really need to figure out what we can do to change this balance between attackers and defenders to have AI help defenders more. So this also leads to what I mentioned earlier about developing what we call safe-by-design, secure-by-design approaches for building secure systems and also building secure AI systems so that we can provide more approval guarantees to actually help essentially shift the balance to help AI help defenders more. So this is a quick summary of what I hope that actually policymakers can actually pay more attention to, the safe-by-design, secure-by-design approach.


Robert Trager: Secure-by-design, exactly. Something that requires a lot of work to get there; we’re not there yet. So AI may be advantaging attackers for now, but with work, it sounds like we might get to a place where the reverse is true.


Dawn Song: Yes, we hope that’s where we end up.


Robert Trager: Yes, thank you so much. I wanted to turn to Mr. Papandreou to ask you, since you’re the former Prime Minister of Greece as well as Foreign Minister: how do you see the role of Greece, and of countries broadly that are in a somewhat similar position, in AI governance both domestically and internationally?


George Papandreou: I can see that a country like Greece, which is a small to medium-sized country, which is both in the European Union, so a developed country, but also has a lot of the problems of developing countries, could be a very good experiment in how we can deal with some of these challenges. We’re a country of many islands and mountains, so pilot projects could also be very important. We’ve already done this with the environment, for example. We have some green islands completely free of fossil fuel. So that’s an experiment. How is our future society? Why not use an island or a small mountain village as an experiment for the future, working with the international community? But I would add one other element, again going back to our tradition, Greece being a centre of the ancient philosophers and democracy and so on. We need to bring back critical thinking, ethical issues. The ancient philosophers may not have had all the solutions, but they did put all the questions, the ethical questions, and particularly around power. How do we use power? What is it and how do we debate? How do we create just societies? How do we interact with each other in a peaceful way? That’s what democracy is. How do we make sure that power is not concentrated in the hands of a few? I think these are the issues which we need to go back to. So I think a global conversation on the ethical thinking, the education, the use of the powers we have is paramount, is absolutely important. And I would just add to this three areas which I think are difficult, which we have to look at. Again, the concentration of power. If we go back to ancient Greece, politics was basically to imagine that citizens can have a view of their future, can shape their future. What we’ve seen now is a movement from political institutions, and many people here work in institutions, outside of the normal governance structures, to powers beyond. So the big corporations, huge powers, influencing politics, and that of course puts governments in a very difficult situation. We do not really have the power that people think we do. Secondly is this idea that simply with technology, I think Artemis, you mentioned, let’s just do more and more and more technology, that would solve the world. No, technology is not neutral. How it is used is important, so we have to think about that issue. The final idea is complexity and uncertainty, which you all mentioned: complexity, uncertainty, and time. For us politicians, the complexity of the world means we need, first of all, cooperation, so trust, but to bring trust, we have to bring in our citizens. So going back to an ancient idea, and I proposed this in the Council of Europe in one of my reports on participative and deliberative democracy: in ancient times, well, not everybody was free. There were slaves, there were women that didn’t have rights, but the idea that citizens participate was absolutely important. I think we cannot deal with these complex issues if we don’t bring in the collective wisdom of our societies and the agency of our societies.
So why not create a fourth branch of government, a deliberative branch, using AI, so merging technology for democracy, where citizens can deliberate on all the laws and policies which we are discussing as politicians, where they have a voice, where algorithms allow for real debate, not bullying, not polarization, but real debate, but also consensus building, where everybody has a voice, but one voice, and truly can participate. So I think we have to give responsibility and bring in agency and move away from these platforms, well, they will exist, these platforms which give a false sense of empowerment and a lot of frustration and a lot of loneliness and a lot of bullying, where we really bring back the idea of citizenship, which I think is a very deep, ancient Greek idea.


Robert Trager: Good. We can finally end this here. Thank you. I appreciate it. Yes, and I think preventing concentrations of power in this space is something that, broadly, the world can really agree on and get behind. I wanna turn to Dean Xue. So both companies and countries have started developing best practices for governing advanced AI. What are the prospects for internationalizing this, internationalizing best practice, coming to common standards, and so on?


Lan Xue: I think there’s a tremendous potential of really how to really collect the best ideas. I see a lot of companies, a lot of industries have already done so in having their own best practices and how we actually can collect them and add international platform to exchange ideas and to make sure that they can be synthesized into global standards and so on. I think the challenge, though, I think my colleagues have already alluded to, is the current geopolitical environment. I think on this score, I think I must say that China, since it was often viewed as a party in this geopolitical competition, I have to say that actually China did not, China has not, does not want to get into this geopolitical competition. China has never been fully understand why China’s viewed as an adversary in this situation. And China’s participated in every possible international venue to try to work with various parties to make sure that indeed best practice can be synthesized, can be worked together. And we, of course, many of Chinese companies are in various agreements in signing up to that. So I think this issue, I think that I would really hope that all the experts here and so on, we can all agree that the AI governance, AI safety is a safe zone, that no matter what differences you have, this is something we can work together. This is for the humanity. It’s not for any single country’s interest. So there I would say that I think that if we can address that issue, I’m sure that human wisdom over the thousands of years of the culture, of the philosophy and the moral standard, I’m sure we can find a solution. I hope so. And it seems a critical area to transform principle into practice.


Robert Trager: I hope so. And it seems a critical area for transforming principle into practice. Thank you. So I turn last to Ms. Artemis Seaford. Startups often worry that regulation will stifle innovation. What is one smart regulatory action governments could take that would actually help companies like yours build responsible AI without harming innovation?


Artemis Seaford: That is a great question. So there is a misconception that companies do not want regulation. And maybe there are companies out there that don’t want regulation, but most companies, smart companies, actually want regulation. The issue, and this is particularly acute for startups that are small and need to grow fast and don’t have a lot of people, is that companies, particularly in the current moment in the AI space, are stuck between a rock and a hard place. The rock is uncertainty and unpredictability. And the hard place is fragmentation of the rules. It’s different jurisdictions coming up with a bunch of things that don’t necessarily make sense together, either across jurisdictions or even in one jurisdiction, where the various rules that may apply to a problem may conflict. So companies want regulation, but regulation that avoids both the rock of unpredictability and the hard place of fragmentation. So what does that mean in practice? I would actually classify the issues here in two broad buckets. One is the bucket where we know what the problem is, right? Take scams, deep fakes, non-consensual sexual imagery, illegal uses of AI. That’s a clearly defined and scoped problem. And we’re already starting to see in practice how bad actors can use AI to advance it. There, what we need in industry is clear rules. So what is the line? And who is responsible for holding it? A big question in industry is the chain of responsibility. If you think about it, the AI industry is incredibly complex, right? You have the chip manufacturers, and then you have the model developers, and then you might have product deployers, and then you might have a customer, and then a customer that buys from that customer. So it’s really complicated to understand how to allocate responsibility for a downstream problem up the vertical chain. So clarity on what the solution is and who owns responsibility for it will actually be quite welcome by industry if done sensibly. The second bucket is problems that have uncertainty, like is AI gonna kill us all and how, right? We just don’t know yet. Or how are really bad actors gonna abuse AI when it becomes extremely advanced? Because there you have uncertainty, you don’t have a clear solution. So that takes us back to governance. There you need to create the bodies that work with industry over time to share information, reduce informational asymmetries, and do iterative policy that doesn’t hamper innovation too early but also doesn’t catch the problem too late.


Robert Trager: Great. Yes, thanks. Thanks to all the panelists. I think we just have a tiny bit of time. So I guess maybe it’s up to me to quickly sum up what has been a very rich discussion. So what were the fundamental obstacles that we identified? I think four of perhaps the overarching main ones that really run through the discussion are, number one, time: how little time we have given the pace of technology. Number two, the uncertainty we have about technological futures given that time. Number three, geopolitics, of course. Everybody knew that one was coming. And finally, number four, concentrations of power. And we heard both about challenges, in the sense of AI finding zero days, for instance, and also about hope, in the sense of AI, maybe even in the near future, if we work at it, providing solutions to some of those very challenges. So unfortunately, we have to leave it right there. But please join me in thanking this wonderful panel. Thank you.


Moderator: Many thanks to you, Robert, and your distinguished guests for this fascinating first discussion, kick-starting the AI Governance Dialogue in a beautiful fashion. Thank you so much. And we heard you, Mr. Papandreou, mention the word trust. Well, that’s what’s coming up next.



George Papandreou

Speech speed

149 words per minute

Speech length

1296 words

Speech time

519 seconds

Geopolitical tensions prevent necessary global cooperation and create competitive dynamics that threaten governance and world peace

Explanation

Papandreou argues that in a world where cooperation is essential to solve major issues like climate change and pandemics, the competitive dynamics around AI create a ‘gain-of-function’ mentality where countries try to outsmart each other. This competition threatens not only effective governance but also global peace and stability.


Evidence

References to climate change, pandemics, and financial crises as examples of issues requiring cooperation that he experienced as Prime Minister


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Lan Xue

Agreed on

Need for international cooperation despite geopolitical challenges


Centralization of power through big tech giants and oligarchs prevents effective regulation and democratic control

Explanation

Despite AI and internet technologies being designed to decentralize power, Papandreou observes that power has actually become more centralized in the hands of big tech companies and oligarchs. This concentration of economic and wealth power creates resistance to regulation and undermines democratic governance.


Evidence

Observation that AI and Internet were originally intended to decentralize power but have achieved the opposite effect


Major discussion point

Power Concentration and Democratic Governance


Topics

Legal and regulatory | Economic


Agreed with

– Robert Trager

Agreed on

Power concentration as a critical threat to democratic governance


Need for a fourth branch of government – a deliberative branch using AI to enable citizen participation in policy-making

Explanation

Drawing from ancient Greek democratic traditions, Papandreou proposes creating a new governmental branch that would use AI to facilitate genuine citizen deliberation on laws and policies. This would move beyond current social platforms that create false empowerment and instead provide real democratic participation where every citizen has one voice in consensus-building processes.


Evidence

References to ancient Greek democracy and participative democracy concepts, mentions his report on participative and deliberative democracy for the Council of Europe


Major discussion point

Power Concentration and Democratic Governance


Topics

Legal and regulatory | Sociocultural


Movement of power from political institutions to corporations beyond normal governance structures weakens democratic control

Explanation

Papandreou explains that there has been a shift of power away from traditional political institutions to large corporations that operate outside normal governance structures. This transfer of power puts governments in difficult positions as they lack the authority that citizens expect them to have.


Evidence

Personal experience as a former Prime Minister observing the limitations of governmental power relative to corporate influence


Major discussion point

Power Concentration and Democratic Governance


Topics

Legal and regulatory | Economic


Agreed with

– Robert Trager

Agreed on

Power concentration as a critical threat to democratic governance


Small to medium-sized countries like Greece can serve as experimental grounds for AI governance approaches

Explanation

Papandreou suggests that countries like Greece, which are developed but face challenges similar to developing nations, can serve as valuable testing grounds for AI governance experiments. The country’s geography of islands and mountains makes it suitable for pilot projects.


Evidence

Examples of Greece’s existing environmental experiments with green islands completely free of fossil fuels


Major discussion point

International Cooperation and Standards


Topics

Legal and regulatory | Development


Societies have become addicted to social platforms and AI rather than being freed by them, creating behavioral problems for democratic discourse

Explanation

Instead of liberating societies as intended, AI and social platforms have created addiction and behavioral problems that undermine democratic discourse. This addiction leads to bullying, polarization, and prevents the kind of constructive deliberation needed to solve complex problems.


Evidence

Observation of how social platforms affect political discourse and democratic deliberation based on his experience as a politician


Major discussion point

Societal and Behavioral Impacts


Topics

Sociocultural | Human rights


Need to bring back critical thinking and ethical considerations about power usage, drawing from ancient philosophical traditions

Explanation

Papandreou argues that ancient Greek philosophers, while not having all the solutions, asked the right ethical questions about power usage, justice, and peaceful interaction. He believes we need to return to this tradition of critical thinking and ethical reasoning to address AI governance challenges.


Evidence

References to ancient Greek philosophical traditions and their focus on ethical use of power and democratic principles


Major discussion point

Societal and Behavioral Impacts


Topics

Sociocultural | Human rights


Disagreed with

– Artemis Seaford

Disagreed on

Approach to AI governance – principles vs. practice-first


Technology is not neutral and how it is used matters significantly

Explanation

Papandreou challenges the notion that simply developing more technology will solve world problems, arguing instead that technology is not neutral and that the manner of its implementation and use is crucial. This perspective emphasizes the importance of conscious decision-making about technological deployment.


Major discussion point

Societal and Behavioral Impacts


Topics

Sociocultural | Legal and regulatory



Dawn Song

Speech speed

142 words per minute

Speech length

989 words

Speech time

415 seconds

Fragmentation in approaches to AI policy and governance due to different opinions within research and policy communities requires science and evidence-based solutions

Explanation

Song identifies fragmentation as a major obstacle, noting that even within AI research and policy communities, there are different opinions about the best approaches to AI governance. To address this, she proposes grounding all conversations in science and evidence as a common language that can bring everyone to the table.


Evidence

References to a recent proposal she co-authored with leading AI researchers for ‘a path for science and evidence-based AI policy’


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory | Sociocultural


AI systems lack proof of guarantees for trustworthiness, safety, and security, requiring development of quantitatively safe AI with better guarantees

Explanation

Song explains that current AI systems, including large language models, are not well understood and lack proof of guarantees for safety and security. She argues for developing new technologies that can provide quantitative safety guarantees, especially as AI systems make increasingly important decisions.


Evidence

References to work with Yoshua Bengio and other leading researchers on ‘quantitatively safe AI’ and mentions issues like hallucination and various vulnerabilities in current AI systems


Major discussion point

Technical Challenges and AI Safety


Topics

Cybersecurity | Legal and regulatory


AI agents increasingly capable in cybersecurity but currently advantage attackers more than defenders, requiring secure-by-design approaches

Explanation

Song presents research showing that AI agents can now find zero-day vulnerabilities in open source software and solve cybersecurity bounty tasks worth tens of thousands of dollars. However, her analysis indicates that AI currently helps attackers more than defenders, creating a concerning imbalance in cybersecurity.


Evidence

Specific research projects including Cybergym and BountyBench that demonstrated AI agents finding zero days and solving bounty tasks worth tens of thousands of dollars


Major discussion point

Technical Challenges and AI Safety


Topics

Cybersecurity


Need for safe-by-design and secure-by-design approaches to shift the balance toward helping defenders

Explanation

To address the current advantage that AI provides to attackers, Song advocates for developing safe-by-design and secure-by-design approaches for building both secure systems and secure AI systems. This would help shift the balance to make AI more beneficial for defenders in cybersecurity contexts.


Evidence

Analysis showing that AI reduces attack costs and increases scale for attackers, while the overall security posture on the internet is poor


Major discussion point

Technical Challenges and AI Safety


Topics

Cybersecurity | Infrastructure



Artemis Seaford

Speech speed

155 words per minute

Speech length

909 words

Speech time

351 seconds

Paradigm tension between Silicon Valley’s problem-first iterative approach and traditional institutions’ principle-based approach creates implementation challenges

Explanation

Seaford identifies a fundamental tension between the Silicon Valley approach of starting with problems and iterating solutions (MVP, 80-20 rule) versus traditional institutions that begin with principles. She argues that the optimal solution is meeting in the middle, likely at the regulatory layer, but this takes time.


Evidence

References to Silicon Valley concepts like Minimum Viable Product (MVP) and the 80-20 rule from consulting, contrasting with approaches from international rights lawyers and traditional institutions


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory | Economic


Disagreed with

– George Papandreou

Disagreed on

Approach to AI governance – principles vs. practice-first


Companies want clear regulation but need to avoid unpredictability and fragmentation across jurisdictions

Explanation

Contrary to common misconceptions, Seaford argues that most smart companies actually want regulation, but they are caught between uncertainty/unpredictability and fragmentation of rules across different jurisdictions. Companies need regulation that provides clarity while avoiding conflicting requirements.


Evidence

Her experience working in Silicon Valley for both larger companies and startups in internal policy and safety roles


Major discussion point

Regulatory Approaches and Industry Needs


Topics

Legal and regulatory | Economic


For well-defined problems, industry needs clear rules and responsibility allocation along the complex AI supply chain

Explanation

For problems where the issues are clearly understood (like scams, deepfakes, illegal AI uses), Seaford argues that industry needs clear rules about what the boundaries are and who is responsible for enforcement. She emphasizes that the complexity of the AI supply chain, from chip manufacturers to end customers, makes responsibility allocation particularly challenging.


Evidence

Examples of clearly defined problems including scams, deep fakes, non-consensual sexual imagery, and illegal uses of AI; description of the complex AI supply chain from chip manufacturers through model developers to end customers


Major discussion point

Regulatory Approaches and Industry Needs


Topics

Legal and regulatory | Human rights


For uncertain problems, iterative governance bodies are needed to work with industry over time

Explanation

For problems with high uncertainty (like existential AI risks or advanced adversarial abuse), Seaford argues that traditional clear rules aren’t sufficient. Instead, governance bodies need to be created that can work iteratively with industry, sharing information and reducing asymmetries, while avoiding both hampering innovation too early and catching problems too late.


Evidence

Examples of uncertain problems like ‘is AI gonna kill us all and how’ and advanced adversarial abuse scenarios


Major discussion point

Regulatory Approaches and Industry Needs


Topics

Legal and regulatory | Cybersecurity



Lan Xue

Speech speed

140 words per minute

Speech length

512 words

Speech time

219 seconds

Technology moves very fast while governance moves much slower, creating a fundamental pacing problem

Explanation

Xue identifies the speed differential between technological advancement and governance development as the greatest challenge for creating practical and effective governance regimes and policies. This pacing problem makes it difficult for governance structures to keep up with technological changes.


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory


Agreed with

– Artemis Seaford
– Robert Trager

Agreed on

Time and uncertainty as fundamental obstacles to AI governance


Joint ignorance of risks ahead exists between regulators and companies, adding to governance challenges

Explanation

Xue explains that there is a shared uncertainty about future risks and problems, with both regulators and companies lacking knowledge about what challenges lie ahead in AI applications. This mutual ignorance, combined with the pacing problem, creates tremendous challenges for effective governance.


Evidence

Case studies and conversations with companies showing that even companies don’t know the risks ahead, contrary to regulator assumptions


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory


Agreed with

– Artemis Seaford
– Robert Trager

Agreed on

Time and uncertainty as fundamental obstacles to AI governance


Tremendous potential exists for collecting best practices from companies and industries into global standards

Explanation

Xue sees significant opportunity in gathering best practices that companies and industries have already developed and synthesizing them into global standards through international platforms for idea exchange. This could provide a foundation for international cooperation on AI governance.


Evidence

Observation that many companies and industries have already developed their own best practices


Major discussion point

International Cooperation and Standards


Topics

Legal and regulatory | Economic


AI governance and safety should be a safe zone for international cooperation regardless of other differences

Explanation

Xue argues that AI governance and safety should transcend geopolitical competition and serve as an area where all parties can work together for humanity’s benefit, regardless of other political differences. He specifically mentions China’s willingness to participate in international cooperation on these issues.


Evidence

China’s participation in international venues and Chinese companies signing various agreements; China’s position that it doesn’t want geopolitical competition in AI governance


Major discussion point

International Cooperation and Standards


Topics

Legal and regulatory | Cybersecurity


Agreed with

– George Papandreou

Agreed on

Need for international cooperation despite geopolitical challenges


R

Robert Trager

Speech speed

130 words per minute

Speech length

765 words

Speech time

350 seconds

Four fundamental obstacles prevent translating AI governance principles into practice: time constraints, technological uncertainty, geopolitics, and power concentration

Explanation

Trager synthesizes the panel discussion by identifying four overarching obstacles that run through all the panelists’ contributions: the limited time available given the pace of technological change, uncertainty about technological futures, geopolitical tensions, and concentrations of power. Together, these form the main barriers to implementing effective AI governance.


Evidence

Summary of all panelists’ contributions throughout the discussion


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory | Economic


Agreed with

– George Papandreou

Agreed on

Power concentration as a critical threat to democratic governance


AI presents both significant challenges and potential solutions, requiring balanced consideration of risks and opportunities

Explanation

Trager notes that the discussion revealed AI as presenting both serious challenges (such as finding zero-day vulnerabilities) and potential solutions to those same challenges. This duality suggests that, with proper work and attention, AI could help solve some of the problems it creates.


Evidence

References to Dawn Song’s research on AI agents finding zero-day vulnerabilities and the potential for AI to help defenders


Major discussion point

Technical Challenges and AI Safety


Topics

Cybersecurity | Legal and regulatory


M

Moderator

Speech speed

128 words per minute

Speech length

64 words

Speech time

30 seconds

The central challenge in AI governance is transforming principles into practical implementation

Explanation

The moderator frames the entire discussion around the key challenge of moving from theoretical principles and statements to actual real-world policy and practice. This transformation from principle to practice represents the core difficulty facing AI governance efforts globally.


Evidence

References to many years of important discussions and statements of principle that now need practical implementation


Major discussion point

Obstacles to Translating AI Governance Principles into Practice


Topics

Legal and regulatory


Preventing concentrations of power in AI is a unifying goal that the world can broadly agree on

Explanation

The moderator identifies power concentration as an area where there could be broad international consensus and cooperation. This suggests that addressing power concentration could serve as a foundation for broader AI governance cooperation across different countries and political systems.


Evidence

Observation of common themes across panelists’ discussions about power concentration


Major discussion point

Power Concentration and Democratic Governance


Topics

Legal and regulatory | Economic


Agreed with

– George Papandreou
– Robert Trager

Agreed on

Power concentration as a critical threat to democratic governance


Agreements

Agreement points

Time and uncertainty as fundamental obstacles to AI governance

Speakers

– Artemis Seaford
– Lan Xue
– Robert Trager

Arguments

Time is another fundamental obstacle: complex industries of the past, such as automobiles, aviation, and even nuclear power, had far more time between the development of the technology and its effective regulation, and that time is not a luxury AI governance can afford


Technology moves very fast while governance moves much slower, creating a fundamental pacing problem


Joint ignorance of risks ahead exists between regulators and companies, adding to governance challenges


Four fundamental obstacles prevent translating AI governance principles into practice: time constraints, technological uncertainty, geopolitics, and power concentration


Summary

All speakers agree that the rapid pace of technological development creates a fundamental timing problem for governance, compounded by uncertainty about future risks and technological developments.


Topics

Legal and regulatory


Power concentration as a critical threat to democratic governance

Speakers

– George Papandreou
– Robert Trager

Arguments

Centralization of power through big tech giants and oligarchs prevents effective regulation and democratic control


Movement of power from political institutions to corporations beyond normal governance structures weakens democratic control


Four fundamental obstacles prevent translating AI governance principles into practice: time constraints, technological uncertainty, geopolitics, and power concentration


Preventing concentrations of power in AI is a unifying goal that the world can broadly agree on


Summary

Both speakers identify power concentration in tech companies as undermining democratic institutions and effective governance, with Trager noting this as an area for potential global consensus.


Topics

Legal and regulatory | Economic


Need for international cooperation despite geopolitical challenges

Speakers

– George Papandreou
– Lan Xue

Arguments

Geopolitical tensions prevent necessary global cooperation and create competitive dynamics that threaten governance and world peace


AI governance and safety should be a safe zone for international cooperation regardless of other differences


Summary

Both speakers acknowledge geopolitical tensions as obstacles but emphasize the critical need for international cooperation on AI governance as a shared human concern.


Topics

Legal and regulatory | Cybersecurity


Similar viewpoints

Both speakers identify fragmentation and different approaches as major obstacles, with Song proposing science-based solutions and Seaford advocating for meeting in the middle between different paradigms.

Speakers

– Dawn Song
– Artemis Seaford

Arguments

Fragmentation in approaches to AI policy and governance due to different opinions within research and policy communities requires science and evidence-based solutions


Paradigm tension between Silicon Valley’s problem-first iterative approach and traditional institutions’ principle-based approach creates implementation challenges


Companies want clear regulation but need to avoid unpredictability and fragmentation across jurisdictions


Topics

Legal and regulatory | Economic


Both speakers emphasize the need for iterative, collaborative approaches between industry and governance bodies to address AI safety challenges, particularly given current technical limitations.

Speakers

– Dawn Song
– Artemis Seaford

Arguments

For uncertain problems, iterative governance bodies are needed to work with industry over time


AI systems lack provable guarantees of trustworthiness, safety, and security, requiring development of quantitatively safe AI with stronger guarantees


Topics

Legal and regulatory | Cybersecurity


Both speakers reject technological determinism and emphasize the importance of conscious decision-making about how technology is implemented and regulated.

Speakers

– George Papandreou
– Artemis Seaford

Arguments

Technology is not neutral and how it is used matters significantly


For well-defined problems, industry needs clear rules and responsibility allocation along the complex AI supply chain


Topics

Legal and regulatory | Sociocultural


Unexpected consensus

Industry desire for regulation

Speakers

– Artemis Seaford

Arguments

Companies want clear regulation but need to avoid unpredictability and fragmentation across jurisdictions


Explanation

Seaford’s assertion that ‘most companies, smart companies actually want regulation’ challenges the common assumption that industry opposes regulation. This represents an unexpected consensus opportunity between industry and regulators.


Topics

Legal and regulatory | Economic


China’s cooperative stance on AI governance

Speakers

– Lan Xue

Arguments

AI governance and safety should be a safe zone for international cooperation regardless of other differences


Explanation

Xue’s explicit statement that China ‘does not want to get into this geopolitical competition’ and views AI governance as transcending political differences represents an unexpected opening for international cooperation despite broader geopolitical tensions.


Topics

Legal and regulatory | Cybersecurity


Mutual ignorance between regulators and companies

Speakers

– Lan Xue

Arguments

Joint ignorance of risks ahead exists between regulators and companies, adding to governance challenges


Explanation

The acknowledgment that both regulators and companies lack knowledge about future AI risks creates an unexpected basis for collaborative learning and shared problem-solving rather than adversarial relationships.


Topics

Legal and regulatory


Overall assessment

Summary

The speakers demonstrate strong consensus on fundamental challenges (time, uncertainty, power concentration, geopolitical tensions) and the need for international cooperation, with surprising agreement on industry-regulator collaboration opportunities.


Consensus level

High level of consensus on problem identification and broad solution directions, suggesting significant potential for coordinated action on AI governance despite acknowledged obstacles. The agreement spans technical experts, policymakers, industry representatives, and academics, indicating robust cross-sector alignment on core issues.


Differences

Different viewpoints

Approach to AI governance – principles vs. practice-first

Speakers

– Artemis Seaford
– George Papandreou

Arguments

Paradigm tension between Silicon Valley’s problem-first iterative approach and traditional institutions’ principle-based approach creates implementation challenges


Need to bring back critical thinking and ethical considerations about power usage, drawing from ancient philosophical traditions


Summary

Seaford advocates for a Silicon Valley approach starting with problems and iterating toward solutions (MVP, the 80-20 rule), while Papandreou emphasizes returning to principle-based ethical thinking and the philosophical foundations of ancient Greek traditions. This represents a fundamental tension between pragmatic iteration and principled foundation-building.


Topics

Legal and regulatory | Sociocultural


Unexpected differences

Industry’s relationship with regulation

Speakers

– George Papandreou
– Artemis Seaford

Arguments

Centralization of power through big tech giants and oligarchs prevents effective regulation and democratic control


Companies want clear regulation but need to avoid unpredictability and fragmentation across jurisdictions


Explanation

This disagreement is unexpected because it reveals fundamentally different views of industry motivation. Papandreou presents big tech as resistant to regulation due to power concentration, while Seaford, speaking from industry experience, argues that smart companies actually want regulation. This represents a significant gap between political and industry perspectives on regulatory appetite.


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The panel showed relatively low levels of direct disagreement, with most speakers identifying similar obstacles (time, uncertainty, geopolitics, power concentration) but proposing different solutions. The main areas of disagreement centered on methodological approaches (principles vs. practice-first) and industry attitudes toward regulation.


Disagreement level

Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and approaches. This suggests that while there is consensus on challenges, the path forward remains contested. The disagreements are more complementary than contradictory, indicating potential for synthesis rather than fundamental incompatibility. However, the different perspectives on industry motivation and governance approaches could create implementation challenges if not reconciled.



Takeaways

Key takeaways

Four fundamental obstacles prevent translating AI governance principles into practice: time constraints due to rapid technological advancement, uncertainty about technological futures, geopolitical tensions, and concentrations of power


AI governance requires a science and evidence-based approach to provide common ground for fragmented stakeholders with different opinions and geopolitical interests


There is a paradigm tension between Silicon Valley’s iterative, problem-first approach and traditional institutions’ principle-based approach that needs to be bridged through regulatory frameworks


Current AI systems lack provable guarantees of safety and security, and AI currently advantages attackers over defenders in cybersecurity, requiring the development of secure-by-design approaches


Companies want clear regulation but need predictability and consistency across jurisdictions, with clear responsibility allocation along complex AI supply chains


Power has shifted from democratic institutions to corporations, requiring new forms of citizen participation and deliberative democracy to address complex AI governance challenges


International cooperation on AI governance and safety should be treated as a ‘safe zone’ regardless of other geopolitical differences, as it serves humanity’s collective interests


Resolutions and action items

Develop quantitatively safe AI systems with stronger provable guarantees of trustworthiness and security


Advance secure-by-design and safe-by-design approaches to shift the cybersecurity balance toward helping defenders


Create governance bodies that work iteratively with industry over time to address uncertain AI problems


Establish clear rules and responsibility allocation for well-defined AI problems like scams and deepfakes


Collect and synthesize best practices from companies and industries into global standards through international platforms


Consider creating a fourth branch of government – a deliberative branch using AI to enable citizen participation in policy-making


Unresolved issues

How to effectively address the fundamental pacing problem between rapid technological advancement and slower governance processes


How to overcome geopolitical tensions that prevent necessary global cooperation on AI governance


How to prevent further concentration of power in big tech companies while maintaining innovation


How to allocate responsibility across the complex AI supply chain from chip manufacturers to end users


How to balance the need for regulation with avoiding fragmentation across different jurisdictions


How to address the ‘joint ignorance’ of risks between regulators and companies


How to implement participative and deliberative democracy mechanisms using AI technology


How to determine what constitutes valid ‘science and evidence’ for AI policy-making


Suggested compromises

Meet in the middle between top-down principle-based approaches and bottom-up problem-first approaches through the regulatory layer


Use small to medium-sized countries like Greece as experimental grounds for AI governance approaches before broader implementation


Treat AI governance and safety as a neutral ‘safe zone’ for international cooperation despite other geopolitical differences


Implement iterative policy approaches that don’t hamper innovation too early but also don’t catch problems too late


Ground policy discussions in science and evidence as a common language to bridge different stakeholder perspectives


Thought provoking comments

The greatest obstacle, in my opinion, to translating AI governance principles into practice may actually be in the very question itself. So I work in Silicon Valley in industry… our approach is to start with a practice, to start with a problem… there is a bit, perhaps, of a paradigm tension between a more Silicon Valley tech startup approach… and more traditional institutions that have a more principle-based approach

Speaker

Artemis Seaford


Reason

This comment is deeply insightful because it challenges the fundamental framing of the entire discussion. Rather than accepting that principles should be translated into practice, Seaford suggests the problem lies in starting with principles at all. She introduces the concept of paradigm tension between bottom-up (problem-first) and top-down (principle-first) approaches, which reframes the entire governance challenge.


Impact

This comment fundamentally shifted the discussion from ‘how do we implement principles’ to ‘maybe we’re approaching this backwards.’ It introduced the critical insight that the methodology of governance itself might be the obstacle, leading to a more nuanced understanding of why implementation is difficult. It also validated the industry perspective while showing how it could complement traditional governance approaches.


AI is a dual-use technology. It can help both the attacker side and the defender side… our recent analysis shows that, unfortunately, AI is going to help attackers more in the near future… AI agents can now find zero days in widely distributed open source software relatively easily

Speaker

Dawn Song


Reason

This comment is particularly thought-provoking because it moves beyond abstract governance discussions to concrete, immediate security implications. Song provides specific evidence that AI is already capable of finding zero-day vulnerabilities and solving high-value cybersecurity bounties, while simultaneously revealing that this capability currently favors attackers over defenders.


Impact

This comment grounded the entire discussion in urgent, practical reality. It demonstrated that AI governance isn’t just about future hypothetical risks but about current, measurable capabilities that are already shifting power balances in cybersecurity. It also introduced hope by suggesting that ‘secure-by-design’ approaches could eventually reverse this advantage, showing how technical solutions could address governance challenges.


Why not create a fourth branch of government, a deliberative branch, using AI, so merging technology for democracy, where citizens can deliberate on all the laws and policies… where algorithms allow for real debate, not bullying, not polarization, but real debate, but also consensus building

Speaker

George Papandreou


Reason

This comment is remarkably innovative because it proposes using AI itself as a solution to democratic governance challenges. Rather than seeing AI as something to be governed, Papandreou suggests AI could enhance democratic participation and deliberation. The idea of a ‘fourth branch of government’ is constitutionally radical and addresses the concentration of power problem he identified earlier.


Impact

This comment introduced a completely new dimension to the discussion – the possibility that AI could be part of the solution to its own governance challenges. It connected ancient Greek democratic ideals with cutting-edge technology, showing how historical wisdom could inform future governance structures. It also shifted the conversation from defensive (how to control AI) to constructive (how to use AI for better governance).


Companies want regulation, but regulation that avoids both the rock of unpredictability and the hard place of fragmentation… The issue is… companies, particularly in the current moment in the AI space, are stuck between a rock and a hard place

Speaker

Artemis Seaford


Reason

This comment is insightful because it directly contradicts the common assumption that companies oppose regulation. Seaford reveals that the real industry concern isn’t regulation itself, but poorly designed regulation that creates uncertainty and fragmentation. She also provides a clear framework for understanding what makes regulation helpful versus harmful to innovation.


Impact

This comment significantly shifted the tone of the discussion from adversarial (regulators vs. industry) to collaborative. It showed that there’s potential alignment between industry needs and governance goals, which opened up space for more constructive policy discussions. It also provided concrete guidance for policymakers on how to design regulation that supports rather than hinders responsible AI development.


China does not want to get into this geopolitical competition. China has never fully understood why it is viewed as an adversary in this situation… AI governance, AI safety is a safe zone: no matter what differences you have, this is something we can work on together

Speaker

Lan Xue


Reason

This comment is thought-provoking because it directly addresses the geopolitical elephant in the room from a Chinese perspective. Xue’s assertion that China doesn’t understand why it’s viewed as an adversary and his proposal that AI safety should be a ‘safe zone’ for cooperation challenges the prevailing narrative of inevitable AI competition between superpowers.


Impact

This comment introduced a different geopolitical perspective that complicated the discussion in important ways. While it may have been met with some skepticism, it highlighted the possibility that some geopolitical tensions around AI might be based on misunderstandings rather than fundamental conflicts of interest. It reinforced the theme that cooperation is both necessary and potentially achievable, even amid broader geopolitical tensions.


Overall assessment

These key comments fundamentally transformed what could have been a routine discussion about implementation challenges into a sophisticated exploration of paradigm shifts, immediate technical realities, and innovative solutions. Seaford’s paradigm challenge reframed the entire premise, Song’s technical insights grounded abstract governance in concrete cybersecurity realities, Papandreou’s fourth-branch proposal showed how AI could help solve its own governance challenges, and the revelation of industry-regulator alignment opened new collaborative possibilities. Together, these comments elevated the discussion from identifying problems to reimagining approaches, demonstrating that overcoming the most significant obstacles to AI governance may require fundamental shifts in how we think about governance itself, rather than simply better implementation of existing approaches.


Follow-up questions

How do we do things on a timeline that is relevant to the pace of change?

Speaker

Robert Trager


Explanation

This addresses the critical pacing problem where technology moves very fast while governance moves much slower, which was identified as one of the greatest challenges for effective AI governance


What do we mean by evidence and how do we collect evidence for science and evidence-based AI policy?

Speaker

Dawn Song


Explanation

This is fundamental to implementing the proposed approach of grounding AI policy conversations in science and evidence, but the specifics of what constitutes evidence and collection methods remain unclear


How can we develop new technologies for quantitatively safe AI with stronger provable guarantees?

Speaker

Dawn Song


Explanation

This addresses the critical gap in current AI deployment, where there are no provable guarantees that AI systems will be trustworthy, safe, and secure


How can we shift the balance to have AI help defenders more than attackers in cybersecurity?

Speaker

Dawn Song


Explanation

Current analysis shows AI will help attackers more in the near future, so research is needed to reverse this trend and develop secure-by-design approaches


How can we create a fourth branch of government – a deliberative branch using AI for participative democracy?

Speaker

George Papandreou


Explanation

This explores how to merge technology with democracy to enable real citizen participation in deliberating laws and policies, moving beyond current polarizing platforms


How can we collect and synthesize best practices from companies and countries into global standards?

Speaker

Lan Xue


Explanation

While there’s tremendous potential to internationalize best practices, the mechanisms for collection, synthesis and standardization need to be developed


How can we establish clear chains of responsibility in the complex AI industry vertical?

Speaker

Artemis Seaford


Explanation

The AI industry involves multiple layers from chip manufacturers to end customers, making it complicated to allocate responsibility for downstream problems


How can we create governance bodies that work iteratively with industry to address uncertain AI risks without hampering innovation?

Speaker

Artemis Seaford


Explanation

For problems with high uncertainty like advanced AI risks, new governance structures are needed that can adapt over time while balancing innovation and safety


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.