WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy
26 Jun 2025 10:30h - 11:45h
Session at a glance
Summary
This discussion focused on the launch of a roadmap for AI policy research developed by the Global AI Policy Research Network (GlobalPOL), a collaborative initiative between the AI Policy Lab at Umeå University and Mila Quebec AI Institute. The session aimed to bridge the gap between AI research and policy implementation to ensure evidence-based governance that serves societal needs rather than purely economic interests.
Virginia Dignum emphasized that AI is not an inevitable force like weather, but rather a socio-technical system shaped by human choices and values. She argued that current AI development is dominated by monopolistic corporate interests and reinforces existing inequalities, particularly affecting Global South countries and marginalized communities. The roadmap proposes core principles including human and planetary welfare, accountability, inclusivity, and ethical governance, with research priorities focusing on transboundary governance, measuring AI benefits and risks, and sector-specific policy needs.
Multiple speakers addressed the tension between innovation and regulation, with Alex Moltzau from the European Commission’s AI Office explaining how the AI Act incorporates scientific panels and regulatory sandboxes to create evidence-based governance. Industry representative Eltjo Poort noted that clear regulatory guidance actually accelerates innovation by reducing uncertainty for businesses operating across multiple jurisdictions.
The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing that different regions have varying priorities and cultural contexts. Participants emphasized the need for contextualized, evidence-based approaches to AI policy that involve multi-stakeholder collaboration including academia, government, and industry. The session concluded with calls for continued research to support democratic decision-making about AI deployment and to resist pushback against evidence-based regulation.
Keypoints
## Major Discussion Points:
– **AI Policy Research Roadmap and Global Network Launch**: The session introduced a comprehensive roadmap for AI policy research developed by the Global AI Policy Research Network (GlobalPOL), emphasizing the need for evidence-based, multidisciplinary approaches to AI governance that bridge the gap between research and practice.
– **Balancing Global Cooperation with Regional Diversity**: Extensive discussion on how to achieve policy interoperability across different jurisdictions while respecting cultural contexts and avoiding regulatory capture, with speakers emphasizing that “one-size-fits-all” global governance is neither realistic nor appropriate.
– **Innovation vs. Regulation Debate**: Multiple speakers challenged the false dichotomy between innovation and regulation, arguing that well-designed regulation actually accelerates innovation by providing clear guardrails and reducing uncertainty for organizations and businesses.
– **Practical Implementation Mechanisms**: Discussion of concrete tools for integrating research into policy, including the EU’s scientific panel under the AI Act, regulatory sandboxes, codes of practice, and fellowship programs to build capacity and facilitate knowledge exchange.
– **Sectoral Applications and Contextual Challenges**: Specific focus on healthcare as a case study, along with discussions of military AI applications, public sector privacy concerns, and the unique position of IT consultants and systems integrators in the AI governance landscape.
## Overall Purpose:
The discussion aimed to introduce and gather feedback on an AI policy research roadmap while exploring how to effectively bridge the gap between academic research and practical AI governance. The session sought to build a community of practice around evidence-based AI policy development and address key challenges in creating inclusive, sustainable, and regionally-sensitive AI governance frameworks.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, characterized by academic rigor combined with practical urgency. Speakers demonstrated mutual respect and built upon each other’s points rather than engaging in adversarial debate. The tone was notably solution-oriented, with participants sharing concrete examples and actionable recommendations. There was an underlying sense of urgency about the need for immediate action in AI governance, balanced with recognition of the complexity of the challenges involved. The hybrid format (in-person and online) maintained engagement across both audiences, with moderators successfully facilitating inclusive participation.
Speakers
**Speakers from the provided list:**
– **Tatjana Titareva** – Staff scientist at AI Policy Lab at Umeå University, session moderator
– **Isadora Hellegren** – Senior Projects Manager for AI Policy Research at Mila Quebec AI Institute, co-moderator
– **Virginia Dignum** – Professor of Responsible AI at Umeå University and Director of the AI Policy Lab
– **Alex Moltzau** – Seconded national expert in the European AI Office within the European Commission’s DG Connect, background in social data science
– **Eltjo Poort** – Vice President Consulting at CGI in the Netherlands, IT consultancy and systems integration
– **Jason Tucker** – Associate Professor at AI Policy Lab at Umeå University and researcher at the Institute for Futures Studies, works in AI and global health
– **Joanna Bryson** – (Role/title not specified in transcript, participated online)
– **Anne Flanagan** – Void Strategy Group, former EU policymaker, based in San Francisco
– **Audience** – Various audience members including:
– Petter Eriksson – Staff scientist at AI Policy Lab
– Mattias Brändström – Researcher in the AI policy lab
**Additional speakers:**
– **Knut** – From the Norwegian tax administration (participated online, question read by moderator)
Full session report
# AI Policy Research Roadmap Launch: Bridging Research and Practice for Responsible AI Governance
## Executive Summary
This discussion centred on the launch of a roadmap for AI policy research, developed through the collaborative efforts of the Global AI Policy Research Network (GlobalPOL), a joint initiative between the AI Policy Lab at Umeå University and the Mila Quebec AI Institute. The session brought together academics, policymakers, and industry representatives to address the critical gap between AI research and policy implementation, emphasising the need for evidence-based governance frameworks.
The hybrid format facilitated participation from both in-person and online attendees, with speakers contributing from various locations including Bangkok, where Virginia Dignum was attending the UNESCO Forum for the Ethics of AI.
## Introduction and Context Setting
The session was moderated by Tatjana Titareva, Staff Scientist at the AI Policy Lab at Umeå University, alongside co-moderator Isadora Hellegren, Senior Projects Manager for AI Policy Research at Mila Quebec AI Institute. Titareva later noted the absence of a third speaker, Neema Lugangira, member of parliament in Tanzania, who “sends her regrets as she’s unable to join us today.”
Hellegren introduced the AI Policy Research Roadmap, emphasising that it represents a collective effort to inform global approaches to AI governance through best practices and international collaboration. She posed a question to online participants: “What is the most important thing to consider in AI policy research?”
## The AI Policy Research Roadmap: Core Framework
Virginia Dignum, Professor of Responsible AI at Umeå University and Director of the AI Policy Lab, presented the roadmap’s foundational framework with a powerful reframing: “AI doesn’t happen to us. AI is not weather. AI is developed by us, by organisations, by people, and the way AI looks, what we’re doing with AI, is ultimately dependent on the choices that we make.”
This positioning of AI as a socio-technical system requiring deliberate human choices established the philosophical foundation for the roadmap’s approach.
### Core Principles and Research Priorities
The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclusivity, diversity and capacity building, ethical research practice, ethical governance and equitable economic growth.”
Dignum outlined the roadmap’s research priorities: transboundary governance; means and tools to define and measure the benefits of AI, and equally to define and measure its challenges and risks; foresight and proactive regulation; codes of conduct; research grounded in genuine collaboration and participation rather than a box ticked on a checklist; and policy research aligned with the needs and characteristics of different sectors.
## Current Challenges in AI Governance
Dignum highlighted critical challenges, referencing a Mozilla Foundation image showing “how AI sees the world” from 2022, noting that “if anything, today, this image is even more skewed than what is shown in the slide.” She argued that current AI deployment reinforces existing inequalities and marginalises non-Western worldviews.
The gap between AI development speed and understanding of its impacts continues to widen, with policy responses remaining fragmented and dominated by short-term interests rather than comprehensive approaches considering long-term societal impacts.
## The Innovation Versus Regulation Debate
Multiple speakers challenged the perceived tension between innovation and regulation. Alex Moltzau, working “as a seconded national expert in the European AI Office within the European Commission’s DG Connect” (speaking in a personal capacity, “not on behalf of the European Commission”), argued that “the binary opposition between innovation and regulation is false – well-applied regulation creates better innovation for citizens and communities.”
Eltjo Poort, Vice President Consulting at CGI in the Netherlands, supported this view: “Regulation does not hamper innovation but actually speeds it up by providing clear guidance and reducing uncertainty for organisations.” He explained that clear regulatory guardrails help organisations innovate faster by eliminating compliance concerns.
## Global Cooperation Versus Regional Diversity
Joanna Bryson, participating online, provided a critical perspective on global harmonisation: “I also think that it’s a mistake to have a single governance structure that would probably get captured. That’s the problem of regulatory capture and market concentration we’re seeing right now. And a lot of the people pushing very hard for uniformity are coming from the country where a lot of the concentrated power is right now.”
Anne Flanagan, from Void Strategy Group and former EU policymaker, supported this perspective whilst offering a practical framework: “We need to think about interoperability rather than harmonisation,” emphasising that different regions can maintain distinct approaches whilst ensuring policies can work together effectively.
## Evidence-Based Policymaking: Mechanisms and Challenges
Moltzau detailed how the EU AI Act incorporates scientific input: “We have a scientific panel that is established within the AI Act, which provides a direct mechanism for the research community to engage in the regulatory framework.” He also described regulatory sandboxes as mechanisms for testing AI systems in controlled environments.
However, Flanagan highlighted a fundamental challenge: “The lack of evidence base makes it difficult to legislate technologies that haven’t fully materialised yet, requiring sandbox environments for testing.”
Moltzau referenced the “Dutch welfare scandal” as an example of AI governance failures, emphasising the importance of learning from such cases.
## Sectoral Applications: Healthcare Insights
Jason Tucker, Associate Professor at the AI Policy Lab, provided insights into healthcare AI, arguing that “AI advances in healthcare exist only in some areas and for some people, with benefits not equally distributed globally.”
He cautioned against techno-solutionist approaches: “If we throw resources at AI, we can fix the healthcare system. So we’re diverting resources away from chronically underfunded public healthcare systems into AI in the hope that a magic pill will be created to fix the healthcare system.”
However, Tucker noted that healthcare AI governance can benefit from historical precedent, pointing to how regulation has enabled medical innovation through international treaties regulating medicines.
## Industry Perspectives: Systems Integration Challenges
Poort highlighted a gap in current policy frameworks: “The role of consultants and systems integrators falls a little bit into a gap in the policy so far. For example, looking at the AI Act, you see that there’s a lot of attention for the providers and for deployers. And we tend to be in the middle of that.”
He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overly detailed technical specifications that become unmaintainable.
## The Question Zero Approach
A significant concept emerged around the fundamental evaluation of AI deployment. Dignum emphasised: “Question zero must be asked: whether AI is the best option, using AI because we should rather than because we can.” This approach challenges assumptions about AI deployment and requires explicit justification for why AI represents the best available solution.
## Military and Dual-Use Applications
Petter Eriksson, Staff Scientist at the AI Policy Lab, addressed military applications, referencing Dario Guarascio’s talk at the AI Labour and Society workshop. He suggested taking “inspiration from the work of academics back in the 50s and 60s on nuclear proliferation” and urged researchers to “consider areas where it is not appropriate to apply AI technologies, gather a global perspective on that and make a global push on limiting those very real potential harms of AI technology.”
## Capacity Building and Implementation
The discussion highlighted capacity building as critical for effective AI governance. Moltzau emphasised that “competence building in public sector organisations and collaboration with research communities is essential for appropriate AI implementation.”
Questions from online participants, including Knut from the Norwegian tax administration, highlighted specific challenges facing public sector organisations in implementing AI whilst protecting privacy.
## Addressing Future Risks
Poort raised concerns about potential political pushback: “Strong pushback against AI regulation may affect AI policy research funding by labelling it as activist research.” Hellegren emphasised that “building resilient institutions for academic integrity is crucial when political winds change direction.”
## Next Steps and Network Development
The discussion concluded with plans for the Global AI Policy Research Network, including fellowship programmes, student and staff exchanges, and an annual AI Policy Summit scheduled for mid-November in the Netherlands.
Key implementation mechanisms include developing AI policy briefs, capacity building initiatives, and continued integration of the research community into policy processes like EU AI Act implementation.
## Conclusion
The discussion demonstrated broad agreement on fundamental principles: that regulation can enable innovation, that evidence-based research is essential for effective policy, and that multidisciplinary collaboration is necessary for addressing AI governance challenges.
As Dignum concluded: “Evidence-based scientific research is more important than ever to counter political whims and private sector dominance in AI decisions.” The AI Policy Research Roadmap and Global AI Policy Research Network represent steps towards ensuring that AI governance is informed by rigorous research and commitment to human welfare.
The emphasis on “question zero” – whether AI is the appropriate solution to a given problem – provides a framework for ensuring that AI deployment serves genuine human needs rather than simply advancing technological capabilities. The path forward requires sustained commitment to building bridges between research and practice whilst maintaining focus on how AI can serve human flourishing.
Session transcript
Tatjana Titareva: Good morning, we are extremely happy to see all of you, both in person and online. My name is Tatjana Titareva, I’m a staff scientist at the AI Policy Lab at Umeå University, and I’m going to moderate the session in person. I also have a co-moderator online. Isadora, would you like to introduce yourself?
Isadora Hellegren: Hi, everyone. Welcome to the session. My name is Isadora Hellegren. I am a Senior Projects Manager for AI Policy Research at Mila Quebec AI Institute.
Tatjana Titareva: Thank you so much. Today’s session focuses on the roadmap for AI policy research that we have developed within the community of international AI policy researchers. You can see the QR codes both to the roadmap and to the community that we are going to launch soon, and we would like to achieve the following goals for today’s session. Number one, to introduce and discuss the key concepts of the roadmap. Secondly, to discuss with you, both in person and online, how AI policy research can support global cooperation in AI governance while preserving regional diversity. And thirdly, what mechanisms can best support access to and effective integration of AI policy research into AI governance processes. Our session is structured in the following way. We will start with the background of the roadmap by Isadora, then we will have a keynote speech by Professor Virginia Dignum, who is currently based in Bangkok. Then we will go into the speaker interventions from different regional and sectoral perspectives. We will open the floor to the discussion, hopefully for around 15 to 20 minutes, and then Professor Virginia Dignum will close the session with her final remarks. Isadora, the floor is yours.
Isadora Hellegren: Thank you so much, Tatiana. It is really a true pleasure to be here with all of you today. Before kicking off, I will share a question with all of you online participants as well, and I hope you will each share one word on what is the most important thing to consider in AI policy research. I will be checking back in on this question a little bit later. But now, let me begin. We’re very glad to be having this conversation here, especially at the Internet Governance Forum. Many of the challenges and opportunities that we face in AI governance are, although sometimes treated as such, not new. We have much to learn from internet governance history, and we find ourselves at a moment right now in AI governance where we are coming out of a strong embrace of the many possibilities of AI, much like the early days of widespread and popular access to the internet, where much of the AI landscape and its ecosystems remain untested and unregulated. But we have also reached a point where we’ve been able to identify and document many of the risks, harms, and impacts of this widespread adoption. Tackling the multidimensional and complex challenges and opportunities of AI requires us to move to action, and to move forward in an informed way. So, for responsible AI governance to be able to respond to actual needs, as defined by those who experience them, we must bridge the gap between research and practice to ensure robust and evidence-based AI policy. This need led us to the inaugural AI Policy Research Summit in Stockholm last November, a joint initiative by the AI Policy Lab and Mila. The summit brought together a community eager to address this need for better synergies between research, policy and impact to realize responsible, equitable and sustainable AI for the benefit of all. Following the summit, we established the Global AI Policy Research Network, or GlobalPOL. I’m not sure we have the slides up and running; if we do not, we’ll have them up in just a moment, and you will be able to access the QR codes where you can see more about the GlobalPOL Network. A core objective of the GlobalPOL Network is to inform global approaches to AI governance by sharing best practices and fostering collaboration on developing AI policy. This includes advancing responsible AI policy research that meets the growing need for governance grounded in ethical, transparent and evidence-based practices to shape inclusive and trustworthy policies. The GlobalPOL Network is guided by the AI Policy Research Roadmap, central to our session here today. This roadmap was developed through collaborative discussions at the inaugural AI Policy Research Summit, and it provides guidance on how to ensure advancements in AI align with local ethical, legal and social priorities. And in a minute, it will be my sincere pleasure to hand over to an initiator of the inaugural AI Policy Summit and founding member of the GlobalPOL Network, Virginia Dignum. Virginia will open the session by highlighting the critical role of AI policy research in shaping inclusive, context-aware and globally relevant AI governance. Drawing on insights from the AI Policy Research Roadmap, she will outline key priorities for responsible AI, discuss how multi-stakeholder collaboration can strengthen governance frameworks while respecting regional diversity and global interdependence, and frame this session’s focus on practical pathways to integrate research into policy for ethical, effective and future-proof AI systems.
Welcome, Virginia Dignum, Professor of Responsible AI at Umeå University and Director of the AI Policy Lab.
Virginia Dignum: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that in the 15 minutes that I have, I will be able to meet the task. But let me start by sharing my slides. Just a second. Always changing from one to the other. Okay. So, the Roadmap for AI Policy Research. Thank you, Isadora, for the short introduction that you made about how we started and where we are now, and also about the community, the network, that we are launching today. I’m trying to give a little bit more of the background on why we started this, how we see the roadmap being used, and what we can use this roadmap for. Why? First, in order to understand why we have this roadmap for AI policy research, it’s of course important to understand what we mean by AI policy research and why we need it. It’s no surprise to any of you that AI is shaping societies in a profound and very impactful way, and it’s affecting us positively and negatively in many different ways: in our rights, our decisions, our behavior, our agency. But it’s also shaping and modifying global power dynamics, and that is ever more visible in the last few months as geopolitics change. Despite this growing awareness of the change in society, policy responses are very often fragmented, reactive, and very much dominated by short-term interests. There is no continuity and no global coherence in the policy response to the impact of AI. The gap between the development of AI and our understanding of its impact is also widening. We are able to change the technology and the systems that we are developing more quickly than our understanding of what exactly these systems are doing, and therefore we need scientific research, mostly multidisciplinary research, or, I would say further, we need to go beyond the disciplines, because AI is not just a technology. It’s a socio-technical system, a system of systems, and one discipline alone is not sufficient to address it. So we really need to look at how we can go beyond disciplines and create a new field to address all this complexity. This is not just a technical challenge, it’s really a societal imperative, as we look at the impact of AI in society. And if we look at that, it’s always powerful to use this image by the Mozilla Foundation showing how AI sees the world. This is an image from 2022. If anything, today, this image is even more skewed than what is shown in the slide. And of course, even within those blue areas of the world, the skewedness of access and participation in AI is different for different communities, for different groups, for different demographics. So we do need to understand the power, the capability, of AI to shape and to affect our abilities of decision-making, our own behavior, and also the power structures, like I just said. The current deployment tends to reinforce existing inequalities, and at the same time it keeps marginalizing non-Western worldviews, indigenous knowledge and other types of knowledge. So responsible AI needs to go much beyond technical fixes. It requires a much broader understanding, and the instruments for inclusive governance, for contextual understanding, and for the accountability of all of us. So, the context in which we are now: at one end, we see the area of regulations and policies, which are many and growing.
At the other end, we are seeing, again and again, a pushback against regulation, with the fallacious and completely false idea that innovation is hampered by regulation, which has been disproven again and again. But we still see it: at one end, all these regulations and policies. I’m at this moment in Bangkok at the UNESCO Forum for the Ethics of AI, and it’s one of the things that we discuss here. And at the other end, we see the pushback on the regulations and policies, because the corporate power in developing AI remains strong. The monopolies that are controlling and determining what AI is, how we are using AI, and what AI can or cannot do, are very much outside the control of any democratic process. The impact on state sovereignty is huge, and, like I said, it is much more evident now, given the changes in geopolitics, than before. This is affecting smaller countries, and countries in the Global South, much more than others. And all these developments are led much more by economic and capitalistic interests than by their impact and their ability to serve communities. We are at the IGF today, so we also have to look at the way that digital infrastructure is shaping global power and access. Infrastructure decisions have very deep implications for access to AI, for surveillance, for digital rights, but also for human rights in general. And even here we see fragmented governance, which is weakening accountability, and therefore the need for inclusive, transparent and equitable participation is, again, very important. And we are seeing more and more of these sovereignty debates, showing how different the impact is in the North and in the South. So, regulation of AI is one part of it, but we are also looking at how AI is enforcing regulation, with AI systems enforcing existing regulatory systems, and, at the other end, we are seeing AI inform the way that regulation is put forward. Looking at this cycle, we need to address the research, the development, and the understanding of these complex feedback loops from a multidisciplinary, coherent and comprehensive research perspective. Because AI doesn’t happen to us. The current narrative is often that AI is something like the weather: we have no idea how to control it, and the only thing we can do is take an umbrella if it’s going to rain, that is, address the effects of the weather. But AI is not weather. AI is developed by us, by organizations, by people, and the way AI looks, what we’re doing with AI, is ultimately dependent on the choices that we make. Who is making these choices? Which values are considered in these choices? And how are we prioritizing these values? If we are not part of this conversation, someone else is taking the decisions for us. It is also increasingly important to understand, and to provide both the research and the tools, to ask question zero: the question of whether AI is the best option. It’s not about using AI because we can; we need more and more to be able to understand how to use AI because we should, and also when not to use AI, because the impact of using AI might be bigger than that of not using it. All these solutions are much more about the humanities and the social sciences than just about technological science.
And please stop me when the time is over, because somehow I cannot see my clock on my screen. I think I already talked about innovation versus regulation, so let me move on and say a little bit more about the roadmap for responsible AI policy research. What do we want? We recognize the risks and the responsibilities of AI, but this is just the beginning. We believe, and are seeing, that coordinated scientific research, across and beyond disciplines, participatory, and integrating not only academic research but different types of knowledge, from indigenous knowledge to the contextual knowledge of affected populations, needs to guide policy in a way that is really grounded in ethics and sustainability. We need the tools, the opportunities and the methods to identify the gaps, to prioritize global inclusivity, and to build the mechanisms for responsible AI development. For too long, we in the field of responsible AI have been talking about principles and guidelines. It’s now time to really go on and deploy AI responsibly. That is what we want to achieve with the AI policy research roadmap. I think the slides and the QR code have been shared already. We need this because change is political, not technical. Technology alone is not going to create the change that we need, the political change and the social change in the way that we are addressing the impact of AI. Digitalization raised many questions which are still unanswered, but we cannot wait to answer the questions about digitalization before we go and address the questions of the next wave, the AI wave. And these feedback loops, again and again, are the core of why AI policy is especially important. The core principles of our roadmap are the following, and I will quickly go through them because we can discuss them later in the Q&A: human and planetary welfare, accountability and transparency, inclusivity, diversity and capacity building, ethical research practice, ethical governance and equitable economic growth. Those are the principles that we believe should guide and lead research on AI policy and the application and deployment of AI policies. Our research priorities are around transboundary governance; the means and tools to define and measure the benefits of AI and, at the same time, to define and measure the challenges and the risks of AI; foresight and proactive regulation; looking at codes of conduct; and looking at how we can embrace and do research based on collaboration and participation, and not just make that a box ticked on a checklist. We also need to look at the different sectors, and at how policies, and research on policies, need to be aligned with the needs and the characteristics of different sectors. The guiding actions, which Isadora also talked about: we want to establish this community of practice, to which we invite and welcome all of you. We are working on visiting AI fellowships at the different groups involved in the community; Mila and we at the AI Policy Lab already have fellowship programs in place. And we have the annual AI Policy Summit. Save the date: it will be in mid-November in the Netherlands.
There is the call for action, which we are sharing with you today, but we also want to work together on AI policy briefs, and to support student exchange, staff exchange, and other ways to build capacity and AI literacy. So the time to act is now. AI is shaping our collective future, and we need to act today, not only with disparate policies, but with comprehensive and scientifically grounded research on the policies and the implementations that we are making. Thank you so much.
Tatjana Titareva: Thank you so much, Virginia. We really appreciate the several very important points you raised, and we are looking forward to the discussion after the intervention part, reflecting on your presentation. Now we will move on to several interventions, and we will start with the EU perspective from Alex Moltzau. Alex, the floor is yours.
Alex Moltzau: Yes, thank you so much. My name is Alex Moltzau, and I work as a seconded national expert in the European AI Office within the European Commission’s DG Connect. Today I’m not speaking on behalf of the European Commission or representing any kind of official views or perspectives thereof. So first, I wanted to say that I attended that workshop in a personal capacity as well, and I found it really wonderful to see everyone gathered together to think about how to collaborate as different research actors, how to engage with policy, and how to build this bridge. My background is in social data science, so a data science background, but with a social science aspect, as well as artificial intelligence and public services. However, I am now working directly on shaping policymaking in the European area and in the internal market. And as someone who is working in the European Commission as a policymaker, it’s really invigorating to come and participate in the research community. For five years prior to joining the AI Office, I worked directly with the research community of Norway nationally on AI, machine learning, and robotics. So maybe I’m biased, but maybe that’s a positive bias, in the sense that if we want good policymaking, it’s really crucial that we have an evidence basis upon which we act, because in many ways we are running after all these targets and we want to hit all these milestones. I’m working now directly with the AI Act and also writing an implementing act for AI regulatory sandboxes. So, in what way can we work to create good feedback loops? How can we think of ways of involving the scientific community, involving the research community? I’m really happy to say that this is a core part of the AI Act itself and of a lot of the processes surrounding this regulatory work. And I have two points in that regard, the first one being the scientific panel. Ingrained within the AI Act itself is a governance mechanism that establishes a scientific panel. The implementing act for this has already been accepted, and the call has recently been published, with the deadline this autumn. So we are actively recruiting experts in GPAI systems, AI impacts and related fields; 80% of course will come from the EU or EFTA, but it is also open to international experts to join this scientific panel. And it’s not just something nice to look at; it actually has a governance function. The scientific panel can raise qualified alerts about certain systems, such as general-purpose AI models that do not immediately fit the explicit requirements. So I think this is quite an important function, but also an important way that the scientific community can directly engage in the regulatory framework of the AI Act. And the second way, I think, is the process of shaping the code of practice.
So this is the code of practice for the general-purpose AI models, and you have about 1,000 stakeholders, many of them researchers, and most of the chairs have research backgrounds. It’s really incredible to see the reflections in that community. And I think we also have to think about our regulatory work now, especially because the field of AI often requires deep domain knowledge. How can we not ignore scientists? There are trends in policymaking that we want to avoid: we have to listen to scientists, we have to listen to researchers, we have to listen to reason and to facts. If we do not do that, then we will be worse off. We will not have as good a policymaking process, we will not have as good outcomes, or we won’t even know what the outcomes are. What outcomes do we want as citizens? What outcomes do we want as communities? And, talking back to Virginia’s point, AI is social. It’s part of our lives, it’s part of our societies, and we really have to work hard to make sure that it works in a way that doesn’t worsen the climate crisis or set back the sustainable development goals. One of the inspiring papers from Virginia and a lot of co-authors goes in this direction, mapping AI against the sustainable development goals. So there is so much to explore, but if we do not listen to researchers, if we do not listen to scientists, we have a big problem, people. So everyone here today, let’s make sure that we bake that into everything we do in our regulatory practice, everything we do to shape a better environment for responsible AI. Thank you.
Tatjana Titareva: Thank you so much for providing your perspective on this, Alex. It’s very valuable to hear from someone who is working very concretely with this. Now, I am sad to inform the audience that, due to circumstances, our third speaker, Neema Lugangira, member of parliament in Tanzania, sends her regrets as she’s unable to join us today. But I’m happy, on the other hand, to say that we will have the chance to hear from Eltjo Poort, Vice President Consulting at CGI in the Netherlands, who will be sharing with us his reflections on the impact of AI policy research on IT consultancy and systems integration. Welcome, Eltjo.
Eltjo Poort: Thank you, Isadora. Yeah, and thanks for giving me the opportunity to say a few things. There’s a little bit of happenstance here, because I ran into Virginia when we were working on a project together, and I hadn’t seen her for years, and then she asked me, oh, can you say a few words at this IGF forum? So I had a sparring session with some of the experts within CGI, and hence I can say something about our position in this. For those of you who do not know CGI, we are an IT services company, quite large, about 100,000 employees in 40 countries, and we do mostly IT consultancy and systems integration. We are also very involved in AI policymaking; we have our own responsible AI framework, and I’ll tell you why in a minute. We are also one of the first signatories of the AI Pact, and we know the people working there, and they know CGI. So the first thing I would like to say is that this role of consultants and systems integrators falls a little bit into a gap in the policy so far. For example, looking at the AI Act, you see that there’s a lot of attention for the providers and for deployers. And we tend to be in the middle of that. We integrate systems, we give consultancy, we give recommendations about deployment, but we are not a provider per se of AI systems. So that gives us some unique problems. One of the problems is, for example, the divergence of AI policy in different geographies. Many of our clients are multinationals, so I’m very happy to see your focus on transboundary AI policy, because that will definitely help us in our role as systems integrators. By the way, that is also one of the reasons that we have our own responsible AI framework: we have to keep it in line with all the various policies around the globe that our clients have to comply with. Another thing that I would like to say is about the level of detail of the policies. And here there are actually two points that seem to be paradoxical. On the one hand, it’s very important that the policy is at the level of principles. If policy becomes too detailed, then it becomes very hard to maintain, especially when it comes to technology. This is a technology that is evolving very quickly, and if the policy is based on things like the number of floating-point operations, as we see, for example, in the AI Act, then that is not really maintainable. It presents problems. So in that sense, it has to be at the level of principles. On the other hand, there also have to be very practical guardrails, because practical guardrails will indeed speed up innovation. But we need to stay away from the roadblocks that slow innovation. What we see is that organizations that have clear guidance speed up innovation in AI terms, because they don’t have to look over their shoulder all the time. They have no uncertainty about: are we breaking laws, are we at risk of non-compliance, et cetera. Maybe I’m being a bit of a futurist here, but the sooner this gets clarified and becomes very clear, the easier it is to innovate fast. So this is how we see that AI policy actually does not slow innovation, but speeds it up. Those are the most important points that I would like to make. And of course, in order to do that, the policy researchers need to collaborate closely with the private sector, including the systems integrators. As I said, we are already doing that. And yeah, this is also the first time that I heard of the network.
So I will definitely make sure that we join that.
Tatjana Titareva: Thank you so much, Eltjo. We really appreciate the industry’s perspective. Now we would like to move on to our last intervention, by Dr. Jason Tucker. He is an Associate Professor at the AI Policy Lab at Umeå University, as well as a researcher at the Institute for Futures Studies. Jason, the floor is yours.
Jason Tucker: Thank you. So I wear two hats: I’m an academic, but I also work in public policy, and this is why I’m particularly interested in these questions. I work on AI and global health, and we hear repeatedly how AI is going to revolutionize healthcare. Through the sessions over the last few days alone, I’ve heard six or seven accounts of healthcare being used as the primary example of the massive potential of AI to transform our societies for the better and to meet complex societal challenges. And it’s true, there are areas where AI is making advances in healthcare. But critically, only in some areas and for some people. And this isn’t necessarily always positive, and we shouldn’t take for granted that these advancements will continue, or that their benefits will be equally distributed globally. I think one of the factors limiting the potential of AI in healthcare, and also weakening global health itself, is the lack of international governance of AI. I won’t talk so much about the reasons for this; Virginia has touched upon these: the shifting global political economy of healthcare as a result of AI, the concentrations of power in the hands of non-traditional health actors, the new relationships between people and new healthcare providers, and the impact this is having on the norms and the roles of traditional healthcare actors, like clinicians. These relationships are quite opaque. There are also growing concerns about security and the environmental cost of relying on AI to fix healthcare systems, as well as increasing evidence that this actually worsens health inequalities in some situations, creates new health concerns and new problems, and can cause more harm than good. But one of the things that worries me the most is this idea that if we throw resources at AI, we can fix the healthcare system. So we’re diverting resources away from chronically underfunded public healthcare systems into AI in the hope that a magic pill will be created to fix the healthcare system. If it works, that’s fantastic, but this is incredibly risky. I think we can take these risks, but we have to be very strategic in how we do so. This fragmented approach to AI governance is bad for health, it’s bad for business, and it’s bad for innovation. We need a coordinated multi-stakeholder framework of governance to support the responsible use of AI in healthcare. And my argument here is based on historical precedent: the greatest innovations we’ve had in healthcare and global health have been based on regulation and the incorporation of new technologies. Some examples. Modern medicine is built on regulation. A hundred years ago, a group of international experts met in Brussels to sign a treaty which regulated potent medicines. Before then, when you went to the pharmacy, you had no idea what you were getting; anything could be sold, and anyone could sell it. Now, when we go to the pharmacy and buy paracetamol, we feel confident that it doesn’t really matter where we buy it from: it’s going to be safe and it’s going to be effective. Global health is also reliant on international cooperation. We only have to think about the COVID pandemic, where there was cooperation, and the benefits of that, and where there wasn’t cooperation, and the devastating consequences of that. We don’t really have these choices.
Health doesn’t exist within national borders. Similarly, healthcare is useful here because it’s rooted in scientific method and evidence-based research. When we go to the clinician and ask for a diagnosis, we expect it to be rooted in scientific research, and we can use this as a way to cut through the hype of AI and demand greater clarity in these systems. Healthcare is also multi-stakeholder. You can’t do public health interventions without including complex, diverse stakeholders; otherwise, the policies just don’t land well. We see this again and again. So what are the next steps? We have some great foundations from the World Health Organization, with their ethical guidelines on artificial intelligence and health, and we have the OECD and the EU AI Act as well, identifying health as a high-risk area with extra concerns around that. That’s fantastic. But what we’re really lacking is evidence-based research to inform this policy. Some of it exists but is siloed within academia, and some of it just doesn’t exist and is desperately needed. This is why I got involved with this roadmap on AI policy research: to try to figure out this disconnect, where we can better connect research to policymaking, and where there are gaps, to address them through scientific research. Thank you.
Tatjana Titareva: Thank you so much. After these interventions, we are now moving to the exciting discussion part, and we are opening the floor to both in-person and online participants. To ask questions, we would really appreciate it if you could use the mics on the sides and introduce yourself when you speak. For the discussion, we are going to start by addressing the three questions on the screen. And I believe Isadora would also like to take over.
Isadora Hellegren: Thank you so much, Tania. I also, of course, want to invite everyone who is participating online to be active contributors to this discussion. I have shared the discussion questions in the chat for you to have a look at and refer to. In the remaining 25 minutes or so, before we move into closing remarks, we want to go over the following three topics. How can AI policy research support global cooperation in AI governance while respecting regional diversity? What mechanisms can best support access to and effective integration of AI policy research into AI governance processes? We’ve heard some examples here from our previous speakers, but we would love to hear your thoughts on these, or on other potential mechanisms that you might want to bring up. And what are the most pressing areas in sustainability and digital rights protection where AI policy should be targeted? Where are we not looking, and where would we need to redirect or provide more attention? These are our questions, and we will open up the floor to all of you.
Alex Moltzau: Yes, so one thing that I didn’t mention is that we are also currently working on these AI regulatory sandboxes. I’m part of writing the implementing act for the AI regulatory sandboxes, a framework that we have been working on for a number of years now, which means operationalizing the way that these are being rolled out across the European region. Just prior to this, I was in a discussion about sandboxes globally, because, for me personally, we obviously don’t have all the solutions in Europe, and we should look at all these different locations across the world. A lot of people forget that digital payment didn’t originate so much in Europe as in Africa. So, in what ways can we create an evidence basis? For the sandboxes, this comes in the shape of exit reports, so that there is dissemination and communication of how the solutions of these SMEs and startups, in health or in other areas, actually work for citizens. Do they work well? Do they fulfil existing regulatory requirements, and how do we best solve the interplay of the different regulations that we have? Because we want innovation; it’s amazing. I often use cars as an example. I know it’s a bit silly, but I also like to use the child seat, because I have a daughter, and when I put the child seat in the car, I don’t think, oh, this is not so innovative. It’s almost comedic, right, to see people talk as if there is a binary opposition between innovation and regulation. Regulation well applied creates better innovation for citizens and for communities. And you were mentioning the FLOP count, I think, one of the speakers here today from CGI, the VP, Eltjo Poort. I think this is a really good example, because do we think that speed limits are a bad idea? It’s only a threshold, right? It only means that over a certain threshold, other requirements may apply. So we’re not trying to stop innovation, but if you train models of a certain size, and you are still working out the best way to do this, maybe there are additional requirements that you have to think about. We haven’t figured out everything, and that’s why we are now 97 people in the AI Office, and we want to recruit 140 by the end of the year. So these are just some points from our side.
Tatjana Titareva: Thank you so much, Alex. I would like to jump in here and open up for two comments or questions from our online participants. We have two hands raised. Joanna Bryson, would you like to go first?
Joanna Bryson: Hi, yeah, sure. Thanks very much, and sorry not to be in Oslo. I wanted to come specifically to your question about global cooperation. While I strongly agree that it’s of course nice to make things easier for the people who produce the AI, I also think that it’s a mistake to have a single governance structure that would probably get captured. That’s the problem of regulatory capture and market concentration we’re seeing right now. And a lot of the people pushing very hard for uniformity are coming from the country where a lot of the concentrated power is right now. So I think it’s important that we recognize that different countries have different priorities, capacities and risks, and it makes sense that we have at least some diversity in legislation. I think what the EU has demonstrated is that if you do harmonize your legislation, then you can have a bigger ask and people are still willing to come, just to get access to your government. Sorry, not to your government, to your economy, right? So basically, the real Brussels effect is the proportionality between how hard it is to do business where you are and what your GDP is: what are you paying back for the effort made to comply with the regulations? There are global efforts to create modules of a sort, so that transparency is similar in different places. But I think the things that have happened in the last few months really show us that we want resilience through diversity. So thank you.
Isadora Hellegren: Thank you so much, Joanna, for this comment, which resonates with much of what we’ve heard here earlier. Anne Flanagan.
Anne Flanagan: Hello, apologies that I’m not there in person today; I’m in transit at the moment, hence my picture on your screen. I’m Anne Flanagan of Void Strategy Group. I’m Irish, but based in San Francisco. I have been one of those EU policymakers, and I’ve also worked in one of the largest AI model providers. Coming from both of those perspectives, I want to really just double click on that first question, about how AI policy research can support global cooperation and AI governance while respecting regional diversity. I think we really need to zoom in on this idea of policy interoperability. We’re never going to have a global regime. We’re never going to have a single global governance structure. It’s not realistic, and it’s also not appropriate, because when we look at AI, and really when we’re looking at the harms and at protecting people, what harms mean can look different in different regions and different cultural contexts. It can be different even within the same country, from person to person. People are coming at this from different circumstances. When I was in the Irish government looking at digital policy, I used to work on telecoms, infrastructure, data policy, et cetera, and early AI policy. And one of the biggest challenges is that when you’re looking at something like this, where there is a step change and you’re affecting multiple sectors, the evidence is thin as to what the impact of these technologies actually is. By that, I mean that it’s really, really difficult to legislate or regulate something that hasn’t happened yet. We know as human beings, from other policy areas, that there may be dangers that are imminent, but it’s very challenging when there is a lack of an evidence base. That pulls you very quickly into ex ante rather than ex post regulation, which is a tricky space to be in. If you look at what the OECD guidelines say around better regulation, we really should have an evidence base for what happens. But where does that leave us? I think the role of policy researchers is so incredibly crucial here. Alex mentioned the sandbox initiatives earlier on. Having those environments where you can test, by trial and error, and find out what the harms are and how they actually play out, particularly, for example, for GPAI systems, is really something where the scientific and research community can help policymakers to make better decisions. The private sector, of course, has a role to play here as well, and encouraging the private sector to engage in partnerships with the policy research community is always a really healthy thing. They may be less willing to come forward to help governments directly, but it’s very encouraging to see the EU set up that scientific panel, and I think they probably will come forward in that respect. This really is a case that requires a multi-stakeholder effort to, one, surface different potential policy harms, two, test and unearth that evidence, and three, bring forward diverse perspectives around the potential harms of AI.
Isadora Hellegren: Much appreciated, and important comments there for reflection as well. I would like to turn it back to the room. We've had a couple of questions, and there are more comments in the chat, but I'm happy to come back to them, as I cannot see who might want to speak in the room. So over to you, Tanya.
Tatjana Titareva: We do not really have speakers at the mic, so we can continue with online. Oh, I’m sorry, Jason.
Jason Tucker: Thank you. I agree with the comments made about a global governance approach not being appropriate in this case, and about the role of complexity in pushing forward these agendas. But one of the other things we need to think about, especially in healthcare, is that public health authorities are huge players: they can drive markets, and they can demand standards that force the private sector and public-private partnerships to meet them. So I think we also need to be creative in how we align our values. This speaks a little to the conversations we had earlier about principles and their operationalization, and how we ensure that we can meet the Hippocratic Oath and provide good public healthcare while constantly regulating and maintaining this.
Tatjana Titareva: Isadora, would you like to turn to the online participants? Thank you. We do have a contribution here from Petter Eriksson.
Audience: Hello, thank you so much. I'm Petter Eriksson, a staff scientist at the AI Policy Lab, so some of you I know very well. I would like to bring up the question of the military and AI. I was at the AI Labour and Society workshop a couple of weeks back, where Dario Guarachio gave a very good speech about the military and civilian trajectories of AI. We went into how tight the integration has been between AI research and the military-industrial complex, in the US in particular but also globally. And I think that is maybe a place where we as researchers, economists and policymakers can really lead a globally harmonized effort, even if not all areas of AI may be appropriate for a globally harmonized regulatory space. We can take inspiration, for example, from the work of academics in the 1950s and 60s on nuclear proliferation: consider the areas where it is not appropriate to apply AI technologies, gather a global perspective on that, and make a global push to limit those very real potential harms of AI technology.
Isadora Hellegren: Thank you, Petter, for a very pertinent and timely intervention, and a question I think many have on their minds regarding military use of AI nowadays. We have another question here from Knut, so we will go to that one before returning to the remaining questions. Knut, are you with us in voice? Otherwise I can read your question from the chat. Okay, I will read it out loud. The question from Knut, of the Norwegian tax administration: how can scientific research contribute to policies that help the public sector better hit the mark in safeguarding privacy when using AI? Our experience today is that we swing from one extreme to the other, either neglecting privacy or being overly cautious, which hinders our ability to be sufficiently innovative in the use of AI. Do we have any responders to this question? Alex, would you like to address it?
Alex Moltzau: I want to address this with an anecdote, because I am Norwegian and feel partly responsible here. I am seconded as a national expert to the AI Office through the EEA agreement. But here is a story from a friend in Norway, who found herself out of a job. It was the pandemic and she was pregnant, and she was going to switch over from pregnancy pay back to unemployment pay. That didn't happen automatically, and because of some slowness in the system at the time, there was something like a three-month wait until she was back on. And you have a mortgage running and things happening, right? When I called them up, angry about this as a concerned person, their argument was that for privacy reasons they were not sharing data between the systems. I love privacy, but we need to be competent enough to understand when we are actually protecting people and when data sharing within an organization is appropriate. Of course we have federated learning and privacy-enhancing technologies, but we still have to be careful, right? Because there are cases that can go horribly wrong when we try to model citizens, as shown by the Dutch welfare scandal. People were placed in prisons and lost their rights to their kids. There are some serious potential consequences for citizens that are sometimes hard to predict. So I would say: raise the competence in the tax authority and work with research communities. There are many research centres established now that you can reach out to, also in Norway, that work on AI.
Tatjana Titareva: Thank you, Alex. Indeed, part of the roadmap is the need for capacity building initiatives as well. Raising the general awareness, whether it be within the public sector or the general public, on how to navigate these systems and use them well is indeed a crucial priority. So thank you for highlighting this, Alex. Isadora or Jason, would you like to add?
Jason Tucker: Just to jump in on that, I agree with Alex, but I also think we need to take a step back, because there is an assumption that AI is a solution to a broad range of societal challenges and complexities. In some cases it will be enormously beneficial, but we need to remember there is a cost, financially and in terms of energy, data and privacy, in how we do this. So we need to be very strategic and ask: what is the problem we are trying to address, and is AI the best solution? We do a lot of work on this in the AI Policy Lab, trying to discourage people from using AI where it would be wasteful or inefficient and where non-technical solutions are better or more sustainable.
Tatjana Titareva: Yes, we call it question zero: even if we could, should we in the first place? And here we are talking about the organizational level, where organizations are having a FOMO moment, rushing to adopt AI tools without understanding what kind of problems they are trying to solve with these tools. Isadora, as of now I don't see any questions in the room. Do we have any online?
Isadora Hellegren: We do, from Eltjo.
Eltjo Poort: Hi, yeah. This is not a question about either of these points, but it is relevant in a meta way. We see some very strong pushback against AI regulation in the United States right now. And this may eventually also affect AI policy research, because it may in the future be considered activist research, putting funding at risk. Do you see that risk, and do you see any ways to protect this very important research from it?
Virginia Dignam: Maybe I can take this one. Yes, thank you for the comment, Eltjo. It is a risk, and it is an issue that concerns many of us. The line between AI policy research and AI activism is becoming very, very narrow. And the pushback against regulation is not only in the United States; we see it in Europe as well. Just recently the Swedish Prime Minister has proposed to the European Union at least a stop or a moratorium on further development and implementation of the AI Act. So we see it all over the place. And it shows, I think, if anything, the importance of grounded, evidence-based scientific research around these topics. It cannot be just a whim of politicians, and it cannot be left to the private sector alone. More than ever, we need to work on the fundamental grounds on which we can measure, determine and understand the impact of AI. Several of my colleagues have talked about question zero, which is one of the main issues here. It is not just about whether we can use AI, but whether we should: using AI not simply because we can, but because we should. And understanding why we should use AI is not a technical question; it is a fundamental societal and political question. The way we ask the question shapes the answers we get. We really need to look at this from a multidisciplinary and interdisciplinary perspective, because it is not just for the technologists or the politicians to answer whether AI should be used or not. What we gain with it and what we lose with it are questions that require a deeply and fundamentally participatory approach.
Eltjo Poort: Thanks, Virginia.
Isadora Hellegren: Yes, indeed. The call to build resilient institutions for academic integrity when the wind blows in different directions is important here. Over to the room.
Tatjana Titareva: Yeah. Alex has an intervention.
Alex Moltzau: I just also wanted to speak to this question on the importance of evidence-based policymaking. It bears repeating: we should not take it for granted. Look around the world, at the politics and at the statements that are being made. We have to work really hard to bridge that gap. That is why this initiative is so timely and so important: we need people to work on this actively, and an AI policy roadmap has now been created, with fairly clear actions to follow up on. This is a global concern, and it is something we can all contribute to. There are actions we can take to improve this. So to the people listening in and the people in the room: think about this, and do something about it when you get back to wherever you are working. You could be working in a government, you could be working in a private company; this is something you can act on. It is in your hands, and it is something you really can take responsibility for. So I would encourage you not to take it for granted.
Isadora Hellegren: Yeah. So we do have one last question online. I think this might be our last question before we move into the closing remarks. Mattias Brändström.
Audience: Hello. I'm also a researcher in the AI Policy Lab, and I want to comment on the relationship between innovation and the pushback against regulation. I want to push for really going to the ground with highly contextualized problem descriptions, rather than engaging the problem from a too generalized standpoint. Because if you go down and talk about the actual problems that people in the private or public sector are dealing with, these are often framed from the perspective of lacking regulation, or lacking the safety net that regulation creates, as long as the problems are concrete enough. But when they become sweeping and generalized, we lose this connection. So I really think that building public support for regulation means bringing up the concrete problems.
Tatjana Titareva: Mattias, thank you so much for bringing us back to the ground and reminding us about the different stakeholders and their positionality. Isadora, I think we can move on; we have five minutes left. Thank you so much for moderating the online participation. And Virginia, would you like to take over for the concluding remarks?
Virginia Dignam: Sure, thank you for that. Thank you all for your participation, those who contributed but also those who have been listening. Thank you so much to everybody. It's a pity, again, that I cannot be there in person to discuss it further with you. But I have been given the task of trying to come up with some key takeaways from what we have discussed. It is clear that one of the issues on the minds of many of us is governance: this eternal discussion between policy and innovation, regulation and innovation. And that is exactly where, as I said just before, we need much more effort in evidence-based scientific research, because that is where we can really show the benefits of regulation and compare different ways of regulating. We tend to think of regulation as set in stone, whereas we see technology as dynamic and adaptable. We need to use the tools we have for technological innovation to support regulatory and organisational innovation as well, looking at regulation and at organisational and societal transformation from the perspective of understanding, comparing and evaluating the capabilities, weaknesses and powers of different types of regulation. That is where research is important. We also looked at the issues of interoperability and the danger of governance capture. The aim of research is not at all to come up with the one and only governance model, but exactly to understand, to work with, and to provide the means for interoperability between regulatory and governance models. Again, there is an issue for governance, and I think that across the different interactions we have all recognised the importance of participation and inclusivity: the inclusion of different groups, different demographics, different communities, but also the inclusion of many different disciplines. And if I look back at the words that Isadora asked us to share, those are typically not the topics or words we associate with technology, but fundamentally societal issues: inequality, equity, humanity, inclusion, environment, and so on. That is where we need to ground AI policy research going forward. Thank you.
Tatjana Titareva: Thank you so much. We really appreciate our in-person participants. And Isadora, would you like to finalize?
Isadora Hellegren: I will also extend our sincere thanks to all online participants for your active participation in this hybrid event, to everyone here in person, and to our esteemed speakers for your very pertinent interventions, contributing your diverse perspectives on these crucial questions at this pivotal point in time for AI governance, as we move forward and need to take action. It has been a very enriching conversation, and we hope you will continue following the updates from the Global AI Policy Research Network. Please sign and endorse the roadmap if it speaks to you; we hope to see further adoption and adaptation of these principles across networks as we move forward. And of course, thanks to the IGF for allowing us to host this session. We hope to engage more with all of you through various channels as we move forward in this active pursuit to bridge policy, practice and research. Thank you, everyone. And thank you, Tanya.
Tatjana Titareva: Thanks. And if we are allowed, we will leave our QR codes on the screen. Have a great day and lunch.
Isadora Hellegren
Speech speed
138 words per minute
Speech length
1291 words
Speech time
560 seconds
Building resilient institutions for academic integrity is crucial when political winds change direction – Academic Integrity
Explanation
There is a need to build resilient institutions for academic integrity when the wind blows in different directions. This refers to protecting academic research and maintaining institutional integrity despite changing political pressures and pushback against regulation.
Major discussion point
Risks and Future Challenges
Topics
Legal and regulatory | Development
Virginia Dignam
Speech speed
130 words per minute
Speech length
2586 words
Speech time
1187 seconds
Core principles include human and planetary welfare, accountability, transparency, inclusivity, diversity, capacity building, ethical research practice, and equitable economic growth – Core Principles
Explanation
The roadmap is guided by a set of core principles that should lead the research and application of AI policy. These principles encompass both human welfare and environmental concerns, ensuring that AI development serves broader societal needs rather than narrow interests.
Evidence
Those are the principles that we believe should be guiding and leading research and application of AI policy, research on AI policy and application and deployment of AI policies
Major discussion point
AI Policy Research Roadmap and Global Network
Topics
Human rights | Development | Legal and regulatory
Research priorities focus on transboundary governance, measuring AI benefits and risks, foresight and proactive regulation, and sector-specific policy alignment – Research Priorities
Explanation
The roadmap identifies key research priorities including transboundary governance, developing means and tools to define and measure both benefits and challenges of AI, foresight and proactive regulation, and looking at how policies need to be aligned with the needs and characteristics of different sectors.
Evidence
We also need to look at the different sectors and how the policies and research in policies need to be done to be aligned with the needs and the characteristics of different sectors
Major discussion point
AI Policy Research Roadmap and Global Network
Topics
Legal and regulatory | Economic
AI is fundamentally a socio-technical system requiring multidisciplinary research beyond single disciplines to address societal complexity – Multidisciplinary Approach
Explanation
AI is not just a technology but a social technical system, a system of systems, and one discipline alone is not sufficient to address its complexity. There is a need to go beyond disciplines and create a new field to address all this complexity as a societal imperative.
Evidence
AI is not just a technology, it’s a social technical system, it’s a system of systems and one discipline alone is not sufficient to address it
Major discussion point
AI Governance and Regulation Challenges
Topics
Sociocultural | Legal and regulatory
Agreed with
– Anne Flanagan
Agreed on
AI requires multidisciplinary and participatory approaches
Current AI deployment reinforces existing inequalities and marginalizes non-Western worldviews and indigenous knowledge – Inequality Reinforcement
Explanation
The current deployment of AI tends to reinforce existing inequalities and keeps marginalizing non-Western worldviews and indigenous knowledge. This is illustrated by how AI systems see the world in a skewed manner, with unequal access and participation even within developed regions.
Evidence
Mozilla Internet Foundation image from 2022 showing how AI sees the world, which is even more skewed today than what is showing in the slide
Major discussion point
AI Governance and Regulation Challenges
Topics
Human rights | Development | Sociocultural
Policy responses are fragmented, reactive, and dominated by short-term interests rather than comprehensive approaches – Fragmented Responses
Explanation
Despite growing awareness of AI’s impact on society, policy responses are very often fragmented, reactive and dominated by short-term interests. There is no continuity or globality in addressing policy responses to the impact of AI.
Evidence
There is no way or no continuity or no globality on addressing policy response to the impact of AI
Major discussion point
AI Governance and Regulation Challenges
Topics
Legal and regulatory
Gap between AI development speed and understanding of its impacts is widening, requiring scientific research to bridge this divide – Development-Understanding Gap
Explanation
The gap between the development of AI and our understanding of its impact is widening. We are able to change the technology and systems we are developing quicker than our understanding of what exactly these systems are doing, therefore requiring scientific research and multidisciplinary research.
Evidence
We are seemingly able to change quicker the technology and the systems that we are developing than our understanding of what exactly these systems are doing
Major discussion point
AI Governance and Regulation Challenges
Topics
Legal and regulatory | Development
Question zero must be asked: whether AI is the best option, using AI because we should rather than because we can – Question Zero
Explanation
It’s increasingly important to understand and provide both the research and tools to ask the question zero – whether AI is the best option. It’s not about using AI because we can, but we need to understand how to use AI because we should, and also when not to use AI because the impact might be bigger than not using AI.
Evidence
It’s not about using AI because we can, but we need more and more to be able to understand how to use AI because we should
Major discussion point
Fundamental Questions and Approaches
Topics
Legal and regulatory | Human rights
AI is not like weather – it’s developed by people and organizations, with choices about values and priorities that can be democratically influenced – Human Agency
Explanation
AI doesn’t happen to us like weather that we cannot control. AI is developed by organizations and people, and it’s ultimately dependent on the choices that we make. The key questions are who is making these choices, which values are considered, and how we prioritize these values.
Evidence
AI is not weather. AI is developed by us, is developed by organizations, by people, and it’s ultimately dependent the way AI looks, what we’re doing with AI is ultimately dependent from the choice that we make
Major discussion point
Fundamental Questions and Approaches
Topics
Human rights | Legal and regulatory
Evidence-based scientific research is more important than ever to counter political whims and private sector dominance in AI decisions – Scientific Research Importance
Explanation
With pushbacks against regulation in various countries, there is more need than ever for grounded evidence-based scientific research around AI topics. It cannot be just a whim of politicians or left to the private sector alone, but requires fundamental grounds to measure and understand AI’s impact.
Evidence
Just recently the Swedish Prime Minister has proposed to the European Union at least a stop or a moratorium on further development and implementation of the AI Act
Major discussion point
Risks and Future Challenges
Topics
Legal and regulatory | Human rights
Agreed with
– Alex Moltzau
– Jason Tucker
– Anne Flanagan
Agreed on
Evidence-based research is crucial for AI policy development
Alex Moltzau
Speech speed
158 words per minute
Speech length
1948 words
Speech time
736 seconds
Regulatory sandboxes provide framework for testing AI systems and creating evidence basis through exit reports and collaboration – Regulatory Sandboxes
Explanation
The EU is working on AI regulatory sandboxes as a framework that allows for testing and creating an evidence basis through exit reports. These sandboxes help determine how SMEs and startups work for citizens and whether they fulfill existing regulatory requirements.
Evidence
For the sandboxes as well, these are in the shape of like exit reports, you know? So that the dissemination and communication of like how do these SMEs and startups like in health or like in other areas actually work for citizens
Major discussion point
AI Governance and Regulation Challenges
Topics
Legal and regulatory | Economic
The binary opposition between innovation and regulation is false – well-applied regulation creates better innovation for citizens and communities – False Binary Opposition
Explanation
There is a false binary opposition between innovation and regulation. Using the example of child seats in cars, regulation well-applied creates better innovation for citizens and communities rather than hindering it.
Evidence
I use often use cars as an example. I know it’s a bit silly, but I also like to use the child seat, you know, because I have a daughter and it’s not like when I put in the child seat in the car, I don’t think like, oh, this is not so innovative
Major discussion point
Innovation vs Regulation Debate
Topics
Legal and regulatory | Economic
Agreed with
– Virginia Dignam
– Eltjo Poort
Agreed on
Regulation enables rather than hinders innovation
Scientific panel established within AI Act provides direct mechanism for research community to engage in regulatory framework – Scientific Panel Integration
Explanation
The AI Act establishes a scientific panel as a governance mechanism where 80% of experts come from EU or EFTA but is also open to international experts. This panel can raise qualified alerts about certain systems and provides a direct way for the scientific community to engage in the regulatory framework.
Evidence
The implementing act for this has already been accepted and the call has recently been published with the deadline this autumn
Major discussion point
Evidence-Based Policymaking and Research Integration
Topics
Legal and regulatory
Competence building in public sector organizations and collaboration with research communities is essential for appropriate AI implementation – Competence Building
Explanation
Public sector organizations need to raise competence and work with research communities to understand when data sharing is appropriate versus when it creates privacy risks. The example of Norwegian systems not sharing data between departments due to privacy concerns shows the need for better understanding.
Evidence
Norwegian tax administration example where pregnancy pay and unemployment pay systems didn’t share data, causing three-month delays for citizens with mortgages and expenses
Major discussion point
Evidence-Based Policymaking and Research Integration
Topics
Development | Legal and regulatory
Individuals in government and private companies need to take responsibility for bridging the research-policy gap – Individual Responsibility
Explanation
People working in government and private companies need to take active responsibility for bridging the research-policy gap. This is something that individuals can do something about when they return to their work, regardless of whether they work in government or private sector.
Evidence
People that are listening in on this and people in the room, like, I mean, think about this and like do something about it when you get back to wherever you are working
Major discussion point
Risks and Future Challenges
Topics
Development | Legal and regulatory
Agreed with
– Virginia Dignam
– Jason Tucker
– Anne Flanagan
Agreed on
Evidence-based research is crucial for AI policy development
Eltjo Poort
Speech speed
146 words per minute
Speech length
747 words
Speech time
305 seconds
Regulation does not hamper innovation but actually speeds it up by providing clear guidance and reducing uncertainty for organizations – Regulation Enables Innovation
Explanation
Organizations with clear guidance speed up innovation in AI because they don’t have to look over their shoulder all the time about breaking laws or compliance risks. The sooner regulations become clear, the easier it is to innovate fast, showing that AI policy speeds up rather than slows innovation.
Evidence
What we see is that organizations that have clear guidance and they they speed up innovation in AI terms because they don’t have to look over their shoulder all the time
Major discussion point
Innovation vs Regulation Debate
Topics
Economic | Legal and regulatory
Agreed with
– Virginia Dignam
– Alex Moltzau
Agreed on
Regulation enables rather than hinders innovation
Clear regulatory guardrails help organizations innovate faster by eliminating concerns about compliance and legal risks – Clear Guidance Benefits
Explanation
There need to be very practical guardrails because practical guardrails will speed up innovation by providing clear guidance. Organizations need to stay away from roadblocks that slow innovation while having certainty about compliance requirements.
Evidence
They have no uncertainty about, are we breaking laws? Are we at risk from non-compliance, et cetera, et cetera
Major discussion point
Innovation vs Regulation Debate
Topics
Economic | Legal and regulatory
Agreed with
– Virginia Dignam
– Alex Moltzau
Agreed on
Regulation enables rather than hinders innovation
Policy must balance principle-level guidance with practical guardrails while avoiding overly detailed technical specifications that become unmaintainable – Policy Balance
Explanation
Policy needs to be at a principle level because if it becomes too detailed, it becomes hard to maintain, especially with rapidly evolving technology. However, there also need to be practical guardrails, avoiding technical details like floating point operations that are not maintainable.
Evidence
If the policy is based on things like the number of floating point operations, like we meet, for example, in the AI Act, then that is not really maintainable
Major discussion point
Innovation vs Regulation Debate
Topics
Legal and regulatory
Systems integrators and consultants fall into policy gaps, being neither providers nor deployers but playing crucial intermediary roles – Policy Gap Identification
Explanation
The role of consultants and systems integrators falls into a gap in current policy frameworks like the AI Act, which focuses on providers and deployers. These intermediaries integrate systems and give consultancy but are not providers per se, creating unique problems especially with multinational clients.
Evidence
Looking at the AI act, you see that there’s a lot of attention for the providers and for deployers. And we tend to be in the middle of that
Major discussion point
Sector-Specific Applications and Challenges
Topics
Economic | Legal and regulatory
Transboundary AI policy cooperation is essential for multinational clients while respecting regional differences – Transboundary Cooperation
Explanation
The divergence of AI policy in different geographies creates problems for systems integrators whose clients are multinationals. This makes transboundary AI policy focus important, and is one reason why companies develop their own responsible AI frameworks to align with various global policies.
Evidence
Many of our clients are multinationals. So I’m very happy to look at your focus on transboundary AI policy because that will definitely help us in our role as systems integrators
Major discussion point
Global Cooperation vs Regional Diversity
Topics
Economic | Legal and regulatory
Strong pushback against AI regulation may affect AI policy research funding by labeling it as activist research – Research Funding Risk
Explanation
There is strong pushback against AI regulation in the United States that may eventually affect AI policy research by considering it activist research, thus putting funding at risk. This represents a concerning trend that could undermine important research.
Evidence
We see some very strong pushback against AI regulation in the United States right now
Major discussion point
Risks and Future Challenges
Topics
Legal and regulatory | Development
Joanna Bryson
Speech speed
154 words per minute
Speech length
269 words
Speech time
104 seconds
Single global governance structure risks regulatory capture and market concentration, making diversity in legislation important for resilience – Diversity for Resilience
Explanation
A single governance structure would probably get captured, which is the problem of regulatory capture and market concentration we’re seeing now. A lot of people pushing for uniformity come from countries with concentrated power, so diversity in legislation provides resilience.
Evidence
A lot of the people pushing very hard for uniformity are coming from the country where a lot of the, the concentrated power is right now
Major discussion point
Global Cooperation vs Regional Diversity
Topics
Legal and regulatory | Economic
Agreed with
– Anne Flanagan
– Virginia Dignam
Agreed on
Diversity in governance approaches is necessary over single global framework
Disagreed with
– Anne Flanagan
– Virginia Dignam
Disagreed on
Global governance structure vs. regulatory diversity
Different countries have different priorities, capacities, and risks, making some diversity in legislation sensible and necessary – Regional Differences
Explanation
All different countries have different priorities, capacities, and risks, making it sensible to have at least some diversity in legislation. The EU has demonstrated that harmonized legislation can create bigger asks while people are still willing to comply to access the economy.
Evidence
The real Brussels effect is the proportionality between how hard is it to do business where you are and what is your GDP
Major discussion point
Global Cooperation vs Regional Diversity
Topics
Legal and regulatory | Economic
Agreed with
– Anne Flanagan
– Virginia Dignam
Agreed on
Diversity in governance approaches is necessary over single global framework
Disagreed with
– Anne Flanagan
– Virginia Dignam
Disagreed on
Global governance structure vs. regulatory diversity
Anne Flanagan
Speech speed
183 words per minute
Speech length
539 words
Speech time
176 seconds
Policy interoperability is crucial since global regime is unrealistic and inappropriate given different cultural contexts and definitions of harm – Policy Interoperability
Explanation
We’re never going to have a global regime or single global governance structure because it’s not realistic or appropriate. When looking at AI harms and protecting people, what harms mean can look different in different regions and cultural contexts, even within the same country from person to person.
Evidence
What harms mean can look different in different regions, different cultural contexts. It can be different even within the same country from person to person
Major discussion point
Global Cooperation vs Regional Diversity
Topics
Legal and regulatory | Human rights
Agreed with
– Joanna Bryson
– Virginia Dignam
Agreed on
Diversity in governance approaches is necessary over single global framework
Disagreed with
– Joanna Bryson
– Virginia Dignam
Disagreed on
Global governance structure vs. regulatory diversity
Lack of evidence base makes it difficult to legislate technologies that haven’t fully materialized yet, requiring sandbox environments for testing – Evidence Challenges
Explanation
One of the biggest challenges in digital policy is that evidence is thin regarding the actual impact of these technologies. It’s difficult to legislate or regulate something that hasn’t happened yet, which pulls policymakers into ex ante rather than ex post regulation.
Evidence
It’s really, really difficult to legislate or regulate something that hasn’t happened yet, right? We know as human beings, we know from other policy areas that there may be dangers that are imminent, but it’s very, very challenging when there is a lack of evidence base
Major discussion point
Evidence-Based Policymaking and Research Integration
Topics
Legal and regulatory
Agreed with
– Virginia Dignam
– Alex Moltzau
– Jason Tucker
Agreed on
Evidence-based research is crucial for AI policy development
Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms and testing solutions – Multi-stakeholder Partnerships
Explanation
The role of policy researchers is crucial, and encouraging private sector engagement in partnerships with the policy research community is healthy. This requires multi-stakeholder effort to surface different potential policy harms, test and unearth evidence, and bring forward diverse perspectives around potential AI harms.
Evidence
Having those environments where you can test, trial and error and find out what the harms are and how they actually play out, particularly, for example, for GPAI systems, is really, really something where the scientific and research community can help policymakers
Major discussion point
Evidence-Based Policymaking and Research Integration
Topics
Legal and regulatory | Economic
Agreed with
– Virginia Dignam
Agreed on
AI requires multidisciplinary and participatory approaches
Jason Tucker
Speech speed
173 words per minute
Speech length
1089 words
Speech time
375 seconds
AI advances in healthcare exist only in some areas and for some people, with benefits not equally distributed globally – Healthcare Inequality
Explanation
While AI is repeatedly cited as revolutionizing healthcare, there are areas where AI is making advances but only in some areas and for some people. This isn’t necessarily always positive, and we shouldn’t take for granted that advancements will continue or that benefits will be equally distributed globally.
Evidence
Through many of the sessions over the last few days alone, I’ve heard six or seven accounts of healthcare being used as the primary example of the massive potential of AI
Major discussion point
Sector-Specific Applications and Challenges
Topics
Human rights | Development
Healthcare AI governance should be based on historical precedent of regulation enabling medical innovation, such as international treaties regulating medicines – Historical Precedent
Explanation
The greatest innovations in healthcare and global health have been based on regulation and incorporation of new technologies. Modern medicine is built on regulation, exemplified by the Brussels treaty regulating potent medicines 100 years ago, which gave consumers confidence in pharmaceutical safety and effectiveness.
Evidence
A hundred years ago, a group of international experts met in Brussels to sign a treaty which regulated the potent medicines. So before then, when you went to the pharmacy, you had no idea what you’re getting
Major discussion point
Sector-Specific Applications and Challenges
Topics
Legal and regulatory | Human rights
Public health authorities can drive markets and demand standards that force private sector compliance with values and requirements – Public Authority Power
Explanation
Public health authorities are huge players that can drive markets and demand standards that force the private sector and public-private partnerships to meet certain requirements. This shows how values can be aligned creatively through the power of public authorities.
Evidence
The public health authorities are huge players and they can drive markets, they can demand standards that force the private sector and public-private partnerships to meet
Major discussion point
Sector-Specific Applications and Challenges
Topics
Legal and regulatory | Human rights
Evidence-based research is desperately needed to inform AI policy, with some existing research siloed in academia and other research gaps needing to be addressed – Evidence Gaps
Explanation
What is lacking is evidence-based research to inform AI policy. Some of this research exists but is siloed within academia, while other research simply doesn’t exist and is desperately needed. This disconnect between research and policymaking needs to be addressed.
Evidence
Some of this exists and it’s siloed within academia, and some of it just doesn’t exist and it’s desperately needed
Major discussion point
Evidence-Based Policymaking and Research Integration
Topics
Legal and regulatory | Development
Agreed with
– Virginia Dignam
– Alex Moltzau
– Anne Flanagan
Agreed on
Evidence-based research is crucial for AI policy development
Audience
Speech speed
131 words per minute
Speech length
346 words
Speech time
158 seconds
Military AI applications require global harmonized regulatory effort, taking inspiration from academic work on nuclear proliferation – Military AI Regulation
Explanation
Military and AI integration has been tight between AI research and the military-industrial complex globally. This is an area where researchers, economists and policymakers can lead on a global harmonized effort, taking inspiration from academic work on nuclear proliferation in the 1950s and 1960s.
Evidence
Dario Guarachio made a very good speech about the military and civilian trajectories of AI. We went into how tight the integration has been between AI research and the military-industrial complex in the US in particular, but also globally
Major discussion point
Sector-Specific Applications and Challenges
Topics
Cybersecurity | Legal and regulatory
Privacy protection requires competent understanding of when data sharing is appropriate versus when it creates risks for citizens – Privacy Competence
Explanation
Organizations need to be competent enough to understand when they are trying to protect people and when data sharing within an organization is appropriate. There are cases that can go horribly wrong when trying to model citizens, as shown by the Dutch welfare scandal where people were placed in prisons and lost rights to their children.
Evidence
Dutch welfare scandal. People were placed in prisons and lost their rights to their kids. There are some serious potential consequences for citizens that are sometimes hard to predict
Major discussion point
Fundamental Questions and Approaches
Topics
Human rights | Legal and regulatory
Contextualized problem descriptions are more effective than generalized approaches for building public support for regulation – Contextualized Problems
Explanation
Going to the ground with highly contextualized problem descriptions, rather than engaging problems from too generalized a standpoint, is more effective. When you discuss the actual problems that people in the private or public sector are dealing with, these are often framed as lacking regulation or the safety net that regulation creates; this connection is lost when problems become sweeping and generalized.
Evidence
If you go down and talk to the actual problems in private sector, or in public sectors which people are actually dealing with, they are often from the perspective of lacking regulation or lacking the safety net that regulation creates
Major discussion point
Fundamental Questions and Approaches
Topics
Legal and regulatory
Tatjana Titareva
Speech speed
157 words per minute
Speech length
881 words
Speech time
335 seconds
Session goals focus on introducing roadmap concepts, discussing global cooperation while preserving regional diversity, and identifying mechanisms for research integration into governance
Explanation
The session aims to achieve three specific goals: introducing and discussing key concepts of the AI policy roadmap, exploring how AI policy research can support global cooperation while preserving regional diversities, and identifying mechanisms for effective integration of AI policy research into governance processes.
Evidence
We would like to achieve the following goals for today’s session. Number one, to introduce and discuss the key concepts of the roadmap. Secondly, to discuss with you both in person and online, how can AI policy research support global cooperation in AI governance while preserving the regional diversities? And thirdly, what mechanisms can best support the access to an effective integration of AI policy research into AI governance processes?
Major discussion point
AI Policy Research Roadmap and Global Network
Topics
Legal and regulatory | Development
Capacity building initiatives are crucial for raising awareness and competence in navigating AI systems across public sector and general public
Explanation
Part of the roadmap emphasizes the need for capacity building initiatives to raise general awareness about how to navigate AI systems and use them effectively. This applies to both public sector organizations and the general public who need to understand how to work with these systems appropriately.
Evidence
Part of the roadmap is the need for capacity building initiatives as well. Raising the general awareness, whether it be within the public sector or the general public, on how to navigate these systems and use them well is indeed a crucial priority
Major discussion point
Evidence-Based Policymaking and Research Integration
Topics
Development | Legal and regulatory
Question zero approach emphasizes evaluating whether AI is the appropriate solution before implementation at organizational levels
Explanation
Organizations are rushing to adopt AI tools without understanding what problems they are trying to solve with these tools. The ‘question zero’ approach advocates for first determining whether AI is the best solution before proceeding with implementation, rather than adopting AI simply because it’s available.
Evidence
Here we are talking about the organizational level, where organizations are having a FOMO moment, rushing to adopt AI tools without understanding what kind of problems they are trying to solve with these tools
Major discussion point
Fundamental Questions and Approaches
Topics
Legal and regulatory | Economic
Agreements
Agreement points
Regulation enables rather than hinders innovation
Speakers
– Virginia Dignam
– Alex Moltzau
– Eltjo Poort
Arguments
The binary opposition between innovation and regulation is false – well-applied regulation creates better innovation for citizens and communities – False Binary Opposition
Regulation does not hamper innovation but actually speeds it up by providing clear guidance and reducing uncertainty for organizations – Regulation Enables Innovation
Clear regulatory guardrails help organizations innovate faster by eliminating concerns about compliance and legal risks – Clear Guidance Benefits
Summary
All three speakers strongly reject the false dichotomy between innovation and regulation, arguing that well-designed regulation actually accelerates innovation by providing clarity and reducing uncertainty for organizations.
Topics
Legal and regulatory | Economic
Evidence-based research is crucial for AI policy development
Speakers
– Virginia Dignam
– Alex Moltzau
– Jason Tucker
– Anne Flanagan
Arguments
Evidence-based scientific research is more important than ever to counter political whims and private sector dominance in AI decisions – Scientific Research Importance
Individual Responsibility is needed from individuals in government and private companies to bridge the research-policy gap – Individual Responsibility
Evidence-based research is desperately needed to inform AI policy, with some existing research siloed in academia and other research gaps needing to be addressed – Evidence Gaps
Lack of evidence base makes it difficult to legislate technologies that haven’t fully materialized yet, requiring sandbox environments for testing – Evidence Challenges
Summary
Speakers unanimously emphasize the critical need for robust, evidence-based research to inform AI policy decisions, highlighting gaps between research and practice that must be bridged.
Topics
Legal and regulatory | Development
Diversity in governance approaches is necessary over single global framework
Speakers
– Joanna Bryson
– Anne Flanagan
– Virginia Dignam
Arguments
Single global governance structure risks regulatory capture and market concentration, making diversity in legislation important for resilience – Diversity for Resilience
Different countries have different priorities, capacities, and risks, making some diversity in legislation sensible and necessary – Regional Differences
Policy interoperability is crucial since global regime is unrealistic and inappropriate given different cultural contexts and definitions of harm – Policy Interoperability
Summary
Speakers agree that a single global AI governance structure is neither feasible nor desirable, advocating instead for diverse approaches that can interoperate while respecting regional differences and cultural contexts.
Topics
Legal and regulatory | Human rights
AI requires multidisciplinary and participatory approaches
Speakers
– Virginia Dignam
– Anne Flanagan
Arguments
AI is fundamentally a socio-technical system requiring multidisciplinary research beyond single disciplines to address societal complexity – Multidisciplinary Approach
Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms and testing solutions – Multi-stakeholder Partnerships
Summary
Both speakers emphasize that AI’s complexity as a socio-technical system requires collaborative, multidisciplinary approaches involving diverse stakeholders rather than single-discipline solutions.
Topics
Legal and regulatory | Sociocultural
Similar viewpoints
All three speakers advocate for the ‘question zero’ approach – critically evaluating whether AI is the appropriate solution before implementation, rather than adopting AI simply because it’s technologically possible.
Speakers
– Virginia Dignam
– Jason Tucker
– Tatjana Titareva
Arguments
Question zero must be asked: whether AI is the best option, using AI because we should rather than because we can – Question Zero
Question zero approach emphasizes evaluating whether AI is the appropriate solution before implementation at organizational levels
Topics
Legal and regulatory | Human rights
Both speakers highlight how current AI deployment patterns exacerbate existing inequalities and fail to provide equitable benefits across different populations and regions.
Speakers
– Virginia Dignam
– Jason Tucker
Arguments
Current AI deployment reinforces existing inequalities and marginalizes non-Western worldviews and indigenous knowledge – Inequality Reinforcement
AI advances in healthcare exist only in some areas and for some people, with benefits not equally distributed globally – Healthcare Inequality
Topics
Human rights | Development
Both speakers emphasize the critical need for capacity building and competence development in public sector organizations to effectively implement and govern AI systems.
Speakers
– Alex Moltzau
– Tatjana Titareva
Arguments
Competence building in public sector organizations and collaboration with research communities is essential for appropriate AI implementation – Competence Building
Capacity building initiatives are crucial for raising awareness and competence in navigating AI systems across public sector and general public
Topics
Development | Legal and regulatory
Unexpected consensus
Industry-academia-government collaboration necessity
Speakers
– Eltjo Poort
– Alex Moltzau
– Anne Flanagan
Arguments
Transboundary AI policy cooperation is essential for multinational clients while respecting regional differences – Transboundary Cooperation
Scientific panel established within AI Act provides direct mechanism for research community to engage in regulatory framework – Scientific Panel Integration
Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms and testing solutions – Multi-stakeholder Partnerships
Explanation
Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collaborative approaches, despite these sectors often having competing interests. This suggests a mature understanding that AI governance challenges are too complex for any single sector to address alone.
Topics
Legal and regulatory | Economic
Rejection of technological determinism
Speakers
– Virginia Dignam
– Jason Tucker
– Audience
Arguments
AI is not like weather – it’s developed by people and organizations, with choices about values and priorities that can be democratically influenced – Human Agency
Contextualized problem descriptions are more effective than generalized approaches for building public support for regulation – Contextualized Problems
Explanation
There was unexpected consensus on rejecting technological determinism – the idea that AI development is inevitable and uncontrollable. Speakers from different backgrounds agreed that AI is shaped by human choices and can be democratically influenced, which is significant given the common narrative of AI as an unstoppable force.
Topics
Human rights | Legal and regulatory
Overall assessment
Summary
The discussion revealed strong consensus on several key principles: regulation enables innovation rather than hindering it, evidence-based research is essential for effective AI policy, diverse governance approaches are preferable to single global frameworks, and multidisciplinary collaboration is necessary. There was also unexpected agreement across sectors on the need for collaborative approaches and rejection of technological determinism.
Consensus level
High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggests a mature field where stakeholders share core values about responsible AI development while recognizing the complexity of operationalizing these principles. The consensus has positive implications for building effective AI governance frameworks that can balance innovation with protection of rights and values.
Differences
Different viewpoints
Global governance structure vs. regulatory diversity
Speakers
– Joanna Bryson
– Anne Flanagan
– Virginia Dignam
Arguments
Single global governance structure risks regulatory capture and market concentration, making diversity in legislation important for resilience – Diversity for Resilience
Different countries have different priorities, capacities, and risks, making some diversity in legislation sensible and necessary – Regional Differences
Policy interoperability is crucial since global regime is unrealistic and inappropriate given different cultural contexts and definitions of harm – Policy Interoperability
Summary
Joanna Bryson and Anne Flanagan strongly argue against a single global governance structure, emphasizing the need for regulatory diversity and interoperability. While Virginia Dignam doesn’t explicitly advocate for a single global system, her roadmap approach suggests more coordinated global responses, creating tension with the diversity advocates.
Topics
Legal and regulatory | Economic
Unexpected differences
Approach to addressing AI policy research funding risks
Speakers
– Eltjo Poort
– Virginia Dignam
– Isadora Hellegren
Arguments
Research Funding Risk – Strong pushback against AI regulation may affect AI policy research funding by labeling it as activist research
Evidence-based scientific research is more important than ever to counter political whims and private sector dominance in AI decisions – Scientific Research Importance
Building resilient institutions for academic integrity is crucial when political winds change direction – Academic Integrity
Explanation
While all speakers acknowledge the risk of political pushback against AI policy research, they propose different responses. Eltjo raises concerns about research being labeled as ‘activist,’ Virginia emphasizes doubling down on evidence-based research, and Isadora focuses on building resilient institutions. This disagreement is unexpected because they’re all supportive of AI policy research but have different strategies for protecting it.
Topics
Legal and regulatory | Development
Overall assessment
Summary
The main areas of disagreement center on global governance approaches (centralized vs. diverse), implementation mechanisms for evidence-based policy (academic vs. institutional vs. industry-focused), and strategies for protecting research from political pressures.
Disagreement level
The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of responsible AI governance and evidence-based policy, their different approaches could lead to conflicting strategies. The disagreement on global governance structure is particularly important as it affects how international cooperation should be organized and whether diversity or coordination should be prioritized.
Partial agreements
Partial agreements
Similar viewpoints
All three speakers advocate for the ‘question zero’ approach – critically evaluating whether AI is the appropriate solution before implementation, rather than adopting AI simply because it’s technologically possible.
Speakers
– Virginia Dignam
– Jason Tucker
– Tatjana Titareva
Arguments
Question zero must be asked: whether AI is the best option, using AI because we should rather than because we can – Question Zero
Question zero approach emphasizes evaluating whether AI is the appropriate solution before implementation at organizational levels
Topics
Legal and regulatory | Human rights
Both speakers highlight how current AI deployment patterns exacerbate existing inequalities and fail to provide equitable benefits across different populations and regions.
Speakers
– Virginia Dignam
– Jason Tucker
Arguments
Current AI deployment reinforces existing inequalities and marginalizes non-Western worldviews and indigenous knowledge – Inequality Reinforcement
AI advances in healthcare exist only in some areas and for some people, with benefits not equally distributed globally – Healthcare Inequality
Topics
Human rights | Development
Both speakers emphasize the critical need for capacity building and competence development in public sector organizations to effectively implement and govern AI systems.
Speakers
– Alex Moltzau
– Tatjana Titareva
Arguments
Competence building in public sector organizations and collaboration with research communities is essential for appropriate AI implementation – Competence Building
Capacity building initiatives are crucial for raising awareness and competence in navigating AI systems across the public sector and the general public
Topics
Development | Legal and regulatory
Takeaways
Key takeaways
AI policy research must be multidisciplinary and evidence-based to bridge the gap between research and practice for responsible AI governance
The false binary between innovation and regulation was debunked – well-designed regulation actually accelerates innovation by providing clear guidance and reducing uncertainty
Global AI governance should prioritize policy interoperability over uniform regulation, respecting regional diversity while enabling cooperation
Question zero must always be asked: whether AI is the best solution to a problem, using AI because we should rather than because we can
AI is a socio-technical system shaped by human choices and values, not an uncontrollable force like weather
Current AI deployment reinforces existing inequalities and marginalizes non-Western worldviews, requiring inclusive and participatory governance approaches
Evidence-based policymaking is crucial but challenging when regulating emerging technologies, requiring sandbox environments and multi-stakeholder partnerships
Capacity building in public sector organizations and collaboration with research communities is essential for appropriate AI implementation
The AI Policy Research Roadmap provides a framework guided by principles of human welfare, accountability, transparency, inclusivity, and ethical governance
Resolutions and action items
Launch of the Global AI Policy Research Network (GlobalPOL) to foster international collaboration on AI policy research
Endorsement and adoption of the AI Policy Research Roadmap with its core principles and research priorities
Establishment of AI fellowships and student/staff exchange programs between participating institutions
Annual AI Policy Summit scheduled for mid-November in the Netherlands
Development of AI policy briefs and capacity building initiatives
Integration of research community into EU AI Act implementation through scientific panel and regulatory sandboxes
Call for individuals in government and private companies to actively bridge the research-policy gap in their work
Encouragement for organizations to join the GlobalPOL network and engage with the roadmap
Unresolved issues
How to protect AI policy research from being labeled as ‘activist research’ amid political pushback against regulation
Specific mechanisms for ensuring policy interoperability across different regional regulatory frameworks
How to address the policy gap for systems integrators and consultants who fall between AI providers and deployers
Concrete approaches for measuring and comparing the benefits and risks of different AI applications
How to effectively integrate indigenous knowledge and non-Western worldviews into AI governance frameworks
Strategies for preventing regulatory capture while maintaining necessary industry engagement
How to balance privacy protection with appropriate data sharing in public sector AI applications
Specific frameworks for global cooperation on military AI applications and dual-use technologies
Suggested compromises
Policy should operate at the principle level to remain maintainable while providing practical guardrails for implementation
Regulatory sandboxes as a middle ground for testing AI systems while gathering evidence for policy development
Multi-stakeholder partnerships that include private sector engagement while maintaining democratic oversight
Flexible regulatory frameworks that can adapt to technological changes without requiring complete overhauls
Regional diversity in AI governance approaches while working toward interoperability standards
Contextualized problem descriptions rather than generalized approaches to build broader public support for regulation
Thought provoking comments
AI doesn’t happen to us. The current narrative is often that AI is something like the weather: we have no idea how to control it, so the only thing we can do is take an umbrella if it’s going to rain and address the effects of the weather. But AI is not weather. AI is developed by us, by organizations, by people, and what AI looks like and what we do with it ultimately depend on the choices that we make.
Speaker
Virginia Dignam
Reason
This metaphor fundamentally reframes the AI governance debate by challenging the fatalistic narrative that AI development is inevitable and uncontrollable. It shifts agency back to human decision-makers and emphasizes the political nature of technological choices.
Impact
This comment established a foundational framework for the entire discussion, moving away from reactive approaches to proactive governance. It influenced subsequent speakers to focus on deliberate policy choices and the importance of asking ‘question zero’ – whether AI should be used at all in specific contexts.
I also think that it’s a mistake to have a single governance structure that would probably get captured. That’s the problem of regulatory capture and market concentration we’re seeing right now. And a lot of the people pushing very hard for uniformity are coming from the country where a lot of the concentrated power is right now.
Speaker
Joanna Bryson
Reason
This comment introduces a critical counterpoint to the assumed benefits of global harmonization, highlighting power dynamics and the risk of regulatory capture. It challenges the premise that unified governance is inherently better.
Impact
This intervention shifted the discussion from seeking global uniformity to embracing ‘resilience through diversity.’ It prompted other participants to reconsider the balance between cooperation and maintaining regional autonomy, leading to discussions about policy interoperability rather than harmonization.
The role of consultants and systems integrators falls a little bit into a gap in the policy so far. For example, looking at the AI Act, you see that there’s a lot of attention to providers and to deployers. And we tend to be in the middle of that.
Speaker
Eltjo Poort
Reason
This comment identifies a concrete governance gap that hadn’t been explicitly addressed, highlighting how current regulatory frameworks may miss important actors in the AI ecosystem.
Impact
This observation added practical complexity to the discussion and demonstrated how theoretical policy frameworks can have unintended gaps when applied to real-world business structures. It reinforced the need for evidence-based, contextual policy research.
We see some very strong pushback against AI regulation in the United States right now. And this may eventually also affect AI policy research, because it may come to be considered activist research, putting funding at risk.
Speaker
Eltjo Poort
Reason
This comment raises the meta-concern about the sustainability and independence of AI policy research itself, highlighting how political winds can threaten the very research needed for evidence-based governance.
Impact
This intervention prompted Virginia Dignam to acknowledge the narrowing line between AI policy research and activism, leading to a discussion about the need for resilient institutions and academic integrity. It added urgency to the conversation about protecting evidence-based research from political interference.
If we throw resources at AI, we can fix the healthcare system – so the narrative goes. We’re diverting resources away from chronically underfunded public healthcare systems into AI for healthcare in the hope that a magic pill will be created to fix the healthcare system. If it works, that’s fantastic, but this is incredibly risky.
Speaker
Jason Tucker
Reason
This comment challenges the techno-solutionist narrative by highlighting opportunity costs and the risk of diverting resources from proven solutions to speculative AI applications.
Impact
This intervention grounded the discussion in concrete sectoral realities and reinforced the importance of ‘question zero’ – whether AI is the appropriate solution. It demonstrated how AI policy research must consider not just AI’s potential benefits but also its opportunity costs.
I think we can take inspiration, for example, from the work of academics in the 1950s and 60s on nuclear proliferation, and consider areas where it is not appropriate to apply AI technologies, gather a global perspective on that, and make a global push to limit those very real potential harms of AI technology.
Speaker
Petter Eriksson
Reason
This comment draws a powerful historical parallel between AI governance and nuclear proliferation, suggesting that some AI applications may require absolute prohibitions rather than just regulation.
Impact
This intervention introduced the concept of AI applications that should be completely off-limits, moving beyond the typical regulatory framework discussion to consider fundamental prohibitions. It suggested that military AI might be an area where global harmonization is both possible and necessary.
Overall assessment
These key comments fundamentally shaped the discussion by challenging several underlying assumptions about AI governance. Virginia Dignam’s weather metaphor established human agency as central to AI development, while Joanna Bryson’s intervention on regulatory capture shifted the conversation from seeking uniformity to embracing diversity. The practical insights from industry (Poort) and healthcare (Tucker) grounded theoretical discussions in real-world complexities, while concerns about research independence (Poort again) and military applications (Eriksson) added urgency and scope to the governance challenges. Together, these interventions moved the discussion from abstract principles toward concrete, contextual, and politically aware approaches to AI policy research, emphasizing the need for evidence-based, participatory, and resilient governance frameworks that can adapt to diverse contexts while maintaining democratic accountability.
Follow-up questions
How can we create better feedback loops between the scientific community and policymakers in AI governance?
Speaker
Alex Moltzau
Explanation
This is crucial for ensuring evidence-based policymaking and avoiding the gap between research and practice in AI regulation
How can we operationalize the principles of the AI Policy Research Roadmap into concrete, actionable guidelines?
Speaker
Virginia Dignam
Explanation
Moving from principles to practical implementation is essential for the roadmap to have real-world impact
What are the most effective ways to measure both benefits and risks of AI systems across different contexts?
Speaker
Virginia Dignam
Explanation
Standardized measurement approaches are needed to inform evidence-based policy decisions
How can policy interoperability be achieved between different regional AI governance frameworks?
Speaker
Anne Flanagan
Explanation
This addresses the challenge of coordinating AI governance globally while respecting regional diversity
What evidence-based research is needed to inform AI policy in healthcare specifically?
Speaker
Jason Tucker
Explanation
Healthcare AI policy lacks sufficient evidence base, and this research could inform better governance in this high-risk sector
How can we develop globally harmonized efforts to limit military applications of AI?
Speaker
Petter Eriksson
Explanation
Drawing inspiration from nuclear proliferation controls, this could address one of the most concerning applications of AI technology
How can scientific research help public sector organizations better balance privacy protection with AI innovation?
Speaker
Knut (via chat)
Explanation
Public sector organizations struggle with finding the right balance between being overly cautious and neglecting privacy when implementing AI
How can AI policy research be protected from political pushback and funding risks?
Speaker
Eltjo Poort
Explanation
With increasing political opposition to AI regulation, there’s a risk that policy research could be viewed as activist and lose funding
How can we develop more contextualized, ground-level problem descriptions to build public support for AI regulation?
Speaker
Mattias Brändström
Explanation
Moving away from generalized discussions to concrete, specific problems could help build better understanding and support for regulation
What methods of regulatory and organizational innovation can be developed by applying tools from technological innovation?
Speaker
Virginia Dignam
Explanation
Applying dynamic and adaptable approaches from technology development to regulatory frameworks could improve governance effectiveness
How can AI regulatory sandboxes be optimized to generate better evidence for policymaking?
Speaker
Alex Moltzau
Explanation
Sandboxes provide testing environments that could generate crucial evidence about AI impacts, but their design and implementation need optimization
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.