Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44

11 Oct 2023 07:45h - 08:45h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Chris Jones

Geopolitical discussions should focus on areas of agreement rather than disagreement in order to foster cooperation and prevent conflict, an approach that aligns with SDG 16: Peace, Justice and Strong Institutions. Breaking large tasks down into smaller, manageable ones, as advocated by engineer Chris Jones, promotes effective problem-solving and resource allocation, in line with SDG 9: Industry, Innovation and Infrastructure. A positive stance towards international cooperation, and towards addressing challenges by understanding and managing their smaller components, aligns with SDG 17: Partnerships for the Goals.

Large organizations may need to change in order to become more agile and adapt to emerging technologies, a principle also aligned with SDG 9. Governance discussions should consider both shared values and technical requirements, and the process of governance is as important as the final product, as the UK’s online harms legislation demonstrated. Multi-stakeholder governance, drawing on diverse expertise and perspectives, is crucial, echoing SDG 17, and the airline industry’s success in implementing common standards offers an example of a bottom-up approach aligned with SDG 9. Taken together, these approaches, which emphasize collaboration, agility, inclusive governance, and bottom-up solutions, contribute to sustainable development, peace, and justice.

Sheetal Kumar

The analysis examines the perspectives surrounding future technologies and their impact on marginalized groups, as well as the governance and development of these technologies.

One argument put forward is that future technology developments may not necessarily bring positive impacts, particularly for marginalized groups. New technologies like quantum-related developments, metaverse platforms, nanotech, and human-machine interfaces can be complex and intimidating, making it difficult for already marginalized individuals to access and benefit from them. This highlights the potential for further exacerbation of inequalities if technology is not developed and implemented in an inclusive manner.

On the other hand, there is a strong emphasis on the importance of inclusive technology development and governance. The argument asserts that the development and governance of technology should be more inclusive, particularly in relation to marginalized groups. This approach recognizes the need for diverse perspectives and experiences to be considered to avoid further marginalisation and ensure equitable access to technological advancements.

Furthermore, the analysis suggests that governments and industry stakeholders should prioritise engaging in multistakeholder discussions related to technology developments. Examples such as the IGF Best Practice Forum on Cybersecurity and the policy network on internet fragmentation are cited as instances of successful multistakeholder dialogue. This underscores the significance of collaboration and cooperation among various stakeholders to ensure that technological advancements are beneficial and meet the needs of all.

In terms of future-proofing, an important observation is that high-tech solutions are not the only route. Although future-proofing is often associated with cutting-edge advancements, it can equally rest on approaches that do not rely on new technology at all, such as strengthening the people, processes, and institutions that govern it.

Another noteworthy perspective is the advocacy for connecting multilateral spaces through people and not solely through novel technology. The analysis highlights the need to improve and enhance existing spaces where work is being done, making them more diverse, inclusive, and connected. By prioritising diversity and inclusivity in these spaces, stakeholders can foster collaboration, coordination, and cooperation, ultimately leading to more effective and equitable outcomes.

The analysis also praises the United Nations’ Internet Governance Forum (IGF) as an open, inclusive deliberative space that plays a crucial role in discussing and shaping technology governance. It emphasises the significance of preserving and enhancing spaces like the IGF, which offer unique opportunities for stakeholders to come together, exchange ideas, and collaboratively address the challenges associated with technology governance.

Additionally, transparency, engagement, and the preservation of user autonomy are considered fundamental principles that should be upheld in technology governance. The analysis argues that good governance principles, which are already known, should be applied to new technologies. This includes timely and clear information sharing that is accessible to a wide range of individuals, ensuring transparency and meaningful engagement.

Another notable point is the integration of high-level principles, specifically the international human rights framework, in guiding the use of technologies. The analysis highlights that technologies like AI and data impact various aspects of life and suggests that the international human rights framework can be embedded throughout the technology supply chain through standards. This approach promotes a rights-respecting world where everyone benefits and ensures that the development and usage of technology uphold human rights.

In conclusion, the analysis presents various perspectives on the impact and governance of future technologies. It highlights the importance of inclusive technology development, multistakeholder engagement, connecting multilateral spaces through people, and embedding high-level principles such as the international human rights framework. By considering these perspectives and incorporating them into technology governance, it is possible to strive towards a more equitable and beneficial technological future.

Gallia Daor

Intergovernmental organisations, such as the Organisation for Economic Co-operation and Development (OECD), have demonstrated their ability to be agile while maintaining a thorough and evidence-based approach. The OECD’s AI principles were adopted in an impressive one-year time frame, making it the fastest process ever at the organisation. This highlights the organisation’s ability to adapt to the rapidly evolving landscape of emerging technologies.

To facilitate global dialogue on emerging technologies, the OECD established the Global Forum on Technology. This platform provides an avenue for stakeholders from different countries and sectors to come together and discuss the challenges and opportunities presented by these new technologies. This engagement ensures that decisions made by intergovernmental organisations are well-informed and incorporate perspectives from various stakeholders.

The importance of multi-stakeholder and interdisciplinary engagement in decision-making within intergovernmental organisations is evident through the OECD’s network of AI experts. With more than 400 experts from different stakeholder communities, the OECD is able to tap into a wide range of expertise and perspectives. This inclusivity ensures that the decisions made by the organisation are comprehensive and representative of diverse viewpoints.

Recognising the need to keep pace with emerging technologies, intergovernmental organisations like the OECD have established dedicated working groups that focus on different sectors. These working groups, such as those on compute, climate, and AI future, allow for a deeper understanding of the specific challenges and opportunities posed by each sector. By focusing on these emerging technology sectors, intergovernmental organisations can proactively address the unique issues that arise within each area.

High-level principles, such as trustworthiness, responsibility, accountability, inclusiveness, and alignment with human rights, are considered important and relevant for all technologies. Intergovernmental organisations aspire to develop technologies that are trustworthy, responsible, and inclusive, while also being aligned with human rights. It is essential to factor in potential risks to human rights and ensure accountability in the development processes of these technologies.

However, there is often a gap between these high-level principles and their actual implementation in specific technologies. Technologies differ from one another, and some issues, such as data bias, may be specific to AI. This calls for careful examination of these factors during the governance of emerging technologies.

To address the complexity and differing requirements of different technologies, there may be a need to break up the governance processes into smaller components. By doing so, intergovernmental organisations can accommodate the varying expertise and process requirements associated with different technologies. This approach ensures that governance structures are tailored to the specific needs of each technology, promoting more effective decision-making and implementation.

In conclusion, intergovernmental organisations have shown their ability to be agile, adaptable, and evidence-based in the face of emerging technologies. The OECD’s fast adoption of AI principles and the establishment of the Global Forum on Technology exemplify their commitment to staying at the forefront of technological advancements. The inclusive and interdisciplinary approach to decision-making, along with the focus on specific technology sectors, further enhances the effectiveness of intergovernmental organisations in addressing the challenges and harnessing the opportunities presented by emerging technologies.

Carolina Aguirre

The analysis considered various perspectives on technological development and governance. The speakers emphasised the need to maintain openness in both processes, drawing parallels with the Internet Governance Forum (IGF), which has nearly 20 years of experience in dealing with open technology. They highlighted that the IGF’s bottom-up approach plays a vital role in achieving openness.

The growing influence of the private sector in shaping technological developments was recognised as an important aspect. The speakers noted that many new technological advancements are being driven and progressed by private companies. This recognition indicates the need to understand the limits and the actors shaping technology ecosystems.

There was concern that new technologies are being developed behind closed doors, deviating from the open nature of the Internet’s original development. The speakers argued that such closed development is less open by nature. This observation raises questions about transparency and inclusivity in the creation of new technologies.

The speakers universally agreed that technology is not neutral and is influenced by societal values. This recognition signals the importance of considering the ethical and social implications of technological advancements. The broader impact on society must be a critical consideration in technological development and decision-making.

The adequacy of existing institutions in the face of challenges posed by globalisation and technological development was called into question. One speaker, Carolina Aguirre, expressed scepticism about the sufficiency of the institutions currently in place. The analysis revealed a need for institutions to adapt and keep up with the rapid changes brought about by technological progress.

Furthermore, the analysis highlighted the decline of globalisation in terms of trade and international dialogue. This observation suggests that traditional processes concerning internationalisation are struggling to keep pace with technological advancements.

In conclusion, the analysis presented a multi-faceted view on technological development and governance. The speakers stressed the importance of openness, raised concerns about closed development, highlighted the influence of the private sector, and acknowledged the influence of societal values on technology. Additionally, the analysis pointed out the challenges faced by existing institutions and the decline of globalisation. These insights shed light on the need for continuous evaluation and adaptation in the realms of technology and governance.

Thomas Schneider

The analysis highlights several key points regarding disruptive technologies, global digital governance, and the regulation of artificial intelligence (AI). Firstly, it emphasizes the need for a change in approach towards disruptive technologies. As technologies continue to develop rapidly, with increasing complexity, it is important to adopt a more distant perspective to effectively regulate them. The analysis suggests that machines and algorithms can play a crucial role in developing regulations for disruptive technologies, taking into account their unique characteristics and potential impact.

In terms of governance, the analysis asserts that collaboration is a better approach than conflict. It argues that leaders have been losing sight of the notion of cooperation, which is crucial for achieving sustainable and effective global digital governance. Collaboration is believed to promote a better working environment and foster long-term solutions to complex challenges.

Moreover, the analysis delves into the regulation of AI. It argues that while technology changes rapidly, human behavior remains relatively stable over time, so it is regulation, rather than human nature, that must adapt. Historical reactions to new technologies, including fear of job losses and ignorance of a technology’s potential, are cited to highlight the need for a balanced and adaptable regulatory framework.

The analysis also highlights the importance of building a network of norms in response to advancements in AI. It emphasizes the need for different levels of harmonization depending on the context and argues that institutional arrangements should adapt to technological innovations to effectively govern AI.

Additionally, the analysis makes an interesting observation about the notion of a multi-stakeholder approach. It suggests that this concept is here to stay and proposes that with technology dematerializing, rule-making should also dematerialize. This means that decisions should be made based on stakeholder involvement rather than geographical boundaries, indicating a shift towards a more inclusive and participatory governance model.

In conclusion, the analysis brings attention to the need for a change in approach towards disruptive technologies, the importance of collaboration over conflict in global digital governance, the need to adapt regulation of AI in response to human stability, the necessity of building a network of norms to govern AI advances, and the significance of the multi-stakeholder approach in dematerializing rule-making. These insights provide valuable considerations for policymakers and organizations looking to navigate the complex landscape of disruptive technologies and governance in the digital age.

Alžběta Krausová

The convergence of technologies has become a cause for concern as it raises ethical and privacy issues. The development of human brain interfaces is particularly problematic as it intrudes on the privacy of our minds. This invasion into individuals’ innermost thoughts and feelings is seen as a major problem, raising questions about personal autonomy and the protection of mental privacy.

Additionally, there is a growing recognition of the importance of defining our future world. As technology continues to advance rapidly, it is crucial to establish clear guidelines and regulations to ensure its safe and ethical use. This includes operationalizing our current ethical principles in new and unfamiliar situations that arise with technological advancements. By applying our existing ethical frameworks to emerging technologies, we can address the ethical challenges they present and ensure they align with our values and principles.

Furthermore, it is argued that considering case-by-case scenarios is necessary when making decisions about the use of artificial intelligence (AI) and other advanced technologies. While general principles and guidelines guide our ethical considerations, it is important to take into account the specific context and circumstances surrounding each situation. This approach enables us to address the unique ethical dilemmas that may arise and make more nuanced and informed decisions.

Moreover, valuing cultural understanding and emotional connections is emphasized as a means to reduce inequalities and foster positive interpersonal relations. Recognizing the diversity of cultures and perspectives in our global society can help bridge gaps and promote empathy and understanding among individuals from different backgrounds. Striving for understanding beyond a rational level, including emotional understanding, is seen as crucial for building inclusive and harmonious societies.

In conclusion, the convergence of technologies presents complex ethical challenges that necessitate attention. Defining our future world, operationalizing our principles, considering case-by-case scenarios, and valuing cultural understanding and emotional connections are key aspects that stakeholders should address. By doing so, they can navigate the ethical landscape in a way that promotes fairness, inclusivity, and respect for individual privacy.

Cedric Sabbah

Cedric Sabbah, an expert in international governance, identifies the challenges posed by the rapid development of technology and its frequent disruption for global governance. He observes that periodically, a new technology becomes a major concern for the international community. These concerns have evolved from critical infrastructure to IoT, ransomware, and internet governance. Emerging issues, such as jurisdictions, content moderation, and encryption, have also come to the forefront.

Sabbah highlights the ever-changing nature of the global tech industry, emphasizing that international organizations cannot afford to be complacent. He suggests that an agile and bottom-up approach could assist in addressing the governance challenges posed by technology. Sabbah believes that as technology constantly evolves, policies need to be regularly revisited and updated. Incorporating domestic bottom-up principles into international governance may bring value in tackling these challenges.

Furthermore, Sabbah emphasizes the importance of future-proof and flexible global tech governance. He proposes an approach that can adapt to the changing technological landscape while maintaining long-lasting effectiveness. Sabbah also recognizes the potential of multi-stakeholder processes and bottom-up approaches in enhancing the quality of global governance mechanisms. He advocates for involving non-traditional stakeholders in discussions and encourages the development of rules by specialized networks.

However, the existence of numerous international bodies and initiatives addressing similar topics raises concerns about fragmentation within these organizations. This fragmentation includes bodies within the UN as well as external entities like ITU, UNESCO, Human Rights Council, WIPO, OECD, COE, and the EU. It prompts the question of whether fragmentation is advantageous, allowing for diverse efforts, or a disadvantage that diminishes focus and resources.

In conclusion, there is a need to reassess existing concepts and explore new approaches to effectively govern emerging technologies. Sabbah’s insights underscore the significance of an agile and bottom-up approach, as well as the potential value of multi-stakeholder processes in addressing technology governance challenges. The concern regarding possible fragmentation within international organizations calls for thorough examination and coordination of processes to ensure effective resource allocation. Overall, global governance mechanisms must adapt and evolve in response to the rapidly changing technology landscape.

Session transcript

Cedric Sabbah:
Cedric, Shomael? Hi. Hi. Yeah? You guys hear me? Yes, we can hear you. Awesome. Is now a good time to start? So we are about to start. Okay. I’m watching a game of musical chairs. Hi, Cedric. Yeah, I hope so. Hi, Cedric, I think you can start. Okay, awesome. Do you guys see the PowerPoint? Yes. Okay, awesome. Okay, so, hi, everyone. My name is Cedric Sabbah. I’m Director for Emerging Technologies at the Office of the Deputy Attorney General for International Law at Israel’s Ministry of Justice. I apologize for not being here in person. My colleagues and I had to cancel our flight at the last minute due to the difficult situation here in Israel. The events taking place here are very sad, and it’s difficult for me to proceed as if everything’s a-okay, because it’s not. However, I do believe that the topic today is important. And thanks to the support of the panelists and other friends, I’ll do my best to make it as interesting as possible. So let’s get straight into it. In this afternoon’s panel, we’re going to go on a kind of a sci-fi policy adventure. I’m going to ask all of you, our panelists in particular, to project yourselves in, let’s say, IGF 2030. Maybe it’s taking place on a gigantic international space station somewhere. And you’re trying to figure out how the international community should deal with this new thing that’s happening in technology, whether it might be quantum sensing, quantum computing, quantum communications, human-machine interface, immersive technologies. And we’ll ask our panelists now how they envision the international community dealing with these issues that could arise in the future. So as you all know, technology develops rapidly. We’re seeing disruptions every year. Every few years, we’re seeing things. And those of us who follow the technology, we see it happening incrementally. 
But there’s usually like a tipping point where the international community focuses on the next big issue and decides this is what we need to deal with, only to be replaced by another issue a few years later. So just looking back in the days of, you know, when we started with cyber, so everybody was talking about critical infrastructure, and then it was IoT, and now it’s ransomware and Internet governance. In the past, I remember having a lot of discussions about jurisdiction and then content moderation. And now we’re talking about, you know, decrypting, companies providing assistance to decrypt child sexual exploitation material. For AI, no sooner than we were talking about high-risk AI and, you know, we had in mind biometrics and discrimination, all of a sudden, generative AI becomes the thing we’re talking about. So this is the known challenge of how law and policy play catch up to technology, and maybe it can’t really ever catch up. Everything is highly dynamic. And there’s never a point at which international organizations can just say, you know, we can pack our bags now. Our work here is done. It’s always evolving. And one specific issue I’d like to explore today is whether an agile and bottom-up approach can help international institutions deal with these challenges. I’m thrilled to introduce to you an absolutely all-star cast. So we have online Carolina Aguirre, a professor at the Universidad Católica del Uruguay in the Department of Humanities and also a former member of the UNESCO Expert Working Group on AI. We have Gallia Daor, a policy analyst at the OECD who coordinates the activities of CDEP. We have Sheetal Kumar, head of Engagement and Advocacy at Global Partners Digital. Dr. Alžběta Krausová online, who’s head of the Center for Innovation and Cyber Law Research at the Institute of State and Law in the Czech Academy of Sciences. And Chris Jones, Director of Technology and Analysis Director at the UK Foreign Commonwealth and Development Office. 
And of course, Ambassador Schneider, Thomas Schneider, who’s Ambassador and Director of International Affairs at the Swiss Federal Office of Communications in the Federal Department of the Environment, Transport, Energy and Communications. And to me, he’s Chairperson extraordinaire of the CAI at the Council of Europe. So the structure of this session will be as follows. We’ll divide it into three parts. I’ll try to finish talking soon so we can give the floor to the panellists. First, we’ll talk about the challenges of international governance that are presented by the next wave of disruptive technologies and maybe looking at the past of AI and Internet governance to see what we can learn. Then we’ll explore whether principles of agile governance, and in particular, bottom-up principles that we know from domestic policy, can be sort of internationalized and harnessed to deal with global tech governance. And lastly, we’ll try to identify some common principles that can be long-lasting and future-proof to enable a certain degree of institutional agility without losing sight of the important things. For each of these topics, I’ll ask one or two panellists to share their thoughts, and then the other panellists can chime in. And then Alžběta, towards the end, will provide some concluding remarks and observations, and hopefully we’ll have some time for Q&A. One disclaimer, what I’m going to say is my own personal views, not necessarily the views of the government of Israel. Now, before we start, just a second. Before we start, and just to change things up a little bit, the panel includes a challenge for you, the audience, in person and online, and also for the panellists. So I’ve asked the panellists to pick a few songs and artists that they like. You see them on the right, and the names of the panellists are on the left. 
And I also picked a song, and I selected from these the songs that connect with our panel today, and also I used Bing’s image creator to generate some really nice images that are inspired by the song titles. The challenge for all of you is to try and guess who picked which song, and all the speakers, including me, will be including a small clue in the presentation to help you figure it out. And you can give your answers to me in the Zoom chat or to any one of the panellists. I was planning on giving the winner some kind of small prize, but obviously I can’t right now, so I’ll try to keep that as a rain check for next year’s IGF or some other way. So now that all these explanations are out of the way, let’s get right into it. So let’s start talking about the challenge. So I’ll address first Thomas and Carolina. So the challenge for international organisations. So the first question is what lessons can we learn from our experience with Internet governance and AI governance in order to address the next wave of disruptive technologies? Specifically, what do you think should be the role of international bodies in addressing global digital governance challenge? I’ll paraphrase something that I heard a few days ago from my friend David Fairchild in another session. Many of the international bodies right now are, you could say, analogue bodies, asking them to deal with problems of a digital world. And also, if you can briefly address what I think is an elephant in the room, which is geopolitics that have a major role to play in shaping the debates. For example, ITU discussions on Internet governance, difficulties in making progress in the UN ad hoc committee on cybercrime. So can we really have a meaningful discussion on desirable and implementable global policy goals in light of geopolitics? So we’ll start with Thomas and then Carolina.

Thomas Schneider:
Okay, sometimes it helps to turn devices on. It’s a pity that you’re not here, but of course we do understand this. But I hope to see you again soon in Strasbourg, actually. Yeah, I think it’s a nice setting because it tries to be a little bit more forward-looking than other sessions. And hopefully a little bit, let’s say, also inspiring in a different way. Well, the challenges are, let’s say, substance-based and then there are geopolitical challenges. And this doesn’t go just for intergovernmental organizations. It actually goes for all those that are somehow dealing with policy and with rulemaking. Maybe I have to start with this is a crucial moment in history and things will be completely different tomorrow than they have been yesterday, because this is what you hear throughout history, ever since speeches have been recorded. Every person thinks that that particular moment in time is the moment where everything will change. And it’s true. Everything changes every day, but it’s also there’s recurring patterns in human behavior, not just in physics, but also in human behavior. So, to cut the long story short, I think, but nevertheless, we have an extremely fast development of technologies, of growing complexity, of being less material, which has effects compared to technologies that used to be material-based because you couldn’t copy them so quickly. You couldn’t move them so quickly. You cannot apply them remotely. You cannot use a car remotely while being in another continent, for instance, and so on and so forth. So, there are many similarities with previous disruptive technologies in the way that humans reacted to it, in the way they were regulated. The disruptiveness of the new technologies, I think, are of a different nature that has implications. And it forces us as rulemakers or us as society to adapt. But I’m not sure whether we have to adapt in a sense that we have also learned to think quicker and calculate quicker in our brains. That may be difficult. 
So, we have to actually probably change the way at which we look at things. We may have to look at things a little bit more, again, like maybe with the Greeks and the Romans, from a little bit more of a distance and say, okay, what are the big developments? And trying to understand them. And then maybe use machines and use algorithms to develop regulation and develop concepts to cope with algorithms because our brains may not be able to compute the nitty-gritty details also with regulation for this. And, for instance, to give you an example, we have parliamentarians now in Switzerland that use ChatGPT to formulate parliamentarian interventions and requests. And we are not yet allowed, but we are waiting for the moment where we decide because it takes resources to answer these requests. And the more we get, the more resources we need. And an efficiency gain for us would be if we could also write the reports that are supposed to reply to the parliamentarian interventions with ChatGPT. So, in the end, you have two machines talking to each other and we can both go on holidays, the parliamentarians and the administration. I think that’s something to think about in the end. But now, to be serious, we need to find ways to become more agile, more dynamic, without becoming stressed. So, we are going in the wrong way if we try to do things quicker. We have to do things differently as human beings in general, but also as rule makers. So, we need to use the new tools to face the challenges that the new tools create. Otherwise, I think it won’t work. Don’t ask me how. I’m not a technician. Maybe Vint and others know. But at least on the concept level, I think we need to find a different approach. And just two words to the geopolitical environment. 
And this is something that, as somebody who has been in this since the WSIS, since 2003, in that period, we were all still in the hope of the end of history with the fall of the Berlin Wall, with Nelson Mandela, with people with charisma, avoiding wars, creating peace, bringing people together. And we were hoping that the new technologies would bring us together, would strengthen the rules-based international order based on shared values. Unfortunately, we somehow have lost the track. And in particular, the leaders, be it dictators or be it leaders that have been elected by more or less democratic processes, are losing track of this notion of cooperating is better than fighting against each other. And I just hope, I’m also a historian, that we don’t need to go to really ugly wars in order to realize that cooperation is better than fighting each other. But for the time being, it seems a little at least unsure how we deal with this. And then, of course, technologies are not just new tools to do good things, but also to do bad things. And I’m not a prophet, so I will not go into detail. But I think we should realize and we should work together with people that realize that working together is actually sustainable. It’s also more fun. It doesn’t just create less harm. It’s actually also more fun than working against each other. Because if that’s not the case, no intergovernmental institution or multistakeholder institution works because it’s all built on a notion of we cooperate together. So you can’t blame the ITU or the UN for not producing results if those that are shaping it, i.e. the member states or the stakeholders in multistakeholder institutions, are not willing to cooperate. So this is just a few thoughts of mine. Thank you.

Cedric Sabbah:
Carolina, you’re up next. Can you hear us?

Carolina Aguirre:
Yes, thank you. So, to address these questions, and following on Thomas's intervention: I do think that we have nearly 20 years of experience behind us in dealing with an open technology like the Internet, and now with AI governance as an emerging global challenge, but one that is also spread out very much everywhere. I do think that we still need to make strong efforts to keep up the momentum on spaces and processes that achieve some of what the IGF does in terms of its openness and bottom-up spaces. And we are seeing that kind of reflection around some of the AI governance developments, which look positively at spaces such as the IGF and some of the Internet governance approaches that have been taken over the last nearly two decades. We do need to try to understand the limits and the actors that are shaping these ecosystems. So, in that respect, I do believe that keeping up this effort, despite the less positive and sometimes less vibrant mood that we may have towards these processes, is very, very relevant, in line with what Thomas was mentioning concerning cooperation, with trying to get to some kind of mutual understanding. I also think that the idea of working together is related to the third part of this intervention, the prompt that you raised, Cedric, concerning geopolitics, because we are in a different time and moment concerning globalization. Geopolitics today is unfolding differently from how it unfolded in the early 2000s or late 90s. States are certainly extremely important, but many of these new technological developments, as in the past, are also being shaped and taken forward by the private sector.
And so, when we talk about geopolitics and address technological change and technological momentum, we also have to address the elephant in the room: how to define the scope and space for action of this private sector that has increased power. And we are seeing that momentum also shaping how we address, and have concerns about, how some of these new technologies are being developed behind closed walls and are much less open by nature than what the Internet originally was and still is. And finally, as a final observation, when we think about the development of these technologies, including the Internet: technology is never neutral. Technology is never independent of societal values. So we do have to keep that in mind when thinking about developing international processes around these new technologies. Thank you.

Cedric Sabbah:
OK, thanks, Carolina. I want to give the other panelists the opportunity to chime in. It almost seems, hearing from both of you, Thomas and Carolina (and I'm grossly oversimplifying), like you're saying we're OK: the institutions we have are in place, the world is what it is, and we'll just have to deal. Carolina, you're not agreeing, so I misunderstood. Could you just refine what I'm saying?

Carolina Aguirre:
I'm certainly not saying that we are OK. I do think that we have some interesting foundations, but the challenges ahead are enormous, particularly because, as Thomas was stating, I think, as I understood him, and correct me if I'm wrong, we are in a different moment in terms of how we address global cooperation as one of the angles on globalization. Globalization is in decline in many respects, concerning trade, concerning international dialogue. So, I do think that it is indeed an extremely challenging moment, and probably most of the processes that we are seeing concerning internationalization are really not up to the challenges that we face with the development of these technologies.

Cedric Sabbah:
OK. I'd like to give a few moments to Chris, or anyone in the room, if you want to respond to what you just heard. I think that's a prompt, isn't it, Cedric?

Chris Jones:
I think you want me to say something, Cedric, so I will. First of all, I'm delighted to be here. And, Cedric, I'm sorry you can't be with us here in person, but I'm really happy to see you safe, albeit on a screen. So, you know, best of luck with everything that's going on. I agree with what both of my co-panelists have just said. Geopolitics is a messy business, particularly right now. But I think there's an opportunity here to focus on the areas where we agree, not on the areas where we disagree. Too often, and I'm sort of stealing my remarks from later, I feel we start with too big a picture, so we try to do too much in one go. I'm an engineer, and my natural tendency is to break things into the smallest possible components I can, because I've got a very small brain. That means I can understand them, I can fix them, I can make them work. And I think there are some parallels here for how we work in our multilateral and international organizations in addressing some of these challenges.

Cedric Sabbah:
Okay. I think there's a lot to unpack in everything, but we'll have the opportunity to continue to delve in. So, I'd like to go now into a little bit of uncharted territory. We heard in a few panels over the last few days the idea of agile governance and sandboxes in domestic regulation as a way to smartly regulate AI. And what I want to ask is whether this idea can be useful for global governance as well. Are international organizations capable of being agile? Or is this concept completely antithetical to the way they're meant to operate? When we talk about bottom-up regulation, the underlying idea generally is that, rather than top-down, where a central institution promotes and implements processes for its constituents, in bottom-up we empower the constituents to deal with the issues based on their concrete needs from the ground. We see the good in everyone's contribution. So, can bottom-up and multi-stakeholder processes contribute to the quality of global governance mechanisms? And if so, how? Here are some practical examples of bottom-up approaches to consider, and I invite you to address any one of these, all of these, or maybe something else. One example, already done to a large extent by the OECD, is fostering policy experimentation by allowing exchanges of views: setting up a tech policy lab for international information sharing. Another is fostering experimentation by states, by allowing for a space in which states can succeed and fail, and then learning collectively from the successes and failures. Another is integrating into the bottom-up approach other stakeholders that are not traditionally in the conversation; one example that comes to mind from our experience with AI in Israel is small and medium enterprises. And also maybe encouraging rule-making by specialized networks.
So, instead of having, for example, the large generalist organizations that deal with the big issues, having networks of, for example, privacy regulators or cybersecurity regulators or AI regulators in the future to deal with things on their own. So, I’ll ask Galia and Chris, I’m turning to you as well again. I think each of you have unique viewpoints that you can share, so I’ll ask you to go first.

Chris Jones:
Yeah, thank you. So, I'll go first just because I've been asked to. First of all, I'm interested in these songs, and I really hope people in the audience are doing better than I am, because I have no clue. When Cedric first suggested it, I thought Ambassador Schneider was actually going to play them all, which would be amazing. Look, I think it's a bit of a loaded question, at an event organized by a large international organization, to ask whether they can be agile, because that could be quite a dangerous place to go. But I do think they can. I do think large organizations can be agile, but not in the way that we're currently organized and the way that we operate. I think there are some parallels we can take from agile software development, where we define small chunks of activity. We don't define the order in which we deliver them; we just define what they are. And the plan is always to get better: to incrementally deliver more, rather than trying to deliver everything in one go. And I think there's a parallel there for how we work internationally. That's what we're trying to do with the UK-hosted AI Safety Summit. We can't do all of AI; on the 3rd of November, AI is not going to be solved. But what we can do is focus on a very narrow slice and get some broad international agreement. And I think there's something we can do there. The second thing I wanted to talk about was different types of governance. I think we always tend to focus on values first; we try to agree on the values we want to see. And this, I think, comes back to the geopolitics. I don't think we will ever agree on a common set of values. Different countries are different countries for a reason: we have different national identities, we have different things that are important to us, and we have to embrace that diversity. But that doesn't mean there aren't some common values we can agree on.
So, I think we absolutely should focus on that. But there's another type of governance: technical governance, the things that we need to have in place in order to be able to interoperate, to talk, to work together. And it's often easier to focus on those, because we can get the engineers focusing on the really practical details of what it takes. I think there's a difference between how and what, and very often we focus on the what, whereas what's really important is the how. I'll give you the example of the UK's online harms legislation. That has taken us six years, and we're nearly there. But even when we get there, you could never pick that legislation up and give it to another country; it just wouldn't work. What would work is the process of how we got there. There are some key things that you need to do to be able to develop that type of legislation. You need to define what constitutes a vulnerable group. There will be some common themes; children, I think everybody agrees, are a vulnerable group. But minority groups will be different in different countries. So, sharing the process, the how, I think is important for bringing these things together. Cedric, you talked about multi-stakeholderism. I think that is critical. All governance needs to be multi-stakeholder, because nobody has all the answers. Governments certainly don't have the technical expertise; technology companies don't have the legislative expertise; and neither really understands the impact on citizens the way civil society organizations do. I think the IGF is a great example of how you bring that multi-stakeholder community together. I mean, look at the range of organizations here. Whether you're the boss of a telecoms company or a Ministry of Foreign Affairs official like me, we couldn't be more different, but we're all here talking about common issues.
And then finally, Cedric, you wanted an example of bottom-up and where this has worked. I really like the example of the airline industry, where there was a need to work together and agree on common standards. We needed to fly planes from one country to another, so we needed a way to share data and a way to build planes that could fly into different territories. And that really forced people, from a bottom-up perspective, to work together. And I wonder what the parallel might be for artificial intelligence, or quantum, or, dare I say, human rights. So, thank you. I'll hand over to my colleagues.

Gallia Daor:
Thank you, Cedric. I don't know if you want to respond to that first or... No? Go ahead, Gallia. So, thanks for this. I do love how all your examples go: "I'm an engineer, so I like to break things into little bits." Well, I'm a lawyer, so I like process. And I think that will really be part of my answer, because, yes, it's very common to think that intergovernmental organizations can't do that, that agility has nothing to do with intergovernmental organizations. But I think partly that's by design, because, and I'm speaking from the perspective of an intergovernmental organization, if we want to be accountable to our members, if we want to be transparent, if we want to have multi-stakeholder consultations, if we want to be evidence-based, if we want to be thorough, it's hard to also be fast. And if we want to maintain our credibility, so that stakeholders actually want to come and engage with us, because stakeholders have limited capacity and limited time, and they will only come if the conversation is worth it, then we also need to make sure that we uphold these standards. Nonetheless, the world is changing and things are happening, and in the technology area in particular, things are happening very fast. So, we can't just stick to the way we did things 60 years ago, when the OECD was established, for example. Cedric, you mentioned playing catch-up with technology, or trying to be more anticipatory and planning ahead, and I think we're moving there. I can give a couple of examples from the OECD's perspective of where I think we've tried both to be agile and to have this multi-stakeholder, bottom-up approach. One example that you mentioned briefly earlier is the OECD AI Principles, which were adopted in 2019 and were the first intergovernmental standard on AI.
So, one thing to say about that is that it was the fastest process ever at the OECD to develop a recommendation. We did it in basically one year, which sounds like a lot but really isn't for something so complex. Obviously it builds on a lot of work that had been done before, but the process itself was remarkably fast, and it was nonetheless absolutely multi-stakeholder and interdisciplinary. And I don't think we would have gotten there; I'm sure we would not have gotten there without that kind of engagement, which was essential. Also on the AI front, as part of the work to support countries and organizations in implementing these principles, we have a very extensive network of AI experts, with more than 400 experts from different stakeholder communities and different countries. And that actually helps us. It sounds like big machinery, but it actually helps us move fast, and I think it's a really helpful model, because, like Chris said, we can break it up into little bits, into little working groups that focus on different aspects. And we can also adjust. We started with one set of working groups, but we've evolved them. We now have a group that focuses on compute, which is something we didn't work on at first. We have a group that focuses on climate. We have a group that focuses on AI futures, which is sort of generative AI plus what we might see coming ahead. So, I think that's perhaps one example. And then beyond AI, which has taken up a lot of space in the discussions I've been in over the last couple of days, we are also looking at emerging technologies, and here I'm also looking at my colleague Elizabeth.
At the OECD, we created the Global Forum on Technology about a year ago, with a lot of support from the UK, as a global venue for dialogue on emerging technologies and for anticipating and preparing for the opportunities and challenges they might bring. And I was looking at Elizabeth because she's actually leading this project. It's multi-stakeholder by design, but it also lets us try to move relatively quickly on these different technologies: quantum, for example, or immersive technology. So, that's not to say everything is perfect, to your question, but I think there are ways to try to address some of these by-design challenges in how international organizations are built.

Cedric Sabbah:
Okay, thanks, Gallia. Here too, I'd like to invite maybe Sheetal, who hasn't spoken yet, as well as Thomas, Carolina, Alžběta, whoever would like to add their two cents on this agility question. Can you? Yes, okay, great.

Sheetal Kumar:
Thank you first for having me here. Let me start with the session description and all of the technologies listed there: the emergence of new tech like quantum-related developments, metaverse platforms, nanotech, human-machine interfaces. It all sounds like going to a theme park and maybe having a great time. But actually, for a lot of people, this future could be a very difficult one. For people who are already marginalized, for women, it's not necessarily going to be a good future just because the technology is different or faster or more complex. So, as I think Carolina was saying, technology is never neutral. And what we can do about that is ensure that its development, and indeed its governance, is more inclusive. We can't predict the future; I don't think any of us would claim to do that. But what I think I can say with some certainty is that there are going to be 24 hours in every day in the future, unless something changes. So, that's really a point about resourcing, right? If we have 24 hours a day, we sleep for about eight hours, ideally. The rest of the time, what do we do with it? We work, we try to shape this world that we're in. And what I would say is that there are spaces already where we're doing that work, and they can be improved. As I think Chris was saying, we can work with what we have and make things better incrementally. What does that mean for the multi-stakeholder spaces where these discussions are happening? I think it means improving those spaces, making them more open; where standards are being developed, making those more diverse; strengthening the IGF, for example, and connecting the discussions that happen here with the discussions that happen elsewhere, in multilateral spaces. So, to give an example from the IGF, because we're here, and I presume we all care about the IGF, that's why we're here: I've been involved in some of the intersessional discussions at the IGF.
And what I think is a good example, or actually an okay example, because I think it could have been better, is the Best Practice Forum on Cybersecurity. The UN is having discussions about how to ensure that states behave responsibly in cyberspace. They've developed norms and are continuing these discussions, and how to implement the norms has been an ongoing question. So the Best Practice Forum over the last few years has been taking the norms and analyzing large cyber incidents that we're all familiar with, assessing how those have impacted people, like first responders and people on the ground, to inform that implementation. These are multi-stakeholder working groups, or intersessionals, and we have had governments and others involved, particularly with the Policy Network on Internet Fragmentation, actually. It would be great, I think, if governments and industry and other stakeholders and civil society prioritized having, in their portfolios, time to engage with these forums and to bring their feedback, because we have to connect these spaces through people; we don't have to connect them with some novel technology. That way we can strengthen and empower our spaces to be more diverse and more inclusive. That also goes for opening up multilateral spaces, through consultations, through engagement, and through modalities that really allow for meaningful inclusion. So, my final point is that future-proofing doesn't have to be high-tech. It can actually be quite basic, quite simple. Of course, I'm not saying that using generative AI to help you with your reports wouldn't be a good idea, but it doesn't always have to be that way. I think there are some basic things that we haven't done and that we need to do better, and those are some examples which I hope help. Thank you.

Cedric Sabbah:
Does anybody else want to say something about this concept of agility? I'm not seeing anyone. So, okay, we'll try to package everything a little bit later. Now, before I move to the next slide: it was pointed out to me by the person who chose the song from Rage Against the Machine, and I won't disclose who it is, that I made a mistake in the title of the song. The song is called Take the Power Back, so I'll have to change the image later. Anyway, keep that in mind. We're moving now to the next and, I guess, final theme for today. I think it makes sense to say, and you've all kind of hinted at these concepts before, Sheetal and also Gallia, all of you who've spoken about multistakeholderism, that agile governance, if it's this kind of theme that we're trying to enshrine in the way international organizations work, doesn't operate in an absolute vacuum. There should be, I guess you could say, a subtle line between agility and anarchy, between experimentation and a free-for-all. So the question that needs to be asked is: are there any universal principles of global tech governance that should be promoted across the board? Here in the image, I connected the song Born to Run by Bruce Springsteen, because it includes the line "I want to guard your dreams and visions", which I think is a nice metaphor for the idea of responsible innovation. We have all these common buzzwords that have served us well so far in internet governance and AI governance: multistakeholderism, interoperability, human rights that apply offline apply online, trustworthy, human-centric. Do you think these concepts remain relevant for all other technologies, such as immersive technology, human-machine interfaces, all the quantums? Or do you think they all apply, but apply differently?
Or do you think we might need to come up with new concepts and frameworks that enable us to grapple with the new challenges? Also, a lot of the issues are cross-cutting. So while we say we don't want fragmentation, we actually see a fragmentation of processes within the UN: there's the ITU, UNESCO, the Human Rights Council, WIPO. And outside of the UN we have the OECD, the Council of Europe, the EU, of course, which is a major player, and then topic-specific initiatives like GPAI, like the AI Safety Summit that Chris mentioned earlier. So is this fragmentation of efforts, in your opinion, a feature or a bug? I'd like to ask Sheetal first to address this question. Any universal principles? Should we be aiming for fragmentation, allowing for fragmentation? What do you think?

Sheetal Kumar:
Thank you for those questions. I think there's something semantic sometimes when we talk about this topic. If fragmentation means diversity, then great. If it means, for example, normative efforts that are all aligning and reinforcing common principles, then great. If it means duplication, and, as I said, we have limited resources, so if we're going to different places trying to do the same thing, but actually spending our time developing different, competing frameworks, then no, it's not great. And there is a risk of that if we don't coordinate and collaborate on some of these emerging issues. There is a lot happening at the moment, as we heard earlier, around how to govern AI, but at least, and I know this is something people have felt fatigued about at this IGF, at least we have a space where we're coming together and hearing about what everyone else is doing. We can try to make those connections and ensure these deliberative spaces and decision-making spaces are inclusive. So, I guess my answer to you, Cedric, is that it's not necessarily a bad thing to have various processes at play, as long as they coordinate and they're inclusive. I also wanted to come back to what I mentioned earlier about the importance of connecting an open and inclusive deliberative space like the UN's IGF, which is so unique. We also need to remember that the IGF is not just this annual event; it is the intersessionals, it is the hundreds of national and regional IGFs that happen every year and provide these spaces for people to come together, and it is very unique in that way. This is something we need to preserve, and if we try to create something else that is exactly like it, that is a problem. But the leadership panel, and I know we have a member of it here, is very important for creating these connections with those who can take on messages and connect to other spaces.
So I think what's really important is that we ensure that, when we're governing these new technologies and building the processes for them, they're truly inclusive by design. We have endless tools and ways to do that; we know how to do it; we need to do it. And I would say that it's kind of old governance, or old tech for new tech, perhaps. It's not that complex to ensure that information is shared in a timely manner, that information is clear, that it can be accessed by a range of different people, and that they're invited to the table. And of course new technologies can also be deployed to support that. So hopefully we can turn our minds to actually operationalizing what we already have, and use good examples such as those we've heard before, to ensure that when we're confronted with these new challenges, the principles you asked me about, the principles of transparency and engagement, of openness, of maintaining users' and people's autonomy, and of preserving openness, are all enshrined and preserved as we face the new challenges that we do.

Cedric Sabbah:
Maybe I’ll, does somebody else want to take the mic?

Gallia Daor:
Hang on, yeah, sorry. No, I was just thinking, as Sheetal was speaking, and also about your questions: one of the things that we've been thinking about at the OECD, but I'm sure in other places too, is really the gap between the fairly high-level principles and their implementation. You asked, Cedric, do we think that trustworthy and responsible and so on are relevant? I think yes, absolutely. And I think they are relevant to, I don't know about all technologies in the world, but in principle, yes. We want technologies to be trustworthy. We want them, and their development, to be responsible. We want accountability. We want the process to be inclusive. And obviously we want alignment with human rights where there's a potential risk to human rights. And that's also to Chris's point earlier: these are the core values that we have to agree on, as we already did. So yes, at the high level. But then the question is, okay, what do you do with that? And that's where I think sometimes there will be differences between technologies. We had an AI discussion earlier, and one of the points raised is about data: how important data is in the context of AI, and the issues of unrepresentative data and bias. These are things that are perhaps specific to AI and might not be the case with a different technology. So we need to be aware that when you implement the high-level principles for a specific technology, that's where you'll have the differences. And I think that's related to the governance question, because that's where perhaps you would split things up, break them into little bits, because that's where you really need the expertise, and that's where you might need processes happening in different places.
So, just a thought, I don't know.

Sheetal Kumar:
Could I just add something very quickly on that? It's, I think, exactly what was said about the need to integrate these high-level principles in various ways. We are now seeing that all these technologies we're using are impacting so many aspects of our lives, in a way that, I think, requires us to turn to what we have already agreed on. And what we have agreed on is the international human rights framework. That is a ready-made, already-agreed framework that we can embed throughout the supply chain of these technologies, through standards. And there are means and tools to do that. So I think that's also very important. Sorry to plug my session tomorrow, but the OHCHR is co-hosting a session with us tomorrow on their report on technical standard-setting and human rights. It is really an opportunity, I think, as these technologies evolve, to ensure that we build them so that we have a rights-respecting world where everyone benefits from them. And in that sense, it is quite an exciting theme park then, I think. If I may hook in, Cedric, this is Thomas.

Thomas Schneider:
Something that always strikes me, when you talk about how things need to evolve, is that while technologies evolve, and institutions will somehow follow, human beings themselves are fairly stable over long periods of time in the way they function. I often compare AI to engines, as something that has differences but also many similarities in the way it is disruptive. Engines were put in machines that either moved something from A to B much faster than men or horses or cows could, or in machines that produced something, be it food, be it goods, whatever. And it's similar with AI, which is used either to generate content or put content together in new ways, or to replace not physical human labor, but cognitive human labor. There are fewer animals that you can replace, because animals seem to have less cognitive capability, so it's manpower, cognitive manpower. And if you look at the reactions of people to engines being used in different contexts, and this is the point I'm trying to make: in Switzerland, near Zurich where I live, in 1833, a group of home weavers and small and medium enterprise weavers burned down a textile factory after the government had decided not to ban such factories from emerging, which is what the weavers demanded. They just burned it down because they were afraid of losing their jobs. And some of them did lose their jobs. Of course, history has since shown that industrialization actually created more new jobs than it killed. So the fear of losing one's job is something that we've seen before. Then ignorance is another reaction. The last German Kaiser, Wilhelm II, used to say, somewhere in the early 20th century: I don't believe in the automobile; it has no future. I trust in the horse. And there are people today that say, well, this is not really going to change much, everything will stay the same. Not necessarily.
And then the other reaction is banning things. In Graubünden, the region in Switzerland that has touristic places like the Warth and St. Moritz, the government banned cars from the whole territory of the region in 1900. And only 25 years later, in 1925, did they allow cars, through a popular vote, because the people thought, well, actually, we want to use them. And then the question is whether the people, or the government, in Graubünden were more environmentally friendly, or whatever. Probably not. Maybe the horse tourism industry, whatever there was, was just better organized in that region, and that made them ban cars for 25 years. So we have the same reactions to new technologies, and we will probably have the same reactions in building a network of norms, technical, legal, but also cultural norms, on how to use, not engines, but AI in different contexts, with different levels of harmonization. In the airline business, you have much higher harmonization than in car infrastructure and rules on cars, but you do have technical, legal, and also cultural ways of organizing things. And the same will happen with AI. And the same needs to happen with the institutional arrangements for how we take these decisions. Wolfgang Kleinwächter and others already used this framing 10 years ago: we are trying to solve the problems of the 21st century with the institutional arrangements that we made in the 19th century, which in many countries coincided with the industrial revolution, when you had kings and kaisers and not really democratic systems. And then, more or less in line with the industrial revolution, came the introduction of parliaments and of the division of power between the legislative, the executive, and the court system. So there, too, technology has an influence not just on daily lives, but also on the institutional setting.
And the notion of multi-stakeholder, I don't think it will go away, because we will have to organize ourselves differently now that technology is dematerializing. Maybe rule-making should also dematerialize from the purely physical logic of: I live in this country, so the rule is made in this country, for this country. If people move around and everything moves around, fixing rules physically just because you happen to be somewhere, or even worse, because you happen to have been born somewhere and hold that citizenship, so that you can only decide about the rules where you were born and not where you actually live, may not make so much sense. So we may have to develop a new way of dividing power, not along geographical political borders, but through more sophisticated stakeholder-based, situation-based, or voluntary group-based schemes that are more representative of the people than classical 19th-century parliaments. Thank you.

Cedric Sabbah:
Thank you so much, Thomas. It's amazing to me how, sometimes, to think about the future it helps to look at history. I would now like to turn to my dear friend Alžběta Krausová to try to package this for us. We don't have a lot of time left, so I think we'll skip the Q&A. So, Alžběta, you've been attentively listening to our panelists. I know you've been involved with human-machine interfaces in the past and now, of course, AI. Can you share with us, in your view, some takeaways, some overarching thoughts, action items, areas for future research that you think we should be focusing on? Over to you.

Alžběta Krausová:
Thank you, Cedric. And thank you for organizing the panel despite the situation. My heart goes out to Israel. Let me now quickly share my observations, because we don't have much time. I made thorough notes, and I have to say that all the panelists followed up on each other very nicely, so I will try to summarize the key messages from each of them. From Thomas: the disruption is too big now, and we need to change the way we look at things, which resonates with me very much; I will say why at the end of my remarks. Carolina said that we really need to define the scope of action right now, and that the private sector is gaining power, which we need to focus on. Chris focused on finding common values and sharing the how. Gallia spoke about the importance of multidisciplinarity and involving stakeholders. Sheetal spoke about diversity and about the importance of a space where we come together and operationalize what we already have. Those are all good action points, and they come together very nicely, responding to the questions you raised about disruption, agility, and common principles. Now, to my personal observations. I think the convergence of technologies that we are facing now, which Thomas mentioned at the beginning, is the biggest problem, and it is what we really need to focus on. In my personal opinion, we are crossing a border, because with technologies like brain-computer interfaces, when we are able to peek inside a human brain, we are able to cross the physical border of the human body and intrude on the privacy of our minds, and, connected with AI, to read the mind and even influence people. We really need to ask the main question now: what world do we want to live in? That is crucial. We need to define where we are headed.
And it's the place for international organizations to steer the development, to steer it in a way that is thoroughly discussed. Yes, there is a cost in time, but we really need to go deep and give it the time, attention, and thought to see where we want to go. We need to agree on how we are going to operationalize the principles that we already have. Those principles need to be applied to new situations, as was already mentioned. We do have common values like human life and physical and mental integrity; we need to consider them in new ways and see what they mean in different scenarios. That is also why the bottom-up approach is so important: we need to see, case by case, what is happening, not just theorize about what might happen, and react as quickly as possible while balancing that with thorough discussion. And since you said we should suggest some parts of a song in our final remarks, I would say I feel like walking the world. For me that means we should get to know each other better and better, and understand each other not just rationally but also emotionally. We should not just meet in one place; we should travel, see each other, and understand each other on the human level, the complete package. Maybe this is too general an observation, but it is my position. Thank you.

Cedric Sabbah:
Thank you so much, Alžběta. I wasn't planning this, but it occurred to me as we went along: I love the idea of taking something, breaking it up, deconstructing it and then reconstructing it. There was a recurring concept here: we want a lot of these principles and concepts, but we might have to rethink how we do certain things. That's not to say an absolute revolution is necessary, but rather a recalibration so that we can adapt better for the future. I know the time is up, and I would love this session to continue for another few hours and hear what everybody has to say, but unfortunately we have to stop now. I think I speak for everyone here in saying we learned a lot. I want to thank the panelists, especially Carolina, for whom I think it's quite early in the morning. Your interventions provide the foundations for some kind of follow-up, maybe at next year's IGF or elsewhere. Last thing: if you were attentive and think you can guess who picked which song, let us know. Thanks to everybody in the audience in Kyoto and also virtually on Zoom and YouTube, and enjoy the last day of the IGF. Thanks so much. Thank you, Cedric, and all the best. Thank you, everyone.

Alžběta Krausová

Speech speed: 149 words per minute
Speech length: 681 words
Speech time: 275 secs

Carolina Aguirre

Speech speed: 120 words per minute
Speech length: 625 words
Speech time: 312 secs

Cedric Sabbah

Speech speed: 163 words per minute
Speech length: 2936 words
Speech time: 1081 secs

Chris Jones

Speech speed: 209 words per minute
Speech length: 1148 words
Speech time: 330 secs

Gallia Daor

Speech speed: 182 words per minute
Speech length: 1299 words
Speech time: 428 secs

Sheetal Kumar

Speech speed: 187 words per minute
Speech length: 1568 words
Speech time: 502 secs

Thomas Schneider

Speech speed: 172 words per minute
Speech length: 1931 words
Speech time: 673 secs