Main Session on Artificial Intelligence | IGF 2023

10 Oct 2023 06:15h - 07:45h UTC

Event report

Speakers

  • Arisa Ema, Associate Professor, Institute for Future Initiatives, The University of Tokyo
  • Clara Neppel, Senior Director, IEEE European Business Operations
  • James Hairston, Head of International Policy and Partnerships, OpenAI
  • Seth Center, Deputy Envoy for Critical and Emerging Technology, U.S. Department of State
  • Thobekile Matimbe, Senior Manager, Partnerships and Engagements, Paradigm Initiative


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator 2 – Christian Guillen

During the discussion, the speakers focused on various aspects of AI regulation and governance. One important point that was emphasized is the need for AI regulation to be inclusive and child-centred. This means that any regulations and governance frameworks should take into account the needs and rights of children. It is crucial to ensure that children are protected and their best interests are considered when it comes to AI technologies.

Furthermore, the audience was encouraged to actively engage in the discussion by asking questions about AI and governance. This shows the importance of public participation and the involvement of various stakeholders in shaping AI policies and regulations. Encouraging questions and dialogue allows for a more inclusive and democratic approach to AI governance.

The potential application of generative AI in the educational system of developing countries, such as Afghanistan, was also explored. Generative AI has the potential to revolutionise education by providing innovative and tailored learning experiences for students. This could be particularly beneficial for developing countries where access to quality education is often a challenge.

Challenges regarding accountability in AI were brought to attention as well. It was highlighted that AI is still not fully understood, and this lack of understanding poses challenges in ensuring accountability for AI systems and their outcomes. The ethical implications of AI making decisions based on non-human generated data were also discussed, raising concerns about the biases and fairness of such decision-making processes.

Another significant concern expressed during the discussion was the need for a plan to prevent AI from getting out of control. As AI technologies advance rapidly, there is a risk of AI systems surpassing human control and potentially causing unintended consequences. It is important to establish robust mechanisms to ensure that AI remains within ethical boundaries and aligns with human values.

The importance of a multi-stakeholder approach in AI development and regulation was stressed. This means involving various stakeholders, including industry experts, policymakers and the public, in the decision-making process. By considering different perspectives and involving all stakeholders, AI regulation is more likely to be inclusive and effective.

Lastly, the idea of incorporating AI technology in the development of government regulatory systems was proposed. This suggests using AI to enhance and streamline the processes of government regulation. By leveraging AI technology, regulatory systems can become more efficient, transparent and capable of addressing emerging challenges in a rapidly changing technological landscape.

Overall, the discussion highlighted the importance of inclusive and child-centred AI regulation and the need for active public participation. It explored the potential of generative AI in education, while also addressing various challenges and concerns related to accountability, ethics and control of AI. The multi-stakeholder approach and the incorporation of AI technology in government regulations were also emphasised as key considerations for effective and responsible AI governance.

Clara Neppel

During the discussion on responsible AI governance, the importance of technical standards in supporting effective and responsible AI governance was emphasised. It was noted that IEEE initiated the Ethically Aligned Design initiative, which aimed to develop socio-technical standards, value-based design, and an ethical certification system. Collaboration between IEEE and bodies such as the Council of Europe and the OECD was also mentioned as a way to ensure the alignment of technical standards with responsible AI governance.

The implementation of responsible AI governance was seen as a combination of top-down (regulatory frameworks) and bottom-up (individual level) approaches. Engagement with organizations like the Council of Europe, EU, and OECD for regulation was considered crucial. Efforts to map regulatory requirements to technical standards were also highlighted to bridge the gap between regulatory frameworks and responsible AI governance.

Capacity building in technical expertise and understanding of social and legal matters was recognised as a key aspect of responsible AI implementation. The necessity of competency frameworks defining the skills necessary for AI implementation was emphasised. Collaboration with certification bodies to develop an ecosystem that supports capacity building was also mentioned.

Efforts to protect vulnerable communities online were a key focus. Examples were given, such as the LEGO Group implementing measures to protect children in their online and virtual environments. Regulatory frameworks like the UK Children’s Act were also highlighted as measures taken to protect vulnerable communities online.

The discussion acknowledged that voluntary standards for AI can be effective and adopted by a wide range of actors. Examples were provided, such as UNICEF using IEEE’s value-based design approach for a talent-searching system in Africa. The City of Vienna was mentioned as a pilot project for IEEE’s AI certification, illustrating the potential for voluntary standards to drive responsible AI governance.

In terms of incentives for adopting voluntary standards, they were seen to vary. Some incentives mentioned include trust in services, regulatory compliance, risk minimisation, and the potential for a better value proposition. However, the discussion acknowledged that self-regulatory measures have limitations, and there is a need for democratically-decided boundaries in responsible AI governance.

Cooperation, feedback mechanisms, standardized reporting, and benchmarking/testing facilities were identified as key factors in achieving global governance of AI. These mechanisms were viewed as necessary for ensuring transparency, accountability, and consistency in the implementation of responsible AI governance.

The importance of global regulation or governance of AI was strongly emphasised. It was compared to the widespread usage of electricity, suggesting that AI usage is similarly pervasive and requires global standards and regulations for responsible implementation.

The need for transparency in understanding AI usage was highlighted. The discussion stressed the importance of clarity regarding how AI is used, incidents it may cause, the data sets involved, and the usage of synthetic data.

While private efforts in AI were recognised, it was emphasised that they should be made more trustworthy and open. Current private efforts were described as voluntary and often closed, underscoring the need for greater transparency and accountability in the private sector’s contribution to responsible AI governance.

The discussion also touched upon the importance of agility when it comes to generative AI. It was suggested that the governance of generative AI, at both organisational and global levels, should be agile enough to adapt to the evolving landscape of responsible AI governance.

Feedback mechanisms were highlighted as essential for the successful development of foundational models. The discussion emphasised that feedback at all levels is necessary to continuously improve foundational models and align them with responsible AI governance.

High-risk AI applications were identified as needing conformity assessments by independent organizations. This was seen as a way to ensure that these applications meet the necessary ethical and responsible standards.

The comparison of AI with the International Atomic Energy Agency was mentioned but deemed difficult due to the various uses and applications of AI. The discussion acknowledged that AI has vast potential in different domains, making it challenging to compare directly with an established institution like the International Atomic Energy Agency.

Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies that act as infrastructure. This proposal was supported by one of the speakers, Clara, and was seen as a way to enhance responsible governance and decision-making regarding crucial technological developments.

In conclusion, the discussion on responsible AI governance highlighted the significance of technical standards, the need for a combination of top-down and bottom-up approaches, capacity building, protection of vulnerable communities, the effectiveness of voluntary standards, incentives for adoption, the limitations of self-regulatory measures, the role of cooperation and feedback mechanisms in achieving global governance, the importance of transparency and global regulation, the agility of generative AI, and the importance of conformity assessments for high-risk AI applications. Additionally, the proposal for an independent multi-stakeholder panel for crucial technologies was seen as a way to enhance responsible governance.

James Hairston

OpenAI is committed to promoting the safety of AI through collaboration with various stakeholders. They acknowledge the significance of the public sector, civil society, and academia in ensuring the safety of AI and support their work in this regard. OpenAI also recognizes the need to understand the capabilities of new AI technologies and address any unforeseen harms that may arise from their use. They strive to improve their AI tools through an iterative approach, constantly learning and making necessary improvements.

In addition to the public sector and civil society, OpenAI emphasizes the role of the private sector in capacity building for research teams. They work towards building the research capacity of civil society and human rights organizations, realizing the importance of diverse perspectives in addressing AI-related issues.

OpenAI highlights the importance of standardized language and concrete definitions in AI conversations. By promoting a common understanding of AI tools, they aim to facilitate effective and meaningful discussions around their development and use.

The safety of technology use by vulnerable groups is a priority for OpenAI. They stress the need for research-based safety measures, leveraging the expertise of child safety experts and similar institutions. OpenAI recognizes that understanding usage patterns and how different groups interact with technology is crucial in formulating effective safety measures.

The protection of labor involved in the production of AI is a significant concern for OpenAI. They emphasize the need for proper compensation and prompt action against any abuses or harms. OpenAI calls for vigilance to ensure fairness and justice in AI, highlighting the role of companies and monitoring groups in preventing abusive work conditions.

Jurisdictional challenges pose a unique obstacle in AI governance discussions. OpenAI acknowledges the complexity arising from different regulatory frameworks in different jurisdictions. They stress the importance of considering the local context and values in AI system regulation and response.

OpenAI believes in the importance of safety and security testing in different regions to ensure optimal AI performance. They have launched the Red Teaming Network, inviting submissions from various countries, regions, and sectors. By encouraging diverse perspectives and inputs, OpenAI aims to enhance the safety and security of AI systems.

International institutions like the Internet Governance Forum (IGF) play a crucial role in harmonizing discussions about AI regulation and governance. OpenAI recognizes the contributions of such institutions in defining benchmarks and monitoring progress in AI regulations.

While formulating new standards for AI, OpenAI advocates for building on existing conventions, treaties, and areas of law. They believe that these established frameworks should serve as the foundation for developing comprehensive standards for AI usage and safety.

OpenAI is committed to contributing to discussions and future regulations of AI. They are actively involved in various initiatives and encourage collaboration to address challenges and shape the future of AI in a responsible and safe manner.

In terms of emergency response, OpenAI has an emergency shutdown procedure in place for specific dangerous scenarios. This demonstrates their commitment to safety protocols and risk management. They also leverage geographical cutoffs to deal with imminent threats.

OpenAI emphasizes the importance of human involvement in the development and testing of AI systems. They recognize the value of human-in-the-loop approaches, including the role of humans in red teaming processes and ensuring auditability in AI systems.

To address the issue of AI bias, OpenAI suggests the use of synthetic data sets. These data sets can help balance the under-representation of certain regions or genders and fill gaps in language or available information. OpenAI sees the potential in synthetic data sets to tackle some of the challenges associated with AI bias.
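To make the idea concrete, below is a minimal, hypothetical Python sketch of how synthetic records could be generated to rebalance a toy dataset in which one group is under-represented. This is not OpenAI's actual method or tooling; the field names and the simple jittering approach are illustrative assumptions only, and real synthetic-data pipelines would rely on generative models and careful quality and bias evaluation rather than simple perturbation.

```python
# Illustrative sketch (not OpenAI's pipeline): rebalance a toy dataset by adding
# synthetic records for under-represented groups. All names and fields are hypothetical.
import random
from collections import Counter

random.seed(0)

# A toy labelled dataset where the "south" region is under-represented.
data = (
    [{"region": "north", "feature": random.gauss(0.0, 1.0)} for _ in range(90)]
    + [{"region": "south", "feature": random.gauss(0.5, 1.0)} for _ in range(10)]
)

def synthesize(example):
    """Create a new synthetic record by slightly jittering an existing one."""
    return {"region": example["region"],
            "feature": example["feature"] + random.gauss(0.0, 0.1)}

counts = Counter(row["region"] for row in data)
target = max(counts.values())  # bring every group up to the largest group's size

augmented = list(data)
for region, count in counts.items():
    pool = [row for row in data if row["region"] == region]
    for _ in range(target - count):
        augmented.append(synthesize(random.choice(pool)))

print("before:", counts)
print("after: ", Counter(row["region"] for row in augmented))
```

The sketch only illustrates the rebalancing idea; whether such augmentation actually reduces bias depends on how faithfully the synthetic records reflect the under-represented group.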

Standards bodies, research institutions, and government security testers have a crucial role in developing and monitoring AI. OpenAI acknowledges their importance in ensuring the security and accountability of AI systems.

Public-private collaboration is instrumental in ensuring the safety of digital tools. OpenAI recognizes the significance of working on design, reporting, and research aspects to address potential harms and misuse. They emphasize understanding different communities’ interactions with these tools to develop effective safety measures.

OpenAI recognizes the need to address the harmful effects of new technologies while acknowledging their potential benefits. They emphasize the urgency to build momentum in addressing the negative impacts of emerging technologies and actively contribute to the international regulatory conversation.

In conclusion, OpenAI’s commitment to AI safety is evident through their support for the work of the public sector, civil society, and academia. They emphasize the need to understand new AI capabilities and address unanticipated harms. The private sector has a role to play in capacity building, while standardized language and definitions are crucial in AI conversations. OpenAI stresses the importance of research-based safety measures for technology use by vulnerable groups and protection of labor involved in AI production. They acknowledge the challenges posed by jurisdictional borders in AI governance discussions. OpenAI promotes safety and security testing, encourages public-private collaboration, and advocates for the involvement of humans in AI development and testing. They also highlight the potential of synthetic data sets to address AI bias. International institutions, existing conventions, and standards bodies play a significant role in shaping AI regulations, and OpenAI is actively engaged in contributing to these discussions. Overall, OpenAI’s approach emphasizes the importance of responsible and safe AI development and usage for the benefit of society.

Seth Center

AI technology is often compared to electricity in terms of its transformative power. However, unlike electricity, for which regulation took decades to emerge, there is a growing consensus that governance frameworks for AI should be established promptly. Governments, such as the US, are embracing a multi-stakeholder approach to developing AI principles and governance. The US government has secured voluntary commitments from AI companies in key areas of safety, security, and trust.

Accountability is a key focus in AI governance, with both hard law and voluntary frameworks being discussed. However, there are concerns and skepticism surrounding the effectiveness of voluntary governance frameworks in ensuring accountability. There is also doubt about the ability of principles alone to achieve accountability.

Despite these challenges, there is broad agreement on the concept of AI governance. Discussions and conversations are viewed as essential and valuable in shaping effective governance frameworks. The aim is for powerful AI developers, whether they are companies or governments, to devote attention to governing AI responsibly. The multi-stakeholder community can play a crucial role in guiding these developers towards addressing society’s greatest challenges.

Implementing safeguards in AI is seen as vital for ensuring safety and security. This includes concepts such as red teaming, strict cybersecurity, third-party audits, and public reporting, all aimed at creating accountability and trust. Developers are encouraged to focus on addressing issues like bias and discrimination in AI, aligning with the goal of using AI to tackle society’s most pressing problems.

The idea of instituting AI global governance requires patience. Drawing a comparison to the establishment of the International Atomic Energy Agency (IAEA), it is recognized that the process can take time. However, there is a need to develop scientific networks for shared risk assessments and agree on shared standards for evaluation and capabilities.

In terms of decision-making, there is a call for careful yet swift action in AI governance. Governments rely on inputs from various stakeholders, including the technical community and standard-setting bodies, to navigate the complex landscape of AI. Decision-making should not be careless, but the momentum towards establishing effective AI governance should not be slowed down.

In conclusion, while AI technology has the potential to be a transformative force, it is crucial to establish governance frameworks promptly. A multi-stakeholder approach, accountability, and the implementation of safeguards are seen as key components of effective AI governance. Discussions and conversations among stakeholders are believed to be vital in shaping AI governance frameworks. Patience is needed in institutionalizing AI global governance, but decision-making should strike a balance between caution and timely action.

Thobekile Matimbe

The analysis of the event session highlights several key points made by the speakers. First, it is noted that the Global South is actively working towards establishing regulatory frameworks for managing artificial intelligence. This demonstrates an effort to ensure that AI technologies are used responsibly and with consideration for ethical and legal implications. However, it is also pointed out that there is a lack of inclusivity in the design and application of AI on a global scale. The speakers highlight the fact that centres of power control the knowledge and design of technology, leading to inadequate representation from the Global South in discussions about AI. This lack of inclusivity raises concerns about the potential for bias and discrimination in AI systems.

The analysis also draws attention to the issues of discriminatory practices and surveillance in the Global South related to the use of AI. It is noted that surveillance targeting human rights defenders is a major concern, and there is evidence to suggest that discriminatory practices are indeed a lived reality. These concerns emphasize the need for proper oversight and safeguards to protect individuals from human rights violations arising from the use of AI.

In terms of internet governance, it is highlighted that inclusive processes and accessible platforms are essential for individuals from the Global South to be actively involved in Internet Governance Forums (IGFs). The importance of ensuring the participation of everyone, including marginalized and vulnerable groups, is emphasized as a means of achieving more equitable and inclusive internet governance.

The analysis also emphasizes the need for continued engagement with critical stakeholders and a victim-centered approach in conversations about AI and technology. This approach is necessary to address the adverse impacts of technology and ensure the promotion and protection of fundamental rights and freedoms. Furthermore, the analysis also underlines the importance of understanding global asymmetries and contexts when discussing AI and technology. Recognizing these differences can lead to more informed and effective decision-making.

Another noteworthy observation is the emphasis on the agency of individuals over their fundamental rights and freedoms. The argument is made that human beings should not cede or forfeit their rights to technology, highlighting the need for responsible and human-centered design and application of AI.

Additionally, the analysis highlights the importance of promoting children’s and women’s rights in the use of AI, as well as centring conversations around environmental rights. These aspects demonstrate the need to consider the broader societal impact of AI beyond just the technical aspects.

In conclusion, the analysis of the event session highlights the ongoing efforts of the Global South in developing regulatory frameworks for AI, but also raises concerns about the lack of inclusivity and potential for discrimination in the design and application of AI globally. The analysis emphasizes the importance of inclusive and participatory internet governance, continued engagement with stakeholders, and a victim-centered approach in conversations about AI. It also underlines the need to understand global asymmetries and contexts and calls for the promotion and protection of fundamental rights and freedoms in the use of AI.

Moderator 1 – Maria Paz Canales Lobel

In her writings, Maria Paz Canales Lobel stresses the crucial importance of shaping the digital transformation to ensure that artificial intelligence (AI) technologies serve the best interests of humanity. She argues that AI governance should be firmly rooted in the international human rights framework, advocating for the application of human rights principles to guide the regulation and oversight of AI systems.

Canales Lobel proposes a risk-based approach to AI design and development, suggesting that potential risks and harms associated with AI technologies should be carefully identified and addressed from the outset. She emphasises the need for transparency in the development and deployment of AI systems, and for holding their designers and deployers accountable for any adverse impacts or unintended consequences.

Furthermore, Canales Lobel emphasises the importance of open and inclusive design, development, and use of AI technologies. She argues that AI governance should be shaped through a multi-stakeholder conversation, involving diverse perspectives and expertise, in order to foster a holistic approach to decision-making and policy development. By including a wide range of stakeholders, she believes that the needs and concerns of vulnerable communities, such as children, can be adequately addressed in AI governance.

Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless coordination and cooperation between international and local levels. She suggests that the governance of AI should encompass not only technical standards and regulations but also voluntary guidelines and ethical considerations. She emphasizes the necessity of extending discussions beyond the confines of closed rooms and engaging people from various backgrounds and geopolitical contexts to ensure a comprehensive and inclusive approach.

In conclusion, Canales Lobel underscores the importance of responsible and ethical AI governance that places human rights and the well-being of all individuals at its core. Through her arguments for the integration of human rights principles, the adoption of a risk-based approach, and the promotion of open and inclusive design, development, and use of AI technologies, she presents a nuanced and holistic perspective on effective AI governance. Her emphasis on multi-stakeholder conversations, global collaboration, and the needs of vulnerable communities further contributes to the ongoing discourse on AI ethics and regulation.

Audience

The creation of AI involves different types of labor across the globe, each with its own set of standards and regulations. It is important to recognize that AI systems may be technological in nature, but they require significant human input during development. However, the labor involved in creating AI differs between the global south and the Western world. This suggests that there may be disparities in terms of the resources, expertise, and opportunities available for AI development in different regions.

When it comes to AI-generated disinformation, developing countries face particular challenges in countering this issue. With the rise of generative AI, which has become increasingly popular, there has been an increase in the spread of misinformation. This poses a significant challenge for developing countries, as they may not have the resources or infrastructure to effectively counter and mitigate the negative consequences of AI-generated disinformation.

On the other hand, developed economies have a responsibility to help create an inclusive digital ecosystem. While countries like Nepal are striving to enter the digital era, they face obstacles in the form of new technologies like AI. This highlights the importance of developed economies providing support and collaboration to ensure that developing countries can also benefit from and participate in the digital revolution.

In terms of regulation, there is no global consensus on how to govern AI and big data. The Internet Governance Forum (IGF) has been grappling with the issue of big data regulation for over a decade, without reaching a global agreement. Furthermore, there are differences in the approaches taken by different regions, such as the US and Europe, to deal with the data practices of their respective companies. This lack of consensus presents challenges in establishing consistent and effective regulation for AI and big data across the globe.

When it comes to policy-making, it is crucial to consider the protection of future generations, especially children, in discussions related to AI. Advocacy for children’s rights and the need to safeguard the interests of future generations have been highlighted in discussions around AI and policy-making. It is important not to overlook or underestimate the impact that AI will have on the lives of children and future generations.

It is worth noting that technical discussions should not neglect simple yet significant considerations, such as addressing the concerns of children in policy-making. These considerations can help achieve inclusive designs that take into account the diverse needs and perspectives of different groups. By incorporating the voices and interests of children, policymakers can create policies that are more equitable and beneficial for all.

In conclusion, the creation and regulation of AI present various challenges and considerations. The differing types of labor involved in AI creation, the struggle to counter AI-generated disinformation in developing countries, the need for developed economies to foster an inclusive digital ecosystem, the absence of a global consensus on regulating AI and big data, and the importance of considering the interests of children in policy-making are all crucial aspects that need to be addressed. It is essential to promote collaboration, dialogue, and comprehensive approaches to ensure that AI is developed and regulated in a manner that benefits society as a whole.

Arisa Ema

The global discussions on AI governance need to consider different models and structures used across borders. Arisa Ema suggests that transparency and interoperability are crucial elements in these discussions. This is supported by the fact that framework interoperability has been highlighted in the G7 communique, and different countries have their own policies for AI evaluation.

When it comes to risk-based assessments, it is important to consider various aspects and application areas. For example, the level of risk varies across usage scenarios, such as the use of facial recognition systems at airports versus building entrances. Arisa Ema highlights the need to consider who is using AI, who is benefiting from it, and who is at risk.

Inclusivity is another important aspect of AI governance discussions. Arisa Ema urges the inclusion of physically challenged individuals in forums such as the Internet Governance Forum (IGF). She mentions an example of organizing a session where a person in a wheelchair participated remotely using avatar robots. This highlights the potential of technology to include those who may not be able to physically attend sessions.

Arisa Ema also emphasizes the importance of a human-centric approach in AI discussions. She believes that humans are adaptable and resilient, and they play a key role in AI systems. A human-centric approach ensures that AI benefits humanity and aligns with our values and needs.

Furthermore, Arisa Ema sees AI governance as a shared topic of discussion among technologists, policymakers, and the public. She uses democratic principles to stress her stance, emphasizing the importance of involving all stakeholders in shaping AI governance policies and frameworks.

The discussion on AI governance is an ongoing process, according to Arisa Ema. She believes that it is not the end but rather a starting point for exchanges and discussions. It is important to have a shared philosophy or concept in AI governance to foster collaboration and a common understanding among stakeholders.

Overall, the discussion highlighted the need for transparency, interoperability, risk-based assessments, inclusivity, a human-centric approach, and a shared governance framework in AI discussions. Arisa Ema’s insights and arguments provide valuable perspectives on these important aspects of AI governance.

Session transcript

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Maria, for the opportunity to be here with you today. Good afternoon, everyone. My name is Maria Paz Canales. I'm the head of legal, policy and research at Global Partners Digital, a civil society organisation that works on issues related to technology governance. I'm very pleased to be here and to have the pleasure of being the moderator of this session, which is about the digital transformation, and the digital transformation that we want. So I have the honour to have a distinguished panel of speakers here to enlighten this conversation. I will start by introducing them, then I will cover some logistics for the unfolding of the session, and we will enter into the substantive discussion. So, first of all, I would like to introduce Dr Arisa Ema, who is an associate professor at the University of Tokyo and a visiting researcher at the RIKEN Centre for Advanced Intelligence Project in Japan. We have with us Dr Clara Neppel. Dr Neppel is the senior director of IEEE European Business Operations, headquartered in Vienna, and head of the IEEE Technology Centre for Climate. We have with us James Hairston, who is the Head of International Policy and Partnerships at OpenAI. Thank you very much. We have Dr Seth Center, who is the Deputy Envoy for Critical and Emerging Technology at the U.S. Department of State; his previous government service includes serving as a member of the State Department's Policy Planning Staff, where he helped develop the department's work on cyberspace, and as a director at the National Security Commission on Artificial Intelligence, where he led the writing of the commission's final report. Finally, but very importantly, representing civil society, we have Ms Thobekile Matimbe, who is a human rights lawyer, researcher and social justice activist from Zimbabwe, serving at Paradigm Initiative as a senior manager of partnerships and engagements. And finally, in terms of how we will organise the session, we will pose some policy questions that have been at the centre of the design of the session to the distinguished speakers. We will have two rounds of questions; in the first section, each panelist will intervene for five minutes, and then we will open the floor here in the room for ten minutes. So for that, I ask you to line up in front of the microphone if you want to put some questions to the speakers, and for our remote participants, please let our remote moderator, Christian Guillen, who is also part of the panel, know if you have any questions you would like him to put to the speakers during the session. So with that being said, I will move to setting the scene a little bit for this conversation today. In that sense, I would like to highlight a couple of things that for me are really relevant in the conversation today.
The first thing with which I want to be a bit provocative, in terms of setting the scene of this session, is to think about how, during the last year, we have been hearing so much about artificial intelligence in our daily life. People who were not connected at all with the theme of artificial intelligence, maybe not even familiar with the name of the technology, are now interested in knowing more about how this technology functions and how we will take care of ensuring that the technology is at the service of the exercise of rights and the daily life of anyone around the world. So this is the challenge that has been posed by the current reality and by the demands that come from the people, and the pressure that is being put on governments and companies to find the way in which artificial intelligence will be governed, ensuring particularly that it is developed, deployed, and used in a way that is beneficial for the human good. What we have been seeing is that, for maximising the positive aspects of artificial intelligence in society, there is a fundamental need to agree on responsible and ethical principles for its development, and what I bring today as a proposition is that part of that discussion should also be grounded in human rights. In that sense, I have been working with my organisation in proposing a number of principles that are linked with how we can be more mindful and grounded in what is coming from the international framework of human rights as an essential element for guiding this task of thinking about technical standards, regulation, legislation at large, and other kinds of voluntary guidance that can be developed for the governance of artificial intelligence. So, in that sense, we have been proposing how we can make the international human rights framework applicable to the conversation on artificial intelligence, and we have come up with five principles. The first one is that any kind of governance discussion about artificial intelligence should be grounded in the approaches that have been developed for the promotion and implementation of the new and emerging technologies that have preceded artificial intelligence. The second one is to develop a risk-based approach to the design, development, and deployment of artificial intelligence, and I am pretty sure that part of the conversation with the panelists today will be unpacking what we mean by risk and what we mean by those assessments. The third one is to promote an open and inclusive design, development, and use of artificial intelligence technology. And then, we also invite you to think about how we need to ensure transparency in the design, development, and deployment of AI, and hold the designers and deployers of artificial intelligence accountable for risks and harms. So, without more from me, with this proposition, we want to hear from every one of the panelists, and we will start the first round of comments to talk particularly about the two policy questions that have been proposed by the organisers of the session in the MAG, which invite us to think, in this first round of our conversation, about how the global processes connect.
So, I'm going to start with the first question, which is around the governance of artificial intelligence at the international level, but also at the local level, with the aim of regulating or guiding that governance for the greater good. My first invitation to intervene is to ask: what is the policy for the international governance of artificial intelligence, and what are the principles and technical guidance needed to operationalise artificial intelligence governance in a way that is effective across jurisdictions?

Arisa Ema:
Thank you, Maria, for a very nice and kind introduction, and I'm really honoured to be on this panel. For the question of governance, I think it's really important to think about the different models that are used, not only how they are designed, but also how they are developed, deployed, and used across borders. For example, here in Japan, the normal case is that we may use a core AI model from, for example, the United States. So, in that sense, it is really important to have transparency when we actually look at this AI life cycle, and not only transparency, but it also needs to be interoperable. This word, framework interoperability, is actually mentioned in the G7 communique at Takamatsu in 2020, but somehow it is kind of a tricky word. What does framework interoperability mean? It means that we need to know that each country or each organization, or maybe even a single company, has its own policy and its own way of assessing its AI systems, evaluating the risk, and making the impact assessment. However, the legal system is different from country to country, and so each country's discipline should be respected; otherwise, you know, this global discussion won't work. So, I think it's very important for us to have a clear understanding of what is happening in the world, and also, each country has its own context. For example, in Japan, we actually have guidelines towards AI utilization and AI development, not so much binding rules, so we have to rely on public reputation, and that kind of soft-law discipline actually really works, but that might be the Japanese case, and another country or another organization might have a different aspect. So, it's really important to know which companies or which countries have their own risk management systems, and also what kind of risk management framework or risk assessment framework they use, and with that, transparency, and also exchanging actual cases, is really important. And I really appreciate that Maria raised the discussion on risk-based assessment: so what do we mean by risk? We can discuss high-level risk or low-level risk, but, for example, when taking into consideration a facial recognition system, you see it at the airport, or maybe used at the entrance of a building; the usage is totally different, but it may be the same facial recognition system. So we need to look into the context, and we need to take into account who is actually using it, who is benefiting from it, and who bears the risk. So exchanging cases is really important, and, in that way, I think we can turn all these kinds of abstract principles into a more living kind of discussion, putting them into practice. So, maybe I will stop here.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Arisa, and I want to continue that line of conversation by inviting Clara to also jump in with her take on how technical standards relate to ethical principles and can support effective, responsible AI governance at the global level, and on her experience of how technical standards can account for these challenges, these ethical challenges, and the principles that have been proposed, but also international human rights standards, according to my provocation at the beginning.

Clara Neppel:
Thank you, and thank you for having me here. So, IEEE is a very old organisation. We were founded more than 140 years ago, co-founded by Edison. So, why would an inventor like Edison, who invented electricity, engage with others? He could have done it alone, and I think it was the realisation that in order to be accepted by society, you have to manage risks, and one risk at that time was clearly safety. We started by dealing with safety, and since then we have been dealing with safety and security, but now, with AI, we see that we actually need to redefine risk. We have to move away from these, let's say, more traditional dimensions of risk, like safety and security, and incorporate the human rights that you just mentioned. And the question is how to do that. We started very early on, and it's a bottom-up approach. So, we are also the largest technical organisation in the world, with more than 400,000 members worldwide, and these issues started to come up at the individual level very early on. These issues around what you just mentioned, about bias and so on; the question was how to deal with them. So, we started an initiative called Ethically Aligned Design, which identified the issues and tried to manage them with standards, for instance, but also by engaging with regulators. Now, when it comes to standards, we moved to so-called socio-technical standards. What are they? They range from value-based design to common terminology. Value-based design, what does it mean? It means taking the values of the stakeholders in that context that you just mentioned into account, and those will be different values. Of course, human rights are always important, but you have different ways of dealing with them. And how you prioritize these values and actually translate them into system requirements, giving that step-by-step methodology to developers, proved to be a very efficient standard. Common terminology: what do we mean when we say transparency? It can mean a completely different thing to a developer than to a user. So that's also one of the standards, which deals with finding different levels of transparency. Bias, the same thing. We all want to eliminate bias from systems, but we actually need bias, for instance, in health care. We need to take into account the differences in symptoms for men and women because they react differently, for instance, when they have a heart attack. So context is very important. We also complemented these standards with a certification, an ethical certification system, and we tried it out with public and private actors. And what is very important, after all, as I think was mentioned before, is to start building up capacity in terms of training, because we need this combination of technical expertise and expertise in social and legal matters and so on. So as part of this certification process, we have a competency framework which defines the skills necessary for trainers, for assessors, for certifiers. And we also started working with certification bodies to build up the ecosystem which needs to be there in order to make this happen. So this bottom-up approach, of course, needs to be complemented by a top-down approach, the regulatory frameworks. And we engaged with the Council of Europe, the European Union, and the OECD, and so on, from very early on: from the principles, but also on how to operationalize this regulation.
One example is now with the AI Act, which basically mandates certain standards, where we also engage with the European Commission to see how we can map, let’s say, the regulatory requirements to standards. There is a report from the Joint Research Center that you can download. Thank you, I think.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Clara. We're going to move now to hear a little bit of the take from James, who represents the perspective of the private sector experience, and particularly following the flow of this conversation: Clara mentioned the values, the definitions of the values, but also the definitions of the terms of the frameworks that we will be using. So in that sense, my provocation and question to you is: aside from the government efforts, the multilateral efforts, and the technical standards efforts that we have been hearing about, what are the current efforts that the private sector is conducting to reflect some of these challenges of finding ways to address responsible AI governance, and how do those link with the conversation that we are having here around ethical principles, but also human rights protection? So let us know what your take is on that.

James Hairston:
Yeah, thank you. You know, I think one of the places we really began is, you know, to listen and sort of to understand, as the tools that we build are used in novel ways and as we explore sort of the new capabilities, learning from expert communities and academics and standards bodies, experts around the world who are evaluating and testing: what are the new harms that we haven't anticipated? I mean, we know that we won't know them all ahead of time, and we try to take a really iterative approach and really explain what we're building and how we're building it through tools like our system cards, and inviting sort of open red teaming and evaluation of our tools, but really understanding, you know, what is it that we don't know? Where are the places, and in which languages, are our tools not performing well? Where are the places where definitions, as have been discussed, need sort of stronger concrete backing, so that, you know, we know that as we're building these international conversations we're speaking the same language and sort of able to come together, cut through, whether it's marketing by the private sector or areas that have yet to be fully defined, so that we're building from a common understanding. I think another important role for the private sector, and one that we really take seriously at OpenAI, is just capacity building, and building the capacity for research teams of all types across civil society and human rights organizations and governments to be involved in this testing, to tell us what's working, what's not, capabilities that they'd like to see or ones that are not working. And so this is something that's going to be iterative. We are clear, when we do our disclosures at the release of new tools, about all the areas that we're trying to solve for. There are important research questions about the future of things like hallucinations, and understanding watermarking, how to solve watermarking questions across text or different types of video or different types of outputs across LLM tools. So our contributions, I think, begin with admitting what we don't know, the many places where there's a lot of work to do, trying to help with the capacity building to work on safety and evaluation of these systems, and really supporting work around the world by the public sector, by the private sector, by civil society, by academia, to get the future of these tools right and ensure that the conversations that we're having around the world really translate into concrete action that ensures the long-term safety of artificial intelligence. Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Thank you so much, James. I'm gonna turn now to Dr. Center, who represents the US government in this conversation. And I will be curious, particularly with the frame that I presented at the beginning of the conversation, about the pressures that are now coming from the broader public on governments to turn to action in relation to harnessing the power of artificial intelligence for good. What is, right now, the perspective and the take of the US government on the most pressing challenges in the global governance of AI, and how do those also relate to the actions and the collaborative work that you at the domestic level are taking, also with the private sector and with other governments, to effectively address those challenges that you identify as the most pressing ones? Thank you.

Seth Center:
Great. And thanks so much. I think pressure is an interesting word to characterize the situation that we're all in, not just governments. I think part of the reason why all of us are here and excited about AI and somewhat scared as well is because there's a sense that we're in a transformative era. And given that the IEEE was founded by Thomas Edison, I'll start with a Thomas Edison quote. I wasn't planning on it, but it's my favorite Thomas Edison quote. He was asked at the turn of the 19th century, about 20 years after the light bulb was developed, what the effect of electricity was going to be on the world. And he said, electricity holds the secrets that are going to reorganize the entire life of the world. You could apply that to artificial intelligence. The problem with that analogy is, at least in the United States, it took several decades to get to a regulatory framework for electricity. And I think no one here thinks we can wait several decades to get to a governance framework that includes regulation for AI, because of the pressure. So with that being said, first of all, I commend an organization like IGF for bringing together a diverse group of multi-stakeholders like this to have a conversation about how to accelerate the pace of governance. I thank Japan in particular for hosting us and leading the G7 Hiroshima process, and we saw the effort and the pressure and the way in which speed can create results through that process. And then from that basis, let me just make four points, and I have about 30 seconds to make each of the points. Point one is perspective for all of us on AI governance. I think we have a solid foundation based in a multi-stakeholder approach to developing the principles for AI, the OECD principles from 2019, the G20 principles as well. Within the United States, in the past couple years, we've developed two frameworks that are extremely important, and they touch on the human rights and values component of this as well, both of which were developed with extensive consultation across the multi-stakeholder community. One is the AI Bill of Rights, and the other is the National Institute of Standards and Technology's Risk Management Framework, which had over 240 consultations over 18 months with the multi-stakeholder community to develop a framework for how to apply a safety and security framework to developing AI. So that's the kind of perspective that we have to take to the challenges we have. Why then, if we have such a rock-solid foundation, are we having this conversation today? The obvious answer is GPT has created a new socio-cultural, political phenomenon, a new moment. In part, it is the Sputnik that all of us were waiting for when we were talking about AI to grip all of us into action several years ago. But in part, it's because it's raised all kinds of profound questions about safety, security, risk. And so we have to take it on in a new and substantial way. And that moves us into two problems or challenges. One is it intensifies and accelerates all of our fears that emerged from the digital era, and the other is it intensifies and accelerates all of our hopes and opportunities that come from a technological revolution. And so we need to get that balance right. I think all of us accept that, and that requires moving quickly. For the United States, speed then meant we have to balance moving towards a regulatory framework eventually with getting governance action now.
Our choice in the interim was to move towards what were called voluntary commitments that touch on a framework of safety, security, and trust, which hold companies accountable for a whole series of efforts to become more transparent, to protect security, to promote transparency, to ensure that their systems work as intended. And that’s basically our overarching architecture for what we’re approaching this era of, where we need clarity, we need speed, and we have to act in this era of pressure.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Dr. Center. And I will move on in this flow of the conversation to Thobekile, and particularly with a reaction from your side in terms of where the global south perspective is in all this conversation. We have heard about alternatives. So, what are the fundamental challenges and opportunities for the global south to build effective artificial intelligence governance? What are the alternative paths for dealing with artificial intelligence governance, which is usually led by global north governments or global north organizations from the private sector, from academia, from industry? And how do you experience the influence of these different trends coming from abroad, from these different sectors, the regulatory ones, but also the ones that are related to different frameworks to address these issues of governance? Thank you.

Thobekile Matimbe:
Thank you so much, and it's a pleasure to be here and part of this panel as well. I will just highlight that, from the perspective of, you know, the global south, we have a lot of data protection laws that have been implemented in the past, and I think where we are in terms of regulatory frameworks, we are at a place where we are still trying to catch up with regards to coming up with national artificial intelligence strategies, and what we have are data protection laws whose clauses are, sort of, like, you know, just a drop in the ocean when looking at an artificial intelligence framework, and a lot of data protection laws that are still being implemented. Looking at that kind of context, we are facing a situation where we are trying to catch up with regards to how we can ensure the protection of human rights when we're looking at artificial intelligence, the design processes, as well as the use. Because of that, you'd find that it's a very, very difficult situation, and, you know, we have to appreciate that there are a lot of data protection laws that are still being implemented, and, as we look at the use of artificial intelligence, we have to appreciate that there are definitely centres of power. What I mean by centres of power is, when we're looking at, you know, who has the knowledge of technology, who has, you know, the technical design, sort of, like, ownership, you'd see that within the digital world there are a lot of stakeholders, but not many voices from the global south in whichever processes are there, even at a global stage or a global scene. There's a need for inclusivity of not just civil society, but, as well, inclusivity when we're looking at representation, and member states as well, and their participation, and I think the inclusion of the global south in the digital world is a very important part of the discussion, and, you know, there's a lot of discussion around AI, and any, you know, global framework that can come out of the global scene is something that can be leveraged. Looking at the regional level, maybe just taking it a level, sort of, like, down from the global scene, you'd find that, from a regional perspective, we have the African Commission on Human and Peoples' Rights, and, you know, we're working with states within the African continent to develop, you know, strategies, and mechanisms, and legislative provisions that ensure that rights are protected when we're looking at the use of AI, as well, in the context of, you know, human rights. And, to date, since 2021, I would say, like I highlighted in my earlier remarks, that, you know, we do have frameworks in which rights are safeguarded, but the real lived realities remain within the global South, where we find that there is, you know, sort of a lack of trust in the use of AI because of inadequate policies. We also see that surveillance, you know, targeting human rights defenders remains a major concern.
We do see that discriminatory practices that come with the use of AI are still a lived reality on the continent, so it is something that needs to be addressed from a global perspective, and understanding that context, I will emphasize again, is really important. Thank you so much.

Moderator 1 – Maria Paz Canales Lobel:
Thank you so much, Thobekile. And now we have finished our first round of comments and answers from the panelists in this session, so I open the floor for questions from the audience here in the room, and I also look to my colleague to see if there is any question posted online.

Moderator 2 – Christian Guillen:
Yes, Maria, the chat is exploding, but only with my comments. People are still very shy, so, beautiful crowd out there, use this opportunity to ask all those questions on AI and governance you don't usually dare to ask. These are exactly the right people to answer them. There is one question, though, and it is very interesting because it is posed by a target group we very often forget. It is a 17-year-old boy, Omar Farooq, from Bangladesh, and he is basically asking: how can we ensure that AI regulation and governance at the multilateral level is inclusive and child-centered, so that children and young people can benefit from AI while being protected from its potential harms? Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Maybe some of the panelists are particularly motivated to take on that question. I think it puts at the centre the issue of a specific vulnerable community when we design policy and governance around artificial intelligence. Children are one example, but there are other specific communities as well. So how do we design governance to be inclusive, accommodating the particular needs of vulnerable groups in a considered way that effectively provides governance that works for all these different cases? Clara? Yeah, go ahead.

Clara Neppel:
Well, I think that these are things which can be addressed both on a voluntary level and at a regulatory level. We see examples, for instance Lego implementing quite a lot of measures to make sure that children are protected in their online presence and in upcoming virtual environments. But of course, here especially, I think it is important to complement these voluntary efforts with regulatory requirements. One example is actually the UK Children's Code, because we all agree that human rights, and children in particular, need to be protected, but it is another question how that is implemented online. The UK code is one example of a regulatory framework setting out the requirements, but when it came to operationalizing it, it was one of the IEEE standards on age-appropriate design which gave very clear guidance to implementers on what it means to comply. So there are already both standards and regulations, and this is just one example; it is also being discussed in other countries. It is one example of how standards and regulation can interact to protect children online, and other human rights as a matter of fact.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. I don't know if any of the other panelists have a reaction. If not, we move to the next one. Do you want to react, James?

James Hairston:
I guess the only thing I'd add is to base a lot of the work on top of the research being done by child safety experts around the world. There are so many great institutions, and you mentioned the Lego example: academics and organizations that are looking at usage patterns and understanding how children and any number of vulnerable groups interact with these technologies, the harms, their expectations and how they diverge. Prior to working at OpenAI, I worked in virtual and augmented reality, and again, in safe settings, whether with doctors or research teams, you can really go deeper, and we don't base the work only on our understanding as adults. This applies whether we're talking about children using these tools, or elderly populations, or vulnerable communities who may have less access: the work should be research-based and evidence-based, and in these settings it is possible for organizations to really work with the community we're trying to build safety tools and systems around. So I don't think there's anything revolutionary about that idea, but these organizations really do such important work, and I think supporting them, advancing their work and putting their research front and centre in the development of policies is essential.

Moderator 1 – Maria Paz Canales Lobel:
Definitely. Thank you very much for that answer. Christian, do we have another question online, or do we have one here in the room? Maybe we can alternate. Yeah. We can take one from here. Yeah.

Audience:
Hi, I'm Viet Vu from Toronto Metropolitan University in Canada. While AI systems are technological in nature, as many of us know, they still involve a lot of human input of various kinds. And we've seen media reports that the kind of labour involved in creating AI in the global south is quite different from the kind of labour involved in creating AI tools in the Western world. So, in governing the creation of AI, how do we think about international labour standards and regulations?

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. So anyone from the panelists want to react to that? James.

James Hairston:
Yeah, I'm happy to begin. I think, again, protecting the labour involved in the production of these tools is essential, and the work that has been done over the years in advancing the rights of workers in other sectors has to be applied in artificial intelligence: making sure people are compensated properly, and that when there are abuses or harms they are addressed. This is an area where everyone is going to have to continue to be vigilant, whether companies inside the private sector or monitoring groups, making sure that we are listening and understanding the production, understanding where voices aren't being heard, or where actors at any level of the labour and employment chain in the development of tools are acting improperly. And if there are places where existing law and policy can't address those harms, and we certainly should be vigilant for gaps, we have to talk about them openly and constructively and move quickly to make sure there aren't communities and types of work involved that are abusive or harmful.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. Maybe we have time for one last question online, if there is another one?

Moderator 2 – Christian Guillen:
Yes. Now they are popping in, actually; I shouldn't have said anything. There are three questions, but I'll sum them up, okay? There is one colleague, a professor from Kabul University in Afghanistan, asking: could we apply generative AI in the education systems of developing countries like Afghanistan? I'd say why not, but maybe you have a better answer. And there are two other questions. One asks about accountability: given that AI is not fully understood, and that we have to balance AI's value and risks, how should accountability for AI be dealt with? And one last question refers to ethics. For the moment, AI is providing output based on human-generated input data, but over time it may be processing its own created data. So is it ethically acceptable to have machines decide on human matters based on no human data? It gets complicated now: what is the plan to make sure that at some point we are not left with something totally out of control? So, a very concrete question on the education system and a very wide question on ethics.

Moderator 1 – Maria Paz Canales Lobel:
Yeah, in two and a half minutes it will be a bit of a challenge, but maybe I can invite Dr. Seth Center to react to the question related to accountability: how can we build effective accountability mechanisms?

Seth Center:
Sure, I think every single governance question ultimately comes down to accountability. I think skepticism around governance frameworks that are voluntary comes back to the question of accountability. Even a hard law framework comes down to accountability, if the challenge is figuring out what to measure in order to apply hard law. From our approach, as we think about accountability in the context of a voluntary framework, at least as a bridge to something harder, it comes back to what you were talking about in part, which is that there is a reputational cost that comes along with signing up to voluntary commitments. And James, you will probably have some views from OpenAI's point of view on what accountability means for a so-called voluntary commitment. Insofar as volunteerism and accountability are linked to technical action, you can talk about accountability in meaningful ways, because it can eventually be measured. And I think that measurement question is extremely important for getting below the abstract level of principles, where there is an increasing amount of skepticism that principles alone can achieve accountability.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. We have one last question, but then I'm going to close the queue because we need to move to the next segment, so please go ahead.

Audience:
Hello everyone. My name is Ananda, for the record. I'm the chair of USIGF Nepal and I represent a developing economy. While IGF 2023 is being bombarded with topics on AI, we are still struggling to connect people: 40% of the population in Nepal and the Asia-Pacific region is still unconnected, and those who are connected are new adopters of the Internet. My question is, while developed nations are adopting AI and these technologies, nations like Nepal are striving to counter the disinformation and misinformation enabled by the generative AI that became so popular in 2022, whether you call it ChatGPT or Google. In this scenario, how do developed economies help these kinds of nations enter the digital era? And another thing: we discuss these kinds of issues on multi-stakeholder platforms, but these platforms are not capable enough to actually set policy, because when it comes to policy, the multilateral system influences policies across the world. So how do developed economies co-create a digital ecosystem that is inclusive for all? Thank you.

Moderator 1 – Maria Paz Canales Lobel:
I think it's a very complex question to answer in just a few minutes, and we will probably need answers from different panelists here, but I don't know, for example, James, if you have a take on the jurisdictional challenges involved in implementing these governance mechanisms for companies that offer services in different contexts.

James Hairston:
I'll maybe start with two projects that I think begin to get at solving for this, but are, again, just the beginning. We recently launched a grant program for democratic inputs to AI, to give communities, nations and different domains the possibility of surfacing the unique values and the types of outputs from AI systems that are responsive to local contexts and that a community expects, acknowledging that those may diverge, and beginning to figure out what that very locally, regionally, community-driven process is and how we can build on it. I think that is going to be one important stepping stone. Another is that we also just announced what is called our Red Teaming Network, for security and safety testing that is very specific to Nepal, to nations and communities around the world, encouraging safety and security testing and the submission of evaluations. You mentioned mis- and disinformation: if there are types of linguistic failures, or ways that large language models or tools like ours are attacked or vulnerable to certain types of outputs, we want to know. We really want to hear where we are falling short, or where a gap in understanding or a particular type of action is producing results that are especially harmful. Building that community of practice, submitting those types of evaluations and growing the community doing that in different countries, in different regions and across sectors, is going to be important.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, James. With that intervention, we will move to the next segment of this conversation, which is particularly linked to the role of the IGF. We are all sitting in this room and participating in this event on internet governance, and there is a particular value in the conversation that happens in this space and has been happening for 18 years, shaping digital technology and shaping the form and the use of the internet. So what we want to examine during this part of the conversation, with the intervention of the speakers, is the role of the IGF as a convener and facilitator of artificial intelligence governance action. For that conversation, I will turn first to Clara and ask about the experience of IEEE in working on and developing voluntary guidance. What is your perspective on the opportunities and limitations of self-regulatory efforts to ensure responsible AI governance, and what could the IEEE experience contribute to the role of the IGF in facilitating these international AI governance discussions? Thank you.

Clara Neppel:
Thank you. So we see our standards being adopted. Actually, once a standard is out, we as a standard-setting organisation do not really get to know who adopted it, but we just had a meet-up last week and I was surprised to see how many people said they had implemented the standards in different projects, both private and from public actors. One example I would like to bring here, speaking of children, is a UNICEF project which used the value-based design approach to change the initial design of a system to find talent in Africa, from a closed system which was intransparent to something over which the young people now actually have agency. That is a proof of concept that, by having certain methodologies and taking the values and expectations of the community into account, you actually end up with a different system. And I want to discuss here the incentives of voluntary engagement: what are the incentives for adopting a standard? One is trust; we have the city of Vienna as one of our pilot projects for certification, and if you are discussing with public authorities, one of their incentives is trust. They want citizens to trust their services, and you probably also have a lot of private actors with the same incentive. But if we are talking about C-level people, of course there is also the question: what is in it for me? We know from business schools that there are two ways of making money: one is to minimize cost and the other is to differentiate or focus. And we actually saw at the meet-up investors who were interested in this standard, because one outcome of value-based design is that you end up with a better value proposition. I think this is an important way of moving away from only the risk-based approach, to actually thinking about what measures of success we want to have in the future. Do we want to have only performance, which is of course important for us as a technical community, or profit, which is of course important for the private sector? How do we incorporate the other two dimensions, the people and the planet dimensions? That is something we have to discuss collectively. The other incentive, of course, is to satisfy regulatory requirements; we see now, with the AI Act, that a lot of people are interested in these standards because they anticipate that they will be required. But here I also want to stress very much that there is a limit to voluntary measures. As a technical organization, and I think the same holds for private actors, our business is not to maintain human rights, democracy and the rule of law. Of course we should all be part of it and comply with it, but there are certain red lines which have to be decided in a democratic process. And the only way to have a common approach to this is this kind of feedback mechanism.
If we want to have something like global governance, we need to establish these lines of communication, to have a standardized way of reporting incidents, to have benchmarking and testing facilities, and, being here in Kyoto, to have something like the Intergovernmental Panel on Climate Change, which has an advisory role to governments, to say where we actually need to act and whether new regulation is needed or existing regulation needs to be adapted. As a matter of fact, we are just doing this with the Council of Europe for one of the applications of artificial intelligence, immersive realities: we are working with them to see what the possible human rights impacts of these new technologies are.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. I think you bring up a highly relevant point about the role of incentives, and I will be very happy to hear the other speakers' take on that when they intervene, because I think it is a challenge for everyone to identify and align those incentives in order to move the process in the right direction. But for now I will turn to Arisa. In your experience as a social science researcher, your activities include facilitating dialogue with various stakeholders. What are some of the challenges in facilitating multi-stakeholder engagement with AI governance that you can share with us, and how can that learning be integrated into the role the IGF needs to play as a facilitator of these discussions?

Arisa Ema:
Thank you very much. I think the role of the IGF is really important. In the previous session I organized, I invited friends who are wheelchair users and who could not come in person, so I brought avatar robots that they could operate remotely from their homes. This connects to the 17-year-old boy's question earlier: it is really important to be connected to all the other stakeholders, including people facing particular challenges, who can virtually come to these places, make presentations and interact with others. On the other hand, what we discussed in our session is that, although we have that kind of system, there are many people for whom it is not actually available. So when we discuss AI governance, we need to put humans into these systems as well; humans are the most flexible, or perhaps the most resilient, element that can adapt. We need to be more adaptive to crisis situations, and more creative and more active. What I expect from this IGF forum is that we can talk about AI governance while also including the human: human-centred is the key word. Topics like democracy, the rule of law and human rights need to come up repeatedly in this kind of discussion, shared with people, so that the discussion stays connected. Not all the interesting and important things are discussed in this panel session; the next-step actions are often discussed outside this room, over lunch, in in-person conversations or over tea, and that kind of informal setting is really important, because the IGF is open to everybody and we can talk with someone just by taking a moment. So I would like the IGF to remain inclusive, and this kind of in-person, informal communication is really important. I really appreciate that so many people came to Kyoto, and I hope you also enjoy Kyoto.

Moderator 1 – Maria Paz Canales Lobel:
Thank you, Arisa. This question of inclusion has so many dimensions: the dimension of the different stakeholders, the dimension of the particular situation of vulnerable groups, or groups in vulnerable conditions, which is the more appropriate term, but also a geopolitical and geographic dimension. On that note, I will invite Thobekile to react on how the IGF can contribute to these dimensions of AI governance, and how the IGF can continue contributing to addressing these kinds of challenges at a large international scale, which is what we were just discussing.

Thobekile Matimbe:
Thank you so much. I think I'll start from the premise of highlighting that a number of colleagues were not able to be here because of visa issues, and when we talk about inclusion, that is something we need to proactively think about, in terms of how we can make sure we have inclusive processes but also accessible platforms for those from the global south specifically. Going beyond that, I'll highlight that within the Internet Governance Forum there is a need for continued engagement with critical stakeholders and a victim-centred approach to the kinds of conversations that happen here, in the sense of having everybody, including vulnerable groups, well represented in the conversations, especially when we look at AI. I will also highlight that an understanding of the global asymmetries is something important to keep emphasizing, because when we look at the global north versus the global south, the contexts are different; I highlighted earlier the importance of context, and my colleague here also highlighted the aspect of understanding the different contexts represented within the Internet Governance Forum. That will continue to shape processes even better, and enable us to come up with AI-focused solutions or resolutions which ensure that no one is left behind when we look at fundamental rights and freedoms in particular. Just to emphasize, this is definitely a forum that we continue to leverage for advancing the promotion and protection of fundamental rights and freedoms, but we also need to continue to engage on remediation for victims who are likely to suffer the adverse impacts of the design of technology, and that cannot be overstated. I will round off by highlighting that there is a need to break down walls. Earlier I talked about the centres of power when it comes to AI, and the IGF is a good opportunity to break down the walls that stand between those centres of power, in a real multi-stakeholder engagement where all voices are heard and no one is left behind.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. And along the same line, Dr. Center, I invite you to react to this very same issue of how to deal with this diversity of realities and the diversity of processes that are ongoing at the national level, at the regional level in some cases, as with the European Union that Clara brought up before, but also at the level of the global governance system, with some propositions coming from the UN and the creation of new bodies for overseeing the governance of artificial intelligence. How can that be approached from the perspective of a government that is conducting its own efforts at the domestic level to find the most appropriate and inclusive way to address the governance of artificial intelligence, and how can those efforts, and the experience the government is acquiring in the process, be used by the IGF to keep these global artificial intelligence governance discussions connected and interoperable? Thank you.

Seth Center:
Is the answer yes to that? The tricky question is the how. Let me rewind just a minute to the question of accessible platforms and walk into how I think the IGF can play a role. If you get to the end of the governance story and you get it all right, you are still left with the question of why we care about AI. And I think the answer that we believe in the United States, and I think most people in this audience believe, is that you should employ the most powerful technologies to address the most important problems in the world. And how do you get powerful AI developers, whether they are companies or governments, although it is usually companies, to devote time and attention to governing AI responsibly and then to directing it towards society's greatest challenges? The answer is the multi-stakeholder community, directing them through conversation and delicate pressure into thinking about those problems in meaningful ways. A few weeks ago at the UN General Assembly's High-Level Week, there were a series of events that brought together different parts of the multi-stakeholder community, the multilateral community and countries to talk about these issues. The Secretary of State of the United States co-convened one with a whole series of diverse countries and companies, including OpenAI, and we simply asked these companies what they were doing to address society's greatest challenges, defined however they wanted within the context of the SDGs. If you open up those conversations and have them at the UN, in the General Assembly and at the IGF, if you ask questions about the impact on labor, about what we are doing to protect children's safety in the AI era, about inclusive access, it naturally changes the entire conversation. And so, to the young gentleman who asked whether or not the multi-stakeholder community could make policy, where I sensed some skepticism, I am actually far more optimistic. Policy is made, at least in democracies, including ours in the United States, by listening to the inputs of everyone. Our entire architecture in the United States for our AI governance framework was built on listening to the multi-stakeholder community in a domestic context. The entire architecture for thinking about the voluntary commitments, our most recent one, included extensive multi-stakeholder conversations. This is the way governments in democracies actually formulate policy. No government has the hubris to believe, at least the ones that I have talked to, that they fully understand foundation models and generative AI. They need the technical community and the standard-setting bodies to help them. They need companies and the experts in companies to help them. They need civil society and human rights organizations to help them. Out of that input comes an output, and that output is policy. And then you need governments to actually enforce the policies, and that, I think, is where we probably have a bigger challenge. But if you take a step back and ask how we ensure accessibility and collaboration, we should encourage the energy in all of the forums, whether it is the UK Safety Summit, the G7 Hiroshima process or the UN's High-Level Advisory Body, because we are at the early stages of the next era of AI, and we need all of those conversations at this point in time.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. And I turn a similar question now to the private sector, represented here by James. You do not have jurisdictional borders in the offering of your services; you are bound by different regulatory frameworks in different jurisdictions, but you need to deal with this question of artificial intelligence governance in a way that lets you operate as a company and offer your products and services across borders. So what are the challenges from that perspective, in terms of how you deal with the discussions of artificial intelligence governance at these different local and domestic levels, regional in some cases, and global as well, and how would bringing some of those challenges to the discussions here at the IGF help address them from the perspective of industry?

James Hairston:
Yeah, I mean, the first challenge that comes to mind is simply scale. We are trying to be in as many of the conversations as we can be, in all regions of the world, in every country, in cities, states and geographies; they are important discussions, but it is impossible to be in every room. Coming off the recent listening tour we did around the world, I have great respect for the sheer variance in the needs for these tools and the different restraints that are going to be placed on areas where hard and soft law will differ. So making sure that we are in the right places, that we are listening fully, that we are providing the right sorts of research and technical assistance, is probably one of the threshold challenges: making sure we are participating in the right ways, hearing and learning in the right venues. Then, beyond that, there is sometimes a discussion about the spectrum between the really important short and medium-term risks and some of the longer-term questions of ensuring safety for humanity on the road to artificial general intelligence, sometimes talked about as if you have to make a binary choice between addressing short-to-medium-term harms and looking further out into the future to build the international and domestic systems to solve for those. We do not think that is a choice: we have to work on both. We as the private sector, as a research lab, have to contribute to those discussions as countries formulate their laws, but also on the other side of the regulatory conversation, as countries and societies decide how they want to use these tools for good. So being in enough rooms, contributing the core research and technical understanding, and making sure that the transparency work we do around our tools is aiding those conversations in as many geographies and for as many communities as possible is a challenge, but it is a responsibility, and we welcome being in as many of those rooms and conversations as we can be.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. And now I open the floor again for reactions and comments from the audience here inside, but also online. Do we have any online?

Moderator 2 – Christian Guillen:
Yes, sure. There is a lot going on, Maria. Let me start with one question from Mokaberi from Iran: could shaping a UN Convention on Artificial Intelligence help to manage its risks? Do geopolitical conflicts and strategic competition between AI powers allow this at all? And what is or could be the role of the IGF in this regard? And if I may, I would like to seize the opportunity to enlarge the question a little bit to you as well, James, because I have the great opportunity of sitting next to you. You are a newbie here at the IGF, not just you as an individual but also OpenAI, which you represent: what do you think could be the added value of the IGF when it comes to the current discussions on AI regulation and governance, and do you have the impression OpenAI could contribute in future as well? So two questions in one.

James Hairston:
Yeah, absolutely. I think one of the earlier comments on benchmarking, on defining what good looks like, is going to be important just as much from a technical perspective as in policy development around the world. So I think there is a really important role for the IGF and international institutions to harmonize those discussions and say: these are the benchmarks, this is how we are going to grade our progress. That is probably where I would start. Similarly, to address the first part of the question, on where we can build on existing work that has gone before: for a lot of these technologies and where we are heading next, it is important to build on the important conventions, treaties and areas of law that we already have in place. That is not to say there will not be new approaches or new gaps, as we have been talking about today, but we also do not necessarily need to reinvent the wheel everywhere. So take the hard work that has been done in areas like human rights and draw on it as we figure out the places where we want to set new standards going forward.

Moderator 1 – Maria Paz Canales Lobel:
Thank you, James. Is there any question from inside the room? No, I don't see anyone. Ah, there, sorry. Please, go ahead.

Audience:
Okay. Hello, everybody. This is Hossein Mirzapour from the Data for Governance Lab, for the record. Thank you for bringing up this crucial issue of whether and how the IGF can help to deal with AI, specifically its governance and regulation. I am very proud to be part of the IGF, and, as you know better than me, we have discussed for more than a decade the governance and regulation of big data, digital privacy and data governance, and in the end we were not able to reach a global consensus or a global framework to deal with big data. We have not been able to reach the same regulatory frameworks and laws; you can compare the DSA and DMA in Europe with the way the U.S. is dealing with its companies. So my big question, to add a bit of spice to your interesting topic, is: as we have not been able to reach a global consensus and framework to deal with big data, how can we be optimistic about reaching a global consensus and framework to deal with AI, which is, after all, rooted in big data as well? And last but not least, I have a very quick yes-or-no question for Mr. James, who represents the private sector today. Right now, is there any emergency shutdown procedure in your company? If you find that there is a very urgent danger coming out of your company and its products, for example something on the scale of a pandemic or a financial crisis, is there any procedure in place right now for an emergency shutdown or not? Thank you.

James Hairston:
I can take that last one. We have harm reporting, and we take security reports, and we can turn our tools off by geography. I think there are probably many layers to that question beyond just on-off access, but I am happy to follow up and understand the types of shutoffs that you have in mind.

Moderator 1 – Maria Paz Canales Lobel:
You want to react to that?

Seth Center:
Maybe because I have never come to the IGF before, I am not as downbeat as you. I think there is a tremendous amount of consensus on AI governance. Obviously, the challenge of enforcement and what the regimes look like may be a bridge too far at a global level, but I do not think that is an existential threat to the value of these conversations or to pursuing an AI governance conversation. For instance, if we were to ask ourselves, moving into a future in which foundation models and generative AI will likely subsume narrow AI, what kinds of safeguards we would want in place as a governance structure, I think everybody would basically agree. You want some kind of internal and external red teaming. You would generally agree that you want information sharing among those who are developing these models. You would generally agree that for finished models, which are potentially profoundly powerful, you would want some sort of cybersecurity to protect model weights. You would generally agree that you cannot solely trust those developing them to be accountable, so you would want third-party discovery and auditability in some way, shape or form. You would basically want developers to agree on public reporting of capabilities. You would basically agree that they should prioritize research on safety risks, including on issues like bias and discrimination. And my sense is that, if you get to the end of this, you would also basically agree that they should employ these models to address society's greatest challenges. At that level, I am fairly optimistic that we are at least going in the right direction.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. I see we have a couple more questions lined up here, and I do not know if there are more online. Can you read the two of them so we have time for the speakers to react?

Moderator 2 – Christian Guillen:
Okay, I have. We will pose all of them to the panelists so you can react, and I will be very quick. Two questions, actually. One from my side to Seth, because it is really pressing and I am hacking the system right now. Seth, you said before that we have to get all stakeholders involved. I would be interested in your opinion on the idea that was raised somewhere here, I think by the UN steering group, of the analogy that we need something like the International Atomic Energy Agency for AI, an idea which sounds kind of rude. Do you think that is an adequate idea or not? And maybe I can pose the other online question already. It is from a colleague, actually a member of parliament from South Africa, Willem Faber, who asks: considering that AI technology was developed by humans, could we not explore the possibility of leveraging AI to establish government regulatory systems, instead of relying solely on human efforts to find solutions? More of a technical thing.

Moderator 1 – Maria Paz Canales Lobel:
So AI regulating AI, basically, that is the proposition. Maybe we can turn that one to James for his take, and the other one to Seth. Yeah.

James Hairston:
Well, one area of, I think, long-term research, and this actually gets back to a question that was raised earlier: I think it is important to have humans in the loop in the development of systems and then in their testing, and we have talked a lot about red teaming and auditability. And yet there are a lot of research possibilities around the use of, say, synthetic data in the future. We have been talking about bias and what the future avenues for addressing it might be, and there is one area of work around the world that I think needs a lot more exploration: how we might create high-quality data sets that are derivative of research work by domain, to generate the ability to perform all sorts of new tasks. In that way, you would have information not based on the current corpus of the internet or on people's information, which of course involves a lot of human training to get to, but that is derivative and is used to build new capabilities. I think that is going to show up in some form in a lot of domains, and there are pieces of that which are going to require a lot of monitoring and evaluation, but there are other ways in which synthetic data sets help solve some of the problems we have been talking about. It is not a panacea, of course, but by deconstructing and reconstructing information to resolve gaps in, say, the language coverage of the available information we have today, or the over- or under-representation of certain regions or genders or otherwise, that synthetic data could then be used and applied to create personal tutors, to improve genomics research, or to advance our understanding of climate. So synthetic data, again, is one area of research; there is a lot to do there, but I do think that as we talk about machine-created data, with a lot of humans and a lot of important standards bodies, research institutions and government security testers in the loop, there are actually some really interesting possibilities. That does not mean we can simply step away and let it happen, so I will just leave it there.

Moderator 1 – Maria Paz Canales Lobel:
Do you want to react to that, Clara, maybe?

Clara Neppel:
Yeah, well, actually we have a working group on defining the quality of synthetic data, because again we come back to defining what is good, what is ethical synthetic data, and yes, I agree with you that it is one way of providing, let's say, scientific data to be used for research, and if you use it in that way, I think it is okay. But coming back to why it is important to think about global regulation, or rather global governance, of AI, and coming back to the analogy of electricity: we are now at the moment where it is out in the open and being used in so many different ways and different geographies. Just as when we come to Japan we use a different plug and socket, we need at least transparency about what is being used where and where we need to adapt. We need, as I mentioned before, transparency in the sense of basic information about how these AI models have been used and what is important for that context. It is laudable that we have these private efforts to make AI as trustworthy as possible, but it is still something closed: some things are made open, but again it is voluntary. So we need a certain common ground to understand what we are talking about, what the incidents are, what the data sets are, where synthetic data is being used, and what quality of synthetic data is being used. And once it becomes ubiquitous, I think there is pressure as well to have this standardized way of understanding the impact of AI.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. So, I turn to Dr. Center for the other question, quickly, and after that I will take one more question from the audience and ask all the speakers to do a round of final remarks so we can start to close. Thank you. Go ahead.

Seth Center:
I certainly think the IAEA is an imperfect analogy for the current technology and the situation we face, for multiple reasons. One is the predominance of private sector developers of AI, versus state-based questions of nuclear control. The second is the question of the ease and facilitation of verification: what you are trying to verify and track is quite different, at least compared with the era in which the IAEA was developed, versus what we are talking about in the AI era. I think there is one instructive lesson that comes out of the IAEA, however, and that is that between 1945 and 1957, when the IAEA was established, twelve years passed. So, as we pound the table and demand action to institutionalize global governance around AI, we should be a little more patient with how this evolves. And I think I will leave it there. Actually, I will not. I will say, look, we do need scientific networks that span countries and that are convened to take on these problems, if for no other reason than to build shared assessments of risk and to agree on shared standards for evaluations and capabilities, which I think will need shared international approaches. So we should continue to look for the right kinds of models for international cooperation, even if that is not the right one.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. Please, your question.

Audience:
Yeah. Thank you very much. I am Christine Mujumba from Uganda, but I will be speaking as a mother in this regard, and really advocating for the 17-year-old boy, I think it was from Bangladesh, who asked a question about children. There have been follow-up discussions on whether such forums have a place in influencing policy. And being from a technical background, and many other backgrounds, sometimes I find that we get lost in the high-tech definitions and lose the low-hanging fruit of common denominators, such as the fact that, even in our diversity, we have all been children before. Even in the session before, when we were talking about cybercrime, it came out clearly that we need to protect future generations. So my ask to experts and partners like you, as you give your elevator pitches wherever you are, is to let those low-hanging fruits come out. If you all agree that you have been children, and that we can find the child in us, let us at least start there in addressing the AI that we want, and maybe from there we will learn to have the inclusive designs you are talking about, whether it be bias issues or otherwise. So for me, it was really a plea: let us find spaces, even in harmonization, for addressing common denominators such as preserving future generations. Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. I think that was a question, but also a comment. So I invite you to react in a final round, addressing this last question if you wish, with your remarks in one and a half minutes or less. I invite James to start, and I will move in this direction. Yeah.

James Hairston:
So, just final remarks here?

Moderator 1 – Maria Paz Canales Lobel:
If you want, address some of the last question; if not, just your final remarks. Yes.

James Hairston:
Yeah, I mean, again, I think public and private collaboration on the safety of these tools is key: ensuring, on the design side, in reporting, and in the research we do, that we understand how children and other communities use these tools and how to protect them, even where tools like ours are not for use by anyone under 13. Understanding how young people and vulnerable communities come to these tools and how they interact with them is going to be an important part of the work ahead, as will being responsive to the new research that comes out of the academic community and civil society, and being able to act on reports of crime or misuse. In terms of closing remarks, I think we are at an important moment, and it is going to be essential that we build on the momentum that has been put together, whether the work on the voluntary commitments, which we very much see as our responsibility to continue to act on, or contributing to the international regulatory conversation and the promotion of long-term safety. We need to keep getting more concrete about where we are heading and about the international tools we want to apply to these new technologies, and to build the capacity both for identifying harms, reporting those harms and understanding which new capabilities are working or are putting communities and people at risk, and for recognizing the unique opportunities these types of tools present. Those will differ and will be adopted at different rates; the analogy to electricity, I think, is instructive, because there will be different decisions made in education, health, finance and other sectors. But really getting concrete about how we can take some of these tools and apply them to problems for people, while also trying to solve for the long-term harms and risks, is going to be important. So I am really glad to be here and to participate in this discussion.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. Happy to have you. Thank you. Clara.

Clara Neppel:
Thank you. Well, I think that, especially when it comes to generative AI, what will be important is to be as agile as possible, from the organizational level to the national level to the global level. And for all these levels, we need feedback mechanisms that work. At the organizational level, we have to make sure that this feedback is also taken into account in the further development of these foundation models. I agree that, of course, it has to take risk into account and it has to be differentiated, but I think that for certain high-risk applications we have to have conformity assessments, and these have to be done by independent organizations, because the incentives of self-certification differ from those of demonstrating compliance. I also think that the International Atomic Energy Agency is a difficult analogy, because there are so many uses of artificial intelligence. I would like to bring back the idea of an independent, multi-stakeholder panel, which, as a matter of fact, should also be implemented for these important technologies, which are basically acting as infrastructure right now. If it is a public infrastructure, we also need multi-stakeholder governance for it. Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. Maybe more similar to CERN than to the Atomic Energy Agency, just another idea. So I will move to Thobekile for your final remarks.

Thobekile Matimbe:
Thank you so much. I think what is clear from this conversation is that, as human beings, we cannot cede or forfeit our rights to technology, and we need to keep emphasizing the importance of retaining that agency over our fundamental rights and freedoms. In that way, we will ensure that children's rights are promoted in the use of AI, and that women's rights are promoted in the use of AI. I think we could also centre conversations around environmental rights, et cetera; it is a critical conversation that we need to continue to engage in. Looking at basic concepts such as participatory democracy and bringing them into the realm of Internet governance is something we need to emphasize as well: there is a need for the participation of everyone, marginalized groups and vulnerable groups, and for ensuring that the processes we have are genuinely inclusive and that we have a true and meaningful multi-stakeholder approach.

Arisa Ema:
So, thank you. I think the AI governance discussion is really important and also very challenging, because AI itself changes and evolves, the situation changes, the environment changes, and in that sense the circle of people we need to involve keeps expanding and never shrinks. So more and more people should be involved in this kind of discussion. In my first remarks I mentioned that we need concrete cases in order to discuss what the risks are, what we mean by transparency, and how to take accountability. However, as we include more and more people, we also need some kind of philosophy or shared concept around which we can be united and collaborate within the same context, a common understanding or common concept that we share. In that sense, I think these couple of days of discussion have really brought up various important concepts, principles and goals, and I really enjoyed this discussion. The last thing I would like to mention is that this is not the end; this is just a starting point. It will never really end, but I think we can enjoy the process of these exchanges and discussions, and we need to be aware of that and involve as many people as we can.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. And the final words from the speaker?

Seth Center:
You did a great job moderating us and keeping us on time. Thank you. I will sum up my take and theme using a quote about AI governance from a famous basketball coach: be quick, but don't hurry.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much for that. We are running out of time, and I am supposed to summarize a little bit of this rich discussion, but I will only provide the highlights of the takeaways rather than the full set. I think the main takeaway we have heard here, from different perspectives, is the value of this multistakeholder conversation, and the value of continuing to make it as inclusive as possible, enjoying the participation of the people who are already in this room but also looking for the people who are still outside of it, and thinking about this as a necessary step in what Dr. Center invited us to do: be quick, but don't hurry. So take the time to listen to different perspectives, and take the time to evaluate the different options for addressing the different challenges. We talk purposely about artificial intelligence governance because we think it is a broader concept than just regulation, just voluntary guidance or just ethics. It is a broader endeavour, and this is the value of the Internet Governance Forum: that we can reach different aspects of the discussion, bring in different levels of expertise, and be mindful of all the levels of inclusivity and diversity, the one that refers to vulnerable groups, the one that refers to different fields of expertise, and the one that refers to different geopolitical realities. So, as Arisa mentioned, this is not the end, it is the start. Thank you very much for staying connected with the process, and thank you to all my speakers.

Arisa Ema: speech speed 191 words per minute, speech length 1515 words, speech time 476 secs

Audience: speech speed 167 words per minute, speech length 901 words, speech time 324 secs

Clara Neppel: speech speed 163 words per minute, speech length 2258 words, speech time 830 secs

James Hairston: speech speed 171 words per minute, speech length 2940 words, speech time 1030 secs

Moderator 1 – Maria Paz Canales Lobel: speech speed 169 words per minute, speech length 3813 words, speech time 1354 secs

Moderator 2 – Christian Guillen: speech speed 166 words per minute, speech length 711 words, speech time 256 secs

Seth Center: speech speed 157 words per minute, speech length 2174 words, speech time 829 secs

Thobekile Matimbe: speech speed 196 words per minute, speech length 1382 words, speech time 423 secs