Main Session on Artificial Intelligence | IGF 2023

10 Oct 2023 06:15h - 07:45h UTC

Event report

Speakers

  • Arisa Ema, Associate Professor, Institute for Future Initiatives, The University of Tokyo
  • Clara Neppel, Senior Director, IEEE European Business Operations
  • James Hairston, Head of International Policy and Partnerships, OpenAI
  • Seth Center, Deputy Envoy for Critical and Emerging Technology, U.S. Department of State
  • Thobekile Matimbe, Senior Manager, Partnerships and Engagements, Paradigm Initiative

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Moderator 2

During the discussion, the speakers focused on various aspects of AI regulation and governance. One important point that was emphasized is the need for AI regulation to be inclusive and child-centred. This means that any regulations and governance frameworks should take into account the needs and rights of children. It is crucial to ensure that children are protected and their best interests are considered when it comes to AI technologies.

Furthermore, the audience was encouraged to engage actively in the discussion by asking questions about AI and governance, underscoring the importance of public participation and the involvement of various stakeholders in shaping AI policies and regulations. Encouraging questions and dialogue allows for a more inclusive and democratic approach to AI governance.

The potential application of generative AI in the educational systems of developing countries, such as Afghanistan, was also explored. Generative AI could revolutionise education by providing innovative, tailored learning experiences for students, which could be particularly beneficial in developing countries where access to quality education is often a challenge.

Challenges regarding accountability in AI were also raised. It was highlighted that AI is still not fully understood, and this lack of understanding makes it difficult to ensure accountability for AI systems and their outcomes. The ethical implications of AI making decisions based on non-human-generated data were also discussed, raising concerns about the bias and fairness of such decision-making processes.

Another significant concern expressed during the discussion was the need for a plan to prevent AI from getting out of control. As AI technologies advance rapidly, there is a risk of AI systems surpassing human control and potentially causing unintended consequences. It is important to establish robust mechanisms to ensure that AI remains within ethical boundaries and aligns with human values.

The importance of a multi-stakeholder approach in AI development and regulation was stressed. This means involving various stakeholders, including industry experts, policymakers, and the public, in the decision-making process. Considering different perspectives and involving all stakeholders makes inclusive and effective AI regulation more likely.

Lastly, the idea of incorporating AI technology in the development of government regulatory systems was proposed. This suggests using AI to enhance and streamline the processes of government regulation. By leveraging AI technology, regulatory systems can become more efficient, transparent and capable of addressing emerging challenges in a rapidly changing technological landscape.

Overall, the discussion highlighted the importance of inclusive and child-centred AI regulation and the need for active public participation. It explored the potential of generative AI in education, while also addressing various challenges and concerns related to accountability, ethics and control of AI. The multi-stakeholder approach and the incorporation of AI technology in government regulations were also emphasised as key considerations for effective and responsible AI governance.

Clara Neppel, Senior Director, IEEE European Business Operations

During the discussion on responsible AI governance, the importance of technical standards in supporting effective and responsible AI governance was emphasised. It was noted that IEEE initiated the Ethically Aligned Design initiative, which aimed to develop socio-technical standards, value-based design, and an ethical certification system. Collaboration between IEEE and bodies such as the Council of Europe and the OECD was also mentioned as a way to ensure the alignment of technical standards with responsible AI governance.

The implementation of responsible AI governance was seen as a combination of top-down (regulatory frameworks) and bottom-up (individual level) approaches. Engagement with organizations like the Council of Europe, EU, and OECD for regulation was considered crucial. Efforts to map regulatory requirements to technical standards were also highlighted to bridge the gap between regulatory frameworks and responsible AI governance.

Capacity building in technical expertise and in the understanding of social and legal matters was recognised as a key aspect of responsible AI implementation. Competency frameworks defining the skills needed for AI implementation were deemed necessary. Collaboration with certification bodies to develop an ecosystem supporting capacity building was also mentioned.

Efforts to protect vulnerable communities online were a key focus. Examples were given, such as the LEGO Group implementing measures to protect children in their online and virtual environments. Regulatory frameworks like the UK Children's Act were also highlighted as measures taken to protect vulnerable communities online.

The discussion acknowledged that voluntary standards for AI can be effective and adopted by a wide range of actors. Examples were provided, such as UNICEF using IEEE's value-based design approach for a talent-searching system in Africa. The City of Vienna was mentioned as a pilot project for IEEE's AI certification, illustrating the potential for voluntary standards to drive responsible AI governance.

Incentives for adopting voluntary standards were seen to vary; those mentioned include trust in services, regulatory compliance, risk minimisation, and the potential for a better value proposition. However, the discussion acknowledged that self-regulatory measures have limitations and that there is a need for democratically decided boundaries in responsible AI governance.

Cooperation, feedback mechanisms, standardized reporting, and benchmarking/testing facilities were identified as key factors in achieving global governance of AI. These mechanisms were viewed as necessary for ensuring transparency, accountability, and consistency in the implementation of responsible AI governance.

The importance of global regulation or governance of AI was strongly emphasised. AI was compared to electricity: its usage is similarly pervasive, and it therefore requires global standards and regulations for responsible implementation.

The need for transparency in understanding AI usage was highlighted. The discussion stressed the importance of clarity regarding how AI is used, incidents it may cause, the data sets involved, and the usage of synthetic data.

While private efforts in AI were recognised, it was emphasised that they should be made more trustworthy and open. Current private efforts were described as voluntary and often closed, underscoring the need for greater transparency and accountability in the private sector's contribution to responsible AI governance.

The discussion also touched upon the importance of agility when it comes to generative AI. It was suggested that the governance of generative AI, at both organisational and global levels, should be agile enough to adapt to the evolving landscape of responsible AI governance.

Feedback mechanisms were highlighted as essential for the successful development of foundation models. The discussion emphasised that feedback at all levels is necessary to continuously improve foundation models and align them with responsible AI governance.

High-risk AI applications were identified as needing conformity assessments by independent organizations. This was seen as a way to ensure that these applications meet the necessary ethical and responsible standards.

A comparison of AI governance with the International Atomic Energy Agency was mentioned but deemed difficult given the wide variety of uses and applications of AI. The discussion acknowledged that AI has vast potential across domains, making a direct comparison with an established institution like the IAEA challenging.

Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies that act as infrastructure. This proposal was seen as a way to enhance responsible governance and decision-making regarding crucial technological developments.

In conclusion, the discussion on responsible AI governance highlighted the significance of technical standards, the need to combine top-down and bottom-up approaches, capacity building, the protection of vulnerable communities, the effectiveness of voluntary standards and the incentives for adopting them, and the limitations of self-regulatory measures. It also stressed the role of cooperation and feedback mechanisms in achieving global governance, the importance of transparency and global regulation, the need for agile governance of generative AI, and the importance of conformity assessments for high-risk AI applications. Additionally, the proposal for an independent multi-stakeholder panel for crucial technologies was seen as a way to enhance responsible governance.

James Hairston, Head of International Policy and Partnerships, OpenAI

OpenAI is committed to promoting the safety of AI through collaboration with various stakeholders. They acknowledge the significance of the public sector, civil society, and academia in ensuring the safety of AI and support their work in this regard. OpenAI also recognizes the need to understand the capabilities of new AI technologies and address any unforeseen harms that may arise from their use. They strive to improve their AI tools through an iterative approach, constantly learning and making necessary improvements.

In addition to the public sector and civil society, OpenAI emphasizes the role of the private sector in capacity building for research teams. They work towards building the research capacity of civil society and human rights organizations, realizing the importance of diverse perspectives in addressing AI-related issues.

OpenAI highlights the importance of standardized language and concrete definitions in AI conversations. By promoting a common understanding of AI tools, they aim to facilitate effective and meaningful discussions around their development and use.

The safety of technology use by vulnerable groups is a priority for OpenAI. They stress the need for research-based safety measures, leveraging the expertise of child safety experts and similar institutions. OpenAI recognizes that understanding usage patterns and how different groups interact with technology is crucial in formulating effective safety measures.

The protection of labor involved in the production of AI is a significant concern for OpenAI. They emphasize the need for proper compensation and prompt action against any abuses or harms. OpenAI calls for vigilance to ensure fairness and justice in AI, highlighting the role of companies and monitoring groups in preventing abusive work conditions.

Jurisdictional challenges pose a unique obstacle in AI governance discussions. OpenAI acknowledges the complexity arising from different regulatory frameworks in different jurisdictions. They stress the importance of considering the local context and values in AI system regulation and response.

OpenAI believes in the importance of safety and security testing in different regions to ensure optimal AI performance. They have launched the Red Teaming Network, inviting submissions from various countries, regions, and sectors. By encouraging diverse perspectives and inputs, OpenAI aims to enhance the safety and security of AI systems.

International institutions like the Internet Governance Forum (IGF) play a crucial role in harmonizing discussions about AI regulation and governance. OpenAI recognizes the contributions of such institutions in defining benchmarks and monitoring progress in AI regulations.

While formulating new standards for AI, OpenAI advocates for building on existing conventions, treaties, and areas of law. They believe that these established frameworks should serve as the foundation for developing comprehensive standards for AI usage and safety.

OpenAI is committed to contributing to discussions and future regulations of AI. They are actively involved in various initiatives and encourage collaboration to address challenges and shape the future of AI in a responsible and safe manner.

In terms of emergency response, OpenAI has an emergency shutdown procedure in place for specific dangerous scenarios. This demonstrates their commitment to safety protocols and risk management. They also leverage geographical cutoffs to deal with imminent threats.

OpenAI emphasizes the importance of human involvement in the development and testing of AI systems. They recognize the value of human-in-the-loop approaches, including the role of humans in red teaming processes and in ensuring the auditability of AI systems.

To address the issue of AI bias, OpenAI suggests the use of synthetic data sets. These data sets can help balance the under-representation of certain regions or genders and fill gaps in language or available information. OpenAI sees the potential in synthetic data sets to tackle some of the challenges associated with AI bias.
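As a minimal illustration of the general idea (not OpenAI's actual pipeline, whose details this session does not describe), the following Python sketch balances an under-represented group by interpolating between real samples, in the spirit of SMOTE-style synthetic oversampling. The function name and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_minority(X_minority, n_new, rng):
    """Create n_new synthetic rows by interpolating between random
    pairs of real minority-group samples (SMOTE-style)."""
    idx_a = rng.integers(0, len(X_minority), size=n_new)
    idx_b = rng.integers(0, len(X_minority), size=n_new)
    t = rng.random((n_new, 1))  # interpolation weights in [0, 1]
    return X_minority[idx_a] + t * (X_minority[idx_b] - X_minority[idx_a])

# Toy feature matrix: 100 rows from a well-represented group,
# only 10 rows from an under-represented one.
X_major = rng.normal(0.0, 1.0, size=(100, 4))
X_minor = rng.normal(3.0, 1.0, size=(10, 4))

# Top up the smaller group so both are equally represented.
X_synth = synthesize_minority(X_minor, n_new=90, rng=rng)
X_balanced = np.vstack([X_major, X_minor, X_synth])
print(X_balanced.shape)  # (200, 4)
```

A real system faces harder questions, such as whether interpolated samples faithfully reflect the missing population, which is one reason the summary above pairs synthetic data with external testing and oversight.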

Standards bodies, research institutions, and government security testers have a crucial role in developing and monitoring AI. OpenAI acknowledges their importance in ensuring the security and accountability of AI systems.

Public-private collaboration is instrumental in ensuring the safety of digital tools. OpenAI recognizes the significance of working on design, reporting, and research aspects to address potential harms and misuse. They emphasize understanding different communities' interactions with these tools to develop effective safety measures.

OpenAI recognizes the need to address the harmful effects of new technologies while acknowledging their potential benefits. They emphasize the urgency to build momentum in addressing the negative impacts of emerging technologies and actively contribute to the international regulatory conversation.

In conclusion, OpenAI's commitment to AI safety is evident through their support for the work of the public sector, civil society, and academia. They emphasize the need to understand new AI capabilities and address unanticipated harms. The private sector has a role to play in capacity building, while standardized language and definitions are crucial in AI conversations. OpenAI stresses the importance of research-based safety measures for technology use by vulnerable groups and protection of labor involved in AI production. They acknowledge the challenges posed by jurisdictional borders in AI governance discussions. OpenAI promotes safety and security testing, encourages public-private collaboration, and advocates for the involvement of humans in AI development and testing. They also highlight the potential of synthetic data sets to address AI bias. International institutions, existing conventions, and standards bodies play a significant role in shaping AI regulations, and OpenAI is actively engaged in contributing to these discussions. Overall, OpenAI's approach emphasizes the importance of responsible and safe AI development and usage for the benefit of society.

Seth Center, Deputy Envoy for Critical and Emerging Technology, U.S. Department of State

AI technology is often compared to electricity in terms of its transformative power. Unlike with electricity, however, there is a growing consensus that governance frameworks for AI should be established promptly rather than decades after the technology's arrival. Governments, such as the US, are embracing a multi-stakeholder approach to developing AI principles and governance. The US government has promoted voluntary commitments in key areas like transparency, security, and trust.

Accountability is a key focus in AI governance, with both hard law and voluntary frameworks being discussed. However, there are concerns and skepticism surrounding the effectiveness of voluntary governance frameworks in ensuring accountability. There is also doubt about the ability of principles alone to achieve accountability.

Despite these challenges, there is broad agreement on the concept of AI governance. Discussions and conversations are viewed as essential and valuable in shaping effective governance frameworks. The aim is for powerful AI developers, whether they are companies or governments, to devote attention to governing AI responsibly. The multi-stakeholder community can play a crucial role in guiding these developers towards addressing society's greatest challenges.

Implementing safeguards in AI is seen as vital for ensuring safety and security. This includes concepts such as red teaming, strict cybersecurity, third-party audits, and public reporting, all aimed at creating accountability and trust. Developers are encouraged to focus on addressing issues like bias and discrimination in AI, aligning with the goal of using AI to tackle society's most pressing problems.

Instituting global AI governance requires patience. A comparison was drawn to the establishment of the International Atomic Energy Agency (IAEA), recognising that such processes can take time. However, there is a need to develop scientific networks for shared risk assessments and to agree on shared standards for evaluation and capabilities.

In terms of decision-making, there is a call for careful yet swift action in AI governance. Governments rely on inputs from various stakeholders, including the technical community and standard-setting bodies, to navigate the complex landscape of AI. Decision-making should not be careless, but the momentum towards establishing effective AI governance should not be slowed down.

In conclusion, while AI technology has the potential to be a transformative force, it is crucial to establish governance frameworks promptly. A multi-stakeholder approach, accountability, and the implementation of safeguards are seen as key components of effective AI governance. Discussions and conversations among stakeholders are believed to be vital in shaping AI governance frameworks. Patience is needed in institutionalizing AI global governance, but decision-making should strike a balance between caution and timely action.

Thobekile Matimbe, Senior Manager, Partnerships and Engagements, Paradigm Initiative

The analysis of the event session highlights several key points made by the speakers. First, it is noted that the Global South is actively working towards establishing regulatory frameworks for managing artificial intelligence. This demonstrates an effort to ensure that AI technologies are used responsibly and with consideration for ethical and legal implications. However, it is also pointed out that there is a lack of inclusivity in the design and application of AI on a global scale. The speakers highlight the fact that centres of power control the knowledge and design of technology, leading to inadequate representation from the Global South in discussions about AI. This lack of inclusivity raises concerns about the potential for bias and discrimination in AI systems.

The analysis also draws attention to the issues of discriminatory practices and surveillance in the Global South related to the use of AI. It is noted that surveillance targeting human rights defenders is a major concern, and there is evidence to suggest that discriminatory practices are indeed a lived reality. These concerns emphasize the need for proper oversight and safeguards to protect individuals from human rights violations arising from the use of AI.

In terms of internet governance, it is highlighted that inclusive processes and accessible platforms are essential for individuals from the Global South to be actively involved in Internet Governance Forums (IGFs). The importance of ensuring the participation of everyone, including marginalized and vulnerable groups, is emphasized as a means of achieving more equitable and inclusive internet governance.

The analysis also emphasizes the need for continued engagement with critical stakeholders and a victim-centered approach in conversations about AI and technology. This approach is necessary to address the adverse impacts of technology and to ensure the promotion and protection of fundamental rights and freedoms. The analysis further underlines the importance of understanding global asymmetries and contexts when discussing AI and technology; recognizing these differences can lead to more informed and effective decision-making.

Another noteworthy observation is the emphasis on the agency of individuals over their fundamental rights and freedoms. The argument is made that human beings should not cede or forfeit their rights to technology, highlighting the need for responsible and human-centered design and application of AI.

Additionally, the analysis highlights the importance of promoting children's and women's rights in the use of AI, as well as centring conversations around environmental rights. These aspects demonstrate the need to consider the broader societal impact of AI beyond just the technical aspects.

In conclusion, the analysis of the event session highlights the ongoing efforts of the Global South in developing regulatory frameworks for AI, but also raises concerns about the lack of inclusivity and potential for discrimination in the design and application of AI globally. The analysis emphasizes the importance of inclusive and participatory internet governance, continued engagement with stakeholders, and a victim-centered approach in conversations about AI. It also underlines the need to understand global asymmetries and contexts and calls for the promotion and protection of fundamental rights and freedoms in the use of AI.

Moderator 1

In her remarks, Maria Paz Canales Lobel stresses the crucial importance of shaping the digital transformation to ensure that artificial intelligence (AI) technologies serve the best interests of humanity. She argues that AI governance should be firmly rooted in the international human rights framework, advocating for the application of human rights principles to guide the regulation and oversight of AI systems.

Canales Lobel proposes a risk-based approach to AI design and development, suggesting that potential risks and harms associated with AI technologies should be carefully identified and addressed from the outset. She emphasises the need for transparency in the development and deployment of AI systems to ensure that they are accountable for any adverse impacts or unintended consequences.

Furthermore, Canales Lobel emphasises the importance of open and inclusive design, development, and use of AI technologies. She argues that AI governance should be shaped through a multi-stakeholder conversation, involving diverse perspectives and expertise, in order to foster a holistic approach to decision-making and policy development. By including a wide range of stakeholders, she believes that the needs and concerns of vulnerable communities, such as children, can be adequately addressed in AI governance.

Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless coordination and cooperation between international and local levels. She suggests that the governance of AI should encompass not only technical standards and regulations but also voluntary guidelines and ethical considerations. She emphasizes the necessity of extending discussions beyond the confines of closed rooms and engaging people from various backgrounds and geopolitical contexts to ensure a comprehensive and inclusive approach.

In conclusion, Canales Lobel underscores the importance of responsible and ethical AI governance that places human rights and the well-being of all individuals at its core. Through her arguments for the integration of human rights principles, the adoption of a risk-based approach, and the promotion of open and inclusive design, development, and use of AI technologies, she presents a nuanced and holistic perspective on effective AI governance. Her emphasis on multi-stakeholder conversations, global collaboration, and the needs of vulnerable communities further contributes to the ongoing discourse on AI ethics and regulation.

Audience

The creation of AI involves different types of labor across the globe, each with its own set of standards and regulations. It is important to recognize that AI systems may be technological in nature, but they require significant human input during development. However, the labor involved in creating AI differs between the Global South and the Western world, suggesting disparities in the resources, expertise, and opportunities available for AI development in different regions.

When it comes to AI-generated disinformation, developing countries face particular challenges. The rise of generative AI has been accompanied by an increase in the spread of misinformation, which poses a significant challenge for developing countries that may lack the resources or infrastructure to effectively counter and mitigate its negative consequences.

On the other hand, developed economies have a responsibility to help create an inclusive digital ecosystem. While countries like Nepal are striving to enter the digital era, they face obstacles in the form of new technologies like AI. This highlights the importance of developed economies providing support and collaboration to ensure that developing countries can also benefit from and participate in the digital revolution.

In terms of regulation, there is no global consensus on how to govern AI and big data. The Internet Governance Forum (IGF) has been grappling with the issue of big data regulation for over a decade without reaching a global agreement. Furthermore, regions such as the US and Europe take different approaches to the data practices of their respective companies. This lack of consensus makes it difficult to establish consistent and effective regulation for AI and big data across the globe.

When it comes to policy-making, it is crucial to consider the protection of future generations, especially children, in discussions related to AI. Advocacy for children's rights and the need to safeguard the interests of future generations have been highlighted in discussions around AI and policy-making. It is important not to overlook or underestimate the impact that AI will have on the lives of children and future generations.

It is worth noting that technical discussions should not neglect simple yet significant considerations, such as addressing the concerns of children in policy-making. These considerations can help achieve inclusive designs that take into account the diverse needs and perspectives of different groups. By incorporating the voices and interests of children, policymakers can create policies that are more equitable and beneficial for all.

In conclusion, the creation and regulation of AI present various challenges and considerations. The differing types of labor involved in AI creation, the struggle to counter AI-generated disinformation in developing countries, the need for developed economies to foster an inclusive digital ecosystem, the absence of a global consensus on regulating AI and big data, and the importance of considering the interests of children in policy-making are all crucial aspects that need to be addressed. It is essential to promote collaboration, dialogue, and comprehensive approaches to ensure that AI is developed and regulated in a manner that benefits society as a whole.

Arisa Ema, Associate Professor, Institute for Future Initiatives, The University of Tokyo

The global discussions on AI governance need to consider different models and structures used across borders. Arisa Ema suggests that transparency and interoperability are crucial elements in these discussions. This is supported by the fact that framework interoperability has been highlighted in the G7 communique, and different countries have their own policies for AI evaluation.

When it comes to risk-based assessments, it is important to consider various aspects and application areas. For example, the level of risk differs across usage scenarios, such as the use of facial recognition systems at airports versus building entrances. Arisa Ema highlights the need to consider who is using AI, who is benefiting from it, and who is at risk.

Inclusivity is another important aspect of AI governance discussions. Arisa Ema urges the inclusion of physically challenged individuals in forums such as the Internet Governance Forum (IGF). She mentions an example of organizing a session where a person in a wheelchair participated remotely using avatar robots. This highlights the potential of technology to include those who may not be able to physically attend sessions.

Arisa Ema also emphasizes the importance of a human-centric approach in AI discussions. She believes that humans are adaptable and resilient, and they play a key role in AI systems. A human-centric approach ensures that AI benefits humanity and aligns with our values and needs.

Furthermore, Arisa Ema sees AI governance as a shared topic of discussion among technologists, policymakers, and the public. She invokes democratic principles to support this stance, emphasizing the importance of involving all stakeholders in shaping AI governance policies and frameworks.

The discussion on AI governance is an ongoing process, according to Arisa Ema. She believes that it is not the end but rather a starting point for exchanges and discussions. It is important to have a shared philosophy or concept in AI governance to foster collaboration and a common understanding among stakeholders.

Overall, the discussion highlighted the need for transparency, interoperability, risk-based assessments, inclusivity, a human-centric approach, and a shared governance framework in AI discussions. Arisa Ema's insights and arguments provide valuable perspectives on these important aspects of AI governance.

Speakers

Speaker             Speech speed          Speech length   Speech time
Arisa Ema           191 words per minute  1515 words      476 secs
Audience            167 words per minute  901 words       324 secs
Clara Neppel        163 words per minute  2258 words      830 secs
James Hairston      171 words per minute  2940 words      1030 secs
Moderator 1         169 words per minute  3813 words      1354 secs
Moderator 2         166 words per minute  711 words       256 secs
Seth Center         157 words per minute  2174 words      829 secs
Thobekile Matimbe   196 words per minute  1382 words      423 secs
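The speech speed column is consistent with the length and time columns: for example, Arisa Ema's 1515 words over 476 seconds works out to 1515 / 476 × 60 ≈ 191 words per minute.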