Main Session on Artificial Intelligence | IGF 2023
Event report
Speakers
- Arisa Ema, Associate Professor, Institute for Future Initiatives, The University of Tokyo
- Clara Neppel, Senior Director, IEEE European Business Operations
- James Hairston, Head of International Policy and Partnerships, OpenAI
- Seth Center, Deputy Envoy for Critical and Emerging Technology, U.S. Department of State
- Thobekile Matimbe, Senior Manager, Partnerships and Engagements, Paradigm Initiative
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Moderator 2
During the discussion, the speakers focused on various aspects of AI regulation and governance. One important point that was emphasized is the need for AI regulation to be inclusive and child-centred. This means that any regulations and governance frameworks should take into account the needs and rights of children. It is crucial to ensure that children are protected and their best interests are considered when it comes to AI technologies.
Furthermore, the audience was encouraged to engage actively in the discussion by asking questions about AI and governance. This shows the importance of public participation and the involvement of various stakeholders in shaping AI policies and regulations. Encouraging questions and dialogue makes for a more inclusive and democratic approach to AI governance.
The potential application of generative AI in the educational system of developing countries, such as Afghanistan, was also explored. Generative AI has the potential to revolutionise education by providing innovative and tailored learning experiences for students. This could be particularly beneficial for developing countries where access to quality education is often a challenge.
Challenges regarding accountability in AI were brought to attention as well. It was highlighted that AI is still not fully understood, and this lack of understanding poses challenges in ensuring accountability for AI systems and their outcomes. The ethical implications of AI making decisions based on non-human generated data were also discussed, raising concerns about the biases and fairness of such decision-making processes.
Another significant concern expressed during the discussion was the need for a plan to prevent AI from getting out of control. As AI technologies advance rapidly, there is a risk of AI systems surpassing human control and potentially causing unintended consequences. It is important to establish robust mechanisms to ensure that AI remains within ethical boundaries and aligns with human values.
The importance of a multi-stakeholder approach in AI development and regulation was stressed. This means involving various stakeholders, including industry experts, policymakers and the public, in the decision-making process. By considering different perspectives and involving all stakeholders, inclusive and effective AI regulation becomes more likely.
Lastly, the idea of incorporating AI technology in the development of government regulatory systems was proposed. This suggests using AI to enhance and streamline the processes of government regulation. By leveraging AI technology, regulatory systems can become more efficient, transparent and capable of addressing emerging challenges in a rapidly changing technological landscape.
Overall, the discussion highlighted the importance of inclusive and child-centred AI regulation and the need for active public participation. It explored the potential of generative AI in education, while also addressing various challenges and concerns related to accountability, ethics and control of AI. The multi-stakeholder approach and the incorporation of AI technology in government regulations were also emphasised as key considerations for effective and responsible AI governance.
Clara Neppel, Senior Director, IEEE European Business Operations
During the discussion on responsible AI governance, the importance of technical standards in supporting effective and responsible AI governance was emphasised. It was noted that IEEE initiated the Ethically Aligned Design initiative, which aimed to develop socio-technical standards, value-based design, and an ethical certification system. Collaboration between IEEE and regulatory bodies such as the Council of Europe and OECD was also mentioned to ensure the alignment of technical standards with responsible AI governance.
The implementation of responsible AI governance was seen as a combination of top-down (regulatory frameworks) and bottom-up (individual level) approaches. Engagement with organizations like the Council of Europe, EU, and OECD for regulation was considered crucial. Efforts to map regulatory requirements to technical standards were also highlighted to bridge the gap between regulatory frameworks and responsible AI governance.
Capacity building in technical expertise and understanding of social legal matters was recognised as a key aspect of responsible AI implementation. The necessity of competency frameworks defining the necessary skills for AI implementation was emphasised. Collaboration with certification bodies for developing an ecosystem to support capacity building was also mentioned.
Efforts to protect vulnerable communities online were a key focus. Examples were given, such as the LEGO Group implementing measures to protect children in their online and virtual environments. Regulatory frameworks like the UK Children's Act were also highlighted as measures taken to protect vulnerable communities online.
The discussion acknowledged that voluntary standards for AI can be effective and adopted by a wide range of actors. Examples were provided, such as UNICEF using IEEE's value-based design approach for a talent-searching system in Africa. The City of Vienna was mentioned as a pilot project for IEEE's AI certification, illustrating the potential for voluntary standards to drive responsible AI governance.
Incentives for adopting voluntary standards were seen to vary. Those mentioned include trust in services, regulatory compliance, risk minimisation, and the potential for a better value proposition. However, the discussion acknowledged that self-regulatory measures have limitations, and there is a need for democratically decided boundaries in responsible AI governance.
Cooperation, feedback mechanisms, standardized reporting, and benchmarking/testing facilities were identified as key factors in achieving global governance of AI. These mechanisms were viewed as necessary for ensuring transparency, accountability, and consistency in the implementation of responsible AI governance.
The importance of global regulation or governance of AI was strongly emphasised. It was compared to the widespread usage of electricity, suggesting that AI usage is similarly pervasive and requires global standards and regulations for responsible implementation.
The need for transparency in understanding AI usage was highlighted. The discussion stressed the importance of clarity regarding how AI is used, incidents it may cause, the data sets involved, and the usage of synthetic data.
While private efforts in AI were recognised, it was emphasised that they should be made more trustworthy and open. Current private efforts were described as voluntary and often closed, underscoring the need for greater transparency and accountability in the private sector's contribution to responsible AI governance.
The discussion also touched upon the importance of agility in governing generative AI. It was suggested that the governance of generative AI, at both organisational and global levels, should be agile enough to adapt to an evolving landscape.
Feedback mechanisms were highlighted as essential for the successful development of foundational models. The discussion emphasised that feedback at all levels is necessary to continuously improve foundational models and align them with responsible AI governance.
High-risk AI applications were identified as needing conformity assessments by independent organizations. This was seen as a way to ensure that these applications meet the necessary ethical and responsible standards.
The comparison of AI governance with the International Atomic Energy Agency (IAEA) was mentioned but deemed difficult because of AI's varied uses and applications. The discussion acknowledged that AI has vast potential across many domains, making a direct comparison with an established institution like the IAEA challenging.
Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies that act as infrastructure. This proposal was seen as a way to enhance responsible governance and decision-making regarding crucial technological developments.
In conclusion, the discussion on responsible AI governance highlighted the significance of technical standards, the need for a combination of top-down and bottom-up approaches, capacity building, protection of vulnerable communities, the effectiveness of voluntary standards, incentives for adoption, the limitations of self-regulatory measures, the role of cooperation and feedback mechanisms in achieving global governance, the importance of transparency and global regulation, the agility of generative AI, and the importance of conformity assessments for high-risk AI applications. Additionally, the proposal for an independent multi-stakeholder panel for crucial technologies was seen as a way to enhance responsible governance.
James Hairston, Head of International Policy and Partnerships, OpenAI
OpenAI is committed to promoting the safety of AI through collaboration with various stakeholders. They acknowledge the significance of the public sector, civil society, and academia in ensuring the safety of AI and support their work in this regard. OpenAI also recognizes the need to understand the capabilities of new AI technologies and address any unforeseen harms that may arise from their use. They strive to improve their AI tools through an iterative approach, constantly learning and making necessary improvements.
In addition to the public sector and civil society, OpenAI emphasizes the role of the private sector in capacity building for research teams. They work towards building the research capacity of civil society and human rights organizations, realizing the importance of diverse perspectives in addressing AI-related issues.
OpenAI highlights the importance of standardized language and concrete definitions in AI conversations. By promoting a common understanding of AI tools, they aim to facilitate effective and meaningful discussions around their development and use.
The safety of technology use by vulnerable groups is a priority for OpenAI. They stress the need for research-based safety measures, leveraging the expertise of child safety experts and similar institutions. OpenAI recognizes that understanding usage patterns and how different groups interact with technology is crucial in formulating effective safety measures.
The protection of labor involved in the production of AI is a significant concern for OpenAI. They emphasize the need for proper compensation and prompt action against any abuses or harms. OpenAI calls for vigilance to ensure fairness and justice in AI, highlighting the role of companies and monitoring groups in preventing abusive work conditions.
Jurisdictional challenges pose a unique obstacle in AI governance discussions. OpenAI acknowledges the complexity arising from different regulatory frameworks in different jurisdictions. They stress the importance of considering the local context and values in AI system regulation and response.
OpenAI believes in the importance of safety and security testing in different regions to ensure optimal AI performance. They have launched the Red Teaming Network, inviting submissions from various countries, regions, and sectors. By encouraging diverse perspectives and inputs, OpenAI aims to enhance the safety and security of AI systems.
International institutions like the Internet Governance Forum (IGF) play a crucial role in harmonizing discussions about AI regulation and governance. OpenAI recognizes the contributions of such institutions in defining benchmarks and monitoring progress in AI regulations.
While formulating new standards for AI, OpenAI advocates for building on existing conventions, treaties, and areas of law. They believe that these established frameworks should serve as the foundation for developing comprehensive standards for AI usage and safety.
OpenAI is committed to contributing to discussions and future regulations of AI. They are actively involved in various initiatives and encourage collaboration to address challenges and shape the future of AI in a responsible and safe manner.
In terms of emergency response, OpenAI has an emergency shutdown procedure in place for specific dangerous scenarios. This demonstrates their commitment to safety protocols and risk management. They also leverage geographical cutoffs to deal with imminent threats.
OpenAI emphasizes the importance of human involvement in the development and testing of AI systems. They recognize the value of human-in-the-loop approaches, including the role of humans in red-teaming processes and in ensuring the auditability of AI systems.
To address the issue of AI bias, OpenAI suggests the use of synthetic data sets. These data sets can help balance the under-representation of certain regions or genders and fill gaps in language or available information. OpenAI sees the potential in synthetic data sets to tackle some of the challenges associated with AI bias.
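The balancing idea described above can be sketched in a few lines. The sketch below is a minimal illustration, not OpenAI's actual method: the `synthesize` callable stands in for a real generative model, and in this toy example it simply copies a seed record.

```python
import random
from collections import Counter

def balance_with_synthetic(records, group_key, synthesize):
    """Oversample under-represented groups with synthetic records
    until every group matches the size of the largest one."""
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())
    balanced = list(records)
    for group, n in counts.items():
        seeds = [r for r in records if r[group_key] == group]
        # Generate synthetic records from real seeds of the same group.
        # In practice, synthesize() would call a generative model.
        balanced.extend(synthesize(random.choice(seeds)) for _ in range(target - n))
    return balanced

# Toy example: the "south" region is under-represented 8:2.
data = [{"region": "north", "text": "a"}] * 8 + [{"region": "south", "text": "b"}] * 2
balanced = balance_with_synthetic(data, "region", lambda seed: dict(seed))
print(Counter(r["region"] for r in balanced))  # both regions now have 8 records
```

The same pattern generalises to balancing languages or genders in a training corpus; the hard part in practice is making the synthetic records faithful, which is where the quality of the generative model matters.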
Standards bodies, research institutions, and government security testers have a crucial role in developing and monitoring AI. OpenAI acknowledges their importance in ensuring the security and accountability of AI systems.
Public-private collaboration is instrumental in ensuring the safety of digital tools. OpenAI recognizes the significance of working on design, reporting, and research aspects to address potential harms and misuse. They emphasize understanding different communities' interactions with these tools to develop effective safety measures.
OpenAI recognizes the need to address the harmful effects of new technologies while acknowledging their potential benefits. They emphasize the urgency to build momentum in addressing the negative impacts of emerging technologies and actively contribute to the international regulatory conversation.
In conclusion, OpenAI's commitment to AI safety is evident through their support for the work of the public sector, civil society, and academia. They emphasize the need to understand new AI capabilities and address unanticipated harms. The private sector has a role to play in capacity building, while standardized language and definitions are crucial in AI conversations. OpenAI stresses the importance of research-based safety measures for technology use by vulnerable groups and protection of labor involved in AI production. They acknowledge the challenges posed by jurisdictional borders in AI governance discussions. OpenAI promotes safety and security testing, encourages public-private collaboration, and advocates for the involvement of humans in AI development and testing. They also highlight the potential of synthetic data sets to address AI bias. International institutions, existing conventions, and standards bodies play a significant role in shaping AI regulations, and OpenAI is actively engaged in contributing to these discussions. Overall, OpenAI's approach emphasizes the importance of responsible and safe AI development and usage for the benefit of society.
Seth Center, Deputy Envoy for Critical and Emerging Technology, U.S. Department of State
AI technology is often compared to electricity in terms of its transformative power. Unlike with electricity, however, there is a growing consensus that governance frameworks for AI should be established promptly rather than decades after the technology's spread. Governments, such as the US, are embracing a multi-stakeholder approach to developing AI principles and governance. The US government has secured voluntary commitments in key areas like transparency, security, and trust.
Accountability is a key focus in AI governance, with both hard law and voluntary frameworks being discussed. However, there are concerns and skepticism surrounding the effectiveness of voluntary governance frameworks in ensuring accountability. There is also doubt about the ability of principles alone to achieve accountability.
Despite these challenges, there is broad agreement on the concept of AI governance. Discussions and conversations are viewed as essential and valuable in shaping effective governance frameworks. The aim is for powerful AI developers, whether they are companies or governments, to devote attention to governing AI responsibly. The multi-stakeholder community can play a crucial role in guiding these developers towards addressing society's greatest challenges.
Implementing safeguards in AI is seen as vital for ensuring safety and security. This includes concepts such as red teaming, strict cybersecurity, third-party audits, and public reporting, all aimed at creating accountability and trust. Developers are encouraged to focus on addressing issues like bias and discrimination in AI, aligning with the goal of using AI to tackle society's most pressing problems.
Instituting global AI governance requires patience. Drawing a comparison with the establishment of the International Atomic Energy Agency (IAEA), it is recognized that the process can take time. In the meantime, there is a need to develop scientific networks for shared risk assessments and to agree on shared standards for evaluation and capabilities.
In terms of decision-making, there is a call for careful yet swift action in AI governance. Governments rely on inputs from various stakeholders, including the technical community and standard-setting bodies, to navigate the complex landscape of AI. Decision-making should not be careless, but the momentum towards establishing effective AI governance should not be slowed down.
In conclusion, while AI technology has the potential to be a transformative force, it is crucial to establish governance frameworks promptly. A multi-stakeholder approach, accountability, and the implementation of safeguards are seen as key components of effective AI governance. Discussions and conversations among stakeholders are believed to be vital in shaping AI governance frameworks. Patience is needed in institutionalizing AI global governance, but decision-making should strike a balance between caution and timely action.
Thobekile Matimbe, Senior Manager, Partnerships and Engagements, Paradigm Initiative
The analysis of the event session highlights several key points made by the speakers. First, it is noted that the Global South is actively working towards establishing regulatory frameworks for managing artificial intelligence. This demonstrates an effort to ensure that AI technologies are used responsibly and with consideration for ethical and legal implications. However, it is also pointed out that there is a lack of inclusivity in the design and application of AI on a global scale. The speakers highlight the fact that centres of power control the knowledge and design of technology, leading to inadequate representation from the Global South in discussions about AI. This lack of inclusivity raises concerns about the potential for bias and discrimination in AI systems.
The analysis also draws attention to the issues of discriminatory practices and surveillance in the Global South related to the use of AI. It is noted that surveillance targeting human rights defenders is a major concern, and there is evidence to suggest that discriminatory practices are indeed a lived reality. These concerns emphasize the need for proper oversight and safeguards to protect individuals from human rights violations arising from the use of AI.
In terms of internet governance, it is highlighted that inclusive processes and accessible platforms are essential for individuals from the Global South to be actively involved in Internet Governance Forums (IGFs). The importance of ensuring the participation of everyone, including marginalized and vulnerable groups, is emphasized as a means of achieving more equitable and inclusive internet governance.
The analysis also emphasizes the need for continued engagement with critical stakeholders and a victim-centered approach in conversations about AI and technology. This approach is necessary to address the adverse impacts of technology and ensure the promotion and protection of fundamental rights and freedoms. Furthermore, the analysis also underlines the importance of understanding global asymmetries and contexts when discussing AI and technology. Recognizing these differences can lead to more informed and effective decision-making.
Another noteworthy observation is the emphasis on the agency of individuals over their fundamental rights and freedoms. The argument is made that human beings should not cede or forfeit their rights to technology, highlighting the need for responsible and human-centered design and application of AI.
Additionally, the analysis highlights the importance of promoting children's and women's rights in the use of AI, as well as centring conversations around environmental rights. These aspects demonstrate the need to consider the broader societal impact of AI beyond just the technical aspects.
In conclusion, the analysis of the event session highlights the ongoing efforts of the Global South in developing regulatory frameworks for AI, but also raises concerns about the lack of inclusivity and potential for discrimination in the design and application of AI globally. The analysis emphasizes the importance of inclusive and participatory internet governance, continued engagement with stakeholders, and a victim-centered approach in conversations about AI. It also underlines the need to understand global asymmetries and contexts and calls for the promotion and protection of fundamental rights and freedoms in the use of AI.
Moderator 1
In her remarks, Maria Paz Canales Lobel stresses the crucial importance of shaping the digital transformation to ensure that artificial intelligence (AI) technologies serve the best interests of humanity. She argues that AI governance should be firmly rooted in the international human rights framework, advocating the application of human rights principles to guide the regulation and oversight of AI systems.
Canales Lobel proposes a risk-based approach to AI design and development, suggesting that potential risks and harms associated with AI technologies should be carefully identified and addressed from the outset. She emphasises the need for transparency in the development and deployment of AI systems to ensure that they are accountable for any adverse impacts or unintended consequences.
Furthermore, Canales Lobel emphasises the importance of open and inclusive design, development, and use of AI technologies. She argues that AI governance should be shaped through a multi-stakeholder conversation, involving diverse perspectives and expertise, in order to foster a holistic approach to decision-making and policy development. By including a wide range of stakeholders, she believes that the needs and concerns of vulnerable communities, such as children, can be adequately addressed in AI governance.
Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless coordination and cooperation between international and local levels. She suggests that the governance of AI should encompass not only technical standards and regulations but also voluntary guidelines and ethical considerations. She emphasizes the necessity of extending discussions beyond the confines of closed rooms and engaging people from various backgrounds and geopolitical contexts to ensure a comprehensive and inclusive approach.
In conclusion, Canales Lobel underscores the importance of responsible and ethical AI governance that places human rights and the well-being of all individuals at its core. Through her arguments for the integration of human rights principles, the adoption of a risk-based approach, and the promotion of open and inclusive design, development, and use of AI technologies, she presents a nuanced and holistic perspective on effective AI governance. Her emphasis on multi-stakeholder conversations, global collaboration, and the needs of vulnerable communities further contributes to the ongoing discourse on AI ethics and regulation.
Audience
The creation of AI involves different types of labor across the globe, each with its own set of standards and regulations. It is important to recognize that AI systems may be technological in nature, but they require significant human input during development. However, the labor involved in creating AI differs between the Global South and the Western world. This suggests that there may be disparities in terms of the resources, expertise, and opportunities available for AI development in different regions.
When it comes to AI-generated disinformation, developing countries face particular challenges. With the rise of generative AI, the spread of misinformation has accelerated. This poses a significant challenge for developing countries, which may lack the resources and infrastructure to effectively counter and mitigate its negative consequences.
On the other hand, developed economies have a responsibility to help create an inclusive digital ecosystem. While countries like Nepal are striving to enter the digital era, they face obstacles in the form of new technologies like AI. This highlights the importance of developed economies providing support and collaboration to ensure that developing countries can also benefit from and participate in the digital revolution.
In terms of regulation, there is no global consensus on how to govern AI and big data. The Internet Governance Forum (IGF) has been grappling with the issue of big data regulation for over a decade without reaching a global agreement. Furthermore, different regions, such as the US and Europe, take different approaches to the data practices of their respective companies. This lack of consensus makes it challenging to establish consistent and effective regulation for AI and big data across the globe.
When it comes to policy-making, it is crucial to consider the protection of future generations, especially children, in discussions related to AI. Advocacy for children's rights and the need to safeguard the interests of future generations have been highlighted in discussions around AI and policy-making. It is important not to overlook or underestimate the impact that AI will have on the lives of children and future generations.
It is worth noting that technical discussions should not neglect simple yet significant considerations, such as addressing the concerns of children in policy-making. These considerations can help achieve inclusive designs that take into account the diverse needs and perspectives of different groups. By incorporating the voices and interests of children, policymakers can create policies that are more equitable and beneficial for all.
In conclusion, the creation and regulation of AI present various challenges and considerations. The differing types of labor involved in AI creation, the struggle to counter AI-generated disinformation in developing countries, the need for developed economies to foster an inclusive digital ecosystem, the absence of a global consensus on regulating AI and big data, and the importance of considering the interests of children in policy-making are all crucial aspects that need to be addressed. It is essential to promote collaboration, dialogue, and comprehensive approaches to ensure that AI is developed and regulated in a manner that benefits society as a whole.
Arisa Ema, Associate Professor, Institute for Future Initiatives, The University of Tokyo
The global discussions on AI governance need to consider different models and structures used across borders. Arisa Ema suggests that transparency and interoperability are crucial elements in these discussions. This is supported by the fact that framework interoperability has been highlighted in the G7 communique, and different countries have their own policies for AI evaluation.
When it comes to risk-based assessments, it is important to consider various aspects and application areas. For example, the level of risk involved in different usage scenarios, such as the use of facial recognition systems at airports or building entrances. Arisa Ema highlights the need to consider who is using AI, who is benefiting from it, and who is at risk.
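The context-dependence of risk described above can be made concrete with a toy scoring sketch. Everything below is invented for illustration (the factors, weights, and thresholds come from no actual framework): the point is only that the same technology, such as facial recognition, lands in different tiers depending on where and how it is deployed, and on who bears the risk.

```python
# Illustrative only: scores and thresholds are hypothetical, not from any framework.
def risk_tier(use_case):
    """Classify an AI deployment by its context, not by the technology alone."""
    impact = use_case["harm_if_wrong"]    # 1 (minor) .. 5 (severe)
    scale = use_case["people_affected"]   # 1 (few) .. 5 (population-wide)
    consent = 0 if use_case["users_opt_in"] else 1
    score = impact * scale + consent
    if score >= 15:
        return "high"
    if score >= 6:
        return "limited"
    return "minimal"

# Same technology (facial recognition), different contexts, different tiers:
airport = {"harm_if_wrong": 4, "people_affected": 5, "users_opt_in": False}
office_door = {"harm_if_wrong": 2, "people_affected": 2, "users_opt_in": True}
print(risk_tier(airport))      # high
print(risk_tier(office_door))  # minimal
```

Real frameworks, such as the EU AI Act's risk categories, classify by enumerated use cases rather than numeric scores, but the underlying logic is the same: assessment follows the deployment context.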
Inclusivity is another important aspect of AI governance discussions. Arisa Ema urges the inclusion of physically challenged individuals in forums such as the Internet Governance Forum (IGF). She mentions an example of organizing a session where a person in a wheelchair participated remotely using avatar robots. This highlights the potential of technology to include those who may not be able to physically attend sessions.
Arisa Ema also emphasizes the importance of a human-centric approach in AI discussions. She believes that humans are adaptable and resilient, and they play a key role in AI systems. A human-centric approach ensures that AI benefits humanity and aligns with our values and needs.
Furthermore, Arisa Ema sees AI governance as a shared topic of discussion among technologists, policymakers, and the public. She uses democratic principles to stress her stance, emphasizing the importance of involving all stakeholders in shaping AI governance policies and frameworks.
The discussion on AI governance is an ongoing process, according to Arisa Ema. She believes that it is not the end but rather a starting point for exchanges and discussions. It is important to have a shared philosophy or concept in AI governance to foster collaboration and a common understanding among stakeholders.
Overall, the discussion highlighted the need for transparency, interoperability, risk-based assessments, inclusivity, a human-centric approach, and a shared governance framework in AI discussions. Arisa Ema's insights and arguments provide valuable perspectives on these aspects of AI governance.
Speakers

Arisa Ema

Speech speed: 191 words per minute
Speech length: 1515 words
Speech time: 476 seconds
Arguments
Global discussions on AI governance need to consider different models and structures used across borders
Supporting facts:
- Arisa Ema suggests that transparency and interoperability are crucial elements
Topics: AI governance, Global Discussions
Risk-based assessments should consider various aspects and application areas
Supporting facts:
- Different levels of risk at different usage scenarios, e.g., facial recognition system at the airport or building entrance
- Need to consider who is using AI, who benefits from it, and who bears the risk
Topics: Risk-Based Assessment, Artificial Intelligence Application
Arisa Ema sees the role of the Internet Governance Forum (IGF) as highly crucial
Supporting facts:
- Arisa organized a session in which a person in a wheelchair participated remotely using avatar robots
- IGF facilitates discussions about AI governance
Topics: Internet Governance Forum, AI governance, inclusive communication
She urges inclusivity in forums such as IGF, particularly the inclusion of physically challenged individuals
Supporting facts:
- Use of avatar robots to include those who may not physically attend sessions
Topics: Inclusivity, AI governance, Accessible technology
Emphasizes on the importance of human-centric approach in AI discussions
Supporting facts:
- Humans are adaptable and resilient, they are key in AI systems
Topics: AI Governance, Human-centric approach
AI governance discussion is important and challenging
Supporting facts:
- AI itself actually kind of changes and evolves and also the situation changes, the environment changes
Topics: AI Governance, AI Evolution
Need for a shared philosophy or concept in AI governance
Supporting facts:
- we need some kind of philosophy or shared concept that we can be united and we can at least collaborate with the same context or the same common understanding
Topics: Shared Understanding, Collaboration, AI Governance
Report
The global discussions on AI governance need to consider different models and structures used across borders. Arisa Ema suggests that transparency and interoperability are crucial elements in these discussions. This is supported by the fact that framework interoperability has been highlighted in the G7 communique, and different countries have their own policies for AI evaluation.
When it comes to risk-based assessments, it is important to consider various aspects and application areas. For example, the level of risk involved in different usage scenarios, such as the use of facial recognition systems at airports or building entrances.
Arisa Ema highlights the need to consider who is using AI, who is benefiting from it, and who is at risk. Inclusivity is another important aspect of AI governance discussions. Arisa Ema urges the inclusion of physically challenged individuals in forums such as the Internet Governance Forum (IGF).
She mentions an example of organizing a session where a person in a wheelchair participated remotely using avatar robots. This highlights the potential of technology to include those who may not be able to physically attend sessions. Arisa Ema also emphasizes the importance of a human-centric approach in AI discussions.
She believes that humans are adaptable and resilient, and they play a key role in AI systems. A human-centric approach ensures that AI benefits humanity and aligns with our values and needs. Furthermore, Arisa Ema sees AI governance as a shared topic of discussion among technologists, policymakers, and the public.
She uses democratic principles to stress her stance, emphasizing the importance of involving all stakeholders in shaping AI governance policies and frameworks. The discussion on AI governance is an ongoing process, according to Arisa Ema. She believes that it is not the end but rather a starting point for exchanges and discussions.
It is important to have a shared philosophy or concept in AI governance to foster collaboration and a common understanding among stakeholders. Overall, the discussion highlighted the need for transparency, interoperability, risk-based assessment, inclusivity, a human-centric approach, and a shared governance framework in AI discussions.
Arisa Ema's insights and arguments provide valuable perspectives on these important aspects of AI governance.
A
Audience
Speech speed
167 words per minute
Speech length
901 words
Speech time
324 secs
Arguments
The creation of AI involves different kinds of labor across the globe
Supporting facts:
- AI systems are technological in nature but involve a lot of human input.
- Labor involved in creating AI is different in the global south and the Western world.
Topics: Artificial Intelligence, Labor, International Work Standards
40% of the population in Nepal and the APEC region is still unconnected.
Supporting facts:
- Developed nations are adopting AI technologies while countries like Nepal are struggling to connect people.
Topics: Internet connectivity, Digital divide
Developing countries are struggling to counter disinformation and misinformation generated by AI.
Supporting facts:
- Generative AI became popular in 2022 leading to issues with misinformation.
Topics: Misinformation, Artificial intelligence
Multi-stakeholder platforms lack the capacity to actually set policies.
Supporting facts:
- Policy influence often comes from multilateral systems.
Topics: Policy making, Multi-stakeholder platforms
There hasn't been a global consensus on how to deal with big data regulation
Supporting facts:
- IGF has been dealing with the issue of big data regulation for over a decade without reaching a global consensus.
- Differences exist between the ways different regions, like the U.S and Europe, are dealing with their companies' data.
Topics: IGF, AI, Big Data, Data Governance, Regulation
Questioning whether companies have an emergency shutdown procedure in place for unforeseen dangers
Supporting facts:
- The audience queried if there's a procedure in place for emergency shutdown in case of imminent danger from corporations.
Topics: Corporate Responsibility, AI, Emergency Procedure
Influencing policy decisions should take into account the consideration of the future generation, especially children, with regards to Artificial Intelligence
Supporting facts:
- Advocacy for the seven-year-old boy from Bangladesh who asked a question about children
- The need for protection of future generations was discussed in a session about cybercrime
Topics: Policy Making, Artificial Intelligence, Child Rights, Future Generations
Report
The creation of AI involves different types of labor across the globe, each with its own set of standards and regulations. It is important to recognize that AI systems may be technological in nature, but they require significant human input during development.
However, the labor involved in creating AI differs between the global south and the Western world. This suggests that there may be disparities in terms of the resources, expertise, and opportunities available for AI development in different regions. When it comes to AI-generated disinformation, developing countries face particular challenges in countering this issue.
With the growing popularity of generative AI, the spread of misinformation has increased. This poses a significant challenge for developing countries, as they may not have the resources or infrastructure to effectively counter and mitigate the negative consequences of AI-generated disinformation.
On the other hand, developed economies have a responsibility to help create an inclusive digital ecosystem. While countries like Nepal are striving to enter the digital era, they face obstacles in the form of new technologies like AI. This highlights the importance of developed economies providing support and collaboration to ensure that developing countries can also benefit from and participate in the digital revolution.
In terms of regulation, there is no global consensus on how to govern AI and big data. The Internet Governance Forum (IGF) has been grappling with the issue of big data regulation for over a decade, without reaching a global agreement.
Furthermore, there are differences in the approaches taken by different regions, such as the US and Europe, to deal with the data practices of their respective companies. This lack of consensus presents challenges in establishing consistent and effective regulation for AI and big data across the globe.
When it comes to policy-making, it is crucial to consider the protection of future generations, especially children, in discussions related to AI. Advocacy for children's rights and the need to safeguard the interests of future generations have been highlighted in discussions around AI and policy-making.
It is important not to overlook or underestimate the impact that AI will have on the lives of children and future generations. It is worth noting that technical discussions should not neglect simple yet significant considerations, such as addressing the concerns of children in policy-making.
These considerations can help achieve inclusive designs that take into account the diverse needs and perspectives of different groups. By incorporating the voices and interests of children, policymakers can create policies that are more equitable and beneficial for all. In conclusion, the creation and regulation of AI present various challenges and considerations.
The differing types of labor involved in AI creation, the struggle to counter AI-generated disinformation in developing countries, the need for developed economies to foster an inclusive digital ecosystem, the absence of a global consensus on regulating AI and big data, and the importance of considering the interests of children in policy-making are all crucial aspects that need to be addressed.
It is essential to promote collaboration, dialogue, and comprehensive approaches to ensure that AI is developed and regulated in a manner that benefits society as a whole.
CN
Clara Neppel
Speech speed
163 words per minute
Speech length
2258 words
Speech time
830 secs
Arguments
Technical standards can support effective, responsible AI governance
Supporting facts:
- IEEE started the Ethically Aligned Design initiative
- IEEE developed socio-technical standards, value-based design, and a certification, an ethical certification system
- Collaboration between IEEE and regulatory bodies like Council of Europe and OECD
Topics: Technical standards, Ethical challenges, AI governance
It's important to build capacity in technical expertise and understanding of social and legal matters for responsible implementation of AI
Supporting facts:
- The necessity of competency framework which defines necessary skills
- Collaboration with certification bodies for developing an ecosystem
Topics: Education, Capacity building, AI implementation
Efforts should be made on voluntary and regulatory levels to protect vulnerable communities online
Supporting facts:
- Lego has implemented measures to protect children in their online and virtual environments
- Example of UK Children's Act: a regulatory framework setting up requirements
Topics: Regulation, Artificial Intelligence, Online protection, Vulnerable Communities
Voluntary standards for AI can be effective and adopted by a wide array of actors
Supporting facts:
- Example of UNICEF using IEEE's value-based design approach for a talent-searching system in Africa (changed initial closed and nontransparent system to an open, agency-giving one)
- City of Vienna is a pilot project for IEEE's AI certification based on trust in services
Topics: AI governance, IEEE standards
Incentives to adopt voluntary standards vary, but can include trust in services, regulatory compliance, risk minimisation, and potential for better value proposition
Topics: AI governance, Incentives
Nonetheless, there is a limit to what self-regulatory measures can achieve and there's a need for democratically-decided boundaries
Topics: AI governance, Regulation, Democracy
Cooperation, feedback mechanisms, standardized reporting, and benchmarking/testing facilities can help in the process of achieving global governance of AI
Topics: AI governance, Cooperation, Standardization, Feedback mechanisms
Need for working groups on defining the quality of synthetic data
Supporting facts:
- Clara Neppel mentioned the establishment of a working group for defining the quality and ethical use of synthetic data
Topics: Synthetic data, AI
Importance of global regulation or global governance of AI
Supporting facts:
- Clara compared AI usage to electricity, suggesting AI usage is similarly widespread, hence requiring global standards and regulations
Topics: AI, Global Regulation
Generative AI must be agile from organizational to global levels
Topics: Generative AI, Agility
Feedback mechanisms at all levels are needed for successful development of foundational models.
Topics: Feedback mechanisms, Foundational models
High-risk AI applications should undergo conformity assessments by independent organizations
Topics: Artificial Intelligence, Risk Management, Assessments
It's difficult to compare AI with the International Atomic Energy Agency (IAEA) due to the various uses of AI
Topics: Artificial Intelligence, International Atomic Agency
A multi-stakeholder, independent panel should be implemented for important technologies which act as an infrastructure
Topics: Infrastructure, Multi-stakeholder Panel, Governance
Report
During the discussion on responsible AI governance, the importance of technical standards in supporting effective and responsible AI governance was emphasised. It was noted that IEEE initiated the Ethically Aligned Design initiative, which aimed to develop socio-technical standards, value-based design, and an ethical certification system.
Collaboration between IEEE and regulatory bodies such as the Council of Europe and OECD was also mentioned to ensure the alignment of technical standards with responsible AI governance. The implementation of responsible AI governance was seen as a combination of top-down (regulatory frameworks) and bottom-up (individual level) approaches.
Engagement with organizations like the Council of Europe, EU, and OECD for regulation was considered crucial. Efforts to map regulatory requirements to technical standards were also highlighted to bridge the gap between regulatory frameworks and responsible AI governance. Capacity building in technical expertise and understanding of social and legal matters was recognised as a key aspect of responsible AI implementation.
The necessity of competency frameworks defining the necessary skills for AI implementation was emphasised. Collaboration with certification bodies for developing an ecosystem to support capacity building was also mentioned. Efforts to protect vulnerable communities online were a key focus. Examples were given, such as the LEGO Group implementing measures to protect children in their online and virtual environments.
Regulatory frameworks like the UK Children's Act were also highlighted as measures taken to protect vulnerable communities online. The discussion acknowledged that voluntary standards for AI can be effective and adopted by a wide range of actors. Examples were provided, such as UNICEF using IEEE's value-based design approach for a talent-searching system in Africa.
The City of Vienna was mentioned as a pilot project for IEEE's AI certification, illustrating the potential for voluntary standards to drive responsible AI governance. In terms of incentives for adopting voluntary standards, they were seen to vary. Some incentives mentioned include trust in services, regulatory compliance, risk minimisation, and the potential for a better value proposition.
However, the discussion acknowledged that self-regulatory measures have limitations, and there is a need for democratically-decided boundaries in responsible AI governance. Cooperation, feedback mechanisms, standardized reporting, and benchmarking/testing facilities were identified as key factors in achieving global governance of AI.
These mechanisms were viewed as necessary for ensuring transparency, accountability, and consistency in the implementation of responsible AI governance. The importance of global regulation or governance of AI was strongly emphasised. It was compared to the widespread usage of electricity, suggesting that AI usage is similarly pervasive and requires global standards and regulations for responsible implementation.
The need for transparency in understanding AI usage was highlighted. The discussion stressed the importance of clarity regarding how AI is used, incidents it may cause, the data sets involved, and the usage of synthetic data. While private efforts in AI were recognised, it was emphasised that they should be made more trustworthy and open.
Current private efforts were described as voluntary and often closed, underscoring the need for greater transparency and accountability in the private sector's contribution to responsible AI governance. The discussion also touched upon the importance of agility when it comes to generative AI.
It was suggested that generative AI at organizational and global levels should be agile to adapt to the evolving landscape of responsible AI governance. Feedback mechanisms were highlighted as essential for the successful development of foundational models. The discussion emphasised that feedback at all levels is necessary to continuously improve foundational models and align them with responsible AI governance.
High-risk AI applications were identified as needing conformity assessments by independent organizations. This was seen as a way to ensure that these applications meet the necessary ethical and responsible standards. The comparison of AI with the International Atomic Energy Agency was mentioned but deemed difficult due to the various uses and applications of AI.
The discussion acknowledged that AI has vast potential in different domains, making it challenging to compare directly with an established institution like the International Atomic Energy Agency. Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies that act as infrastructure.
This proposal was supported by one of the speakers, Clara, and was seen as a way to enhance responsible governance and decision-making regarding crucial technological developments. In conclusion, the discussion on responsible AI governance highlighted the significance of technical standards and the need to combine top-down and bottom-up approaches. It underscored capacity building, the protection of vulnerable communities, the effectiveness of voluntary standards and the incentives for adopting them, and the limitations of self-regulatory measures. It also stressed the role of cooperation and feedback mechanisms in achieving global governance, the importance of transparency and global regulation, the agility required of generative AI, and the need for conformity assessments of high-risk AI applications.
Additionally, the proposal for an independent multi-stakeholder panel for crucial technologies was seen as a way to enhance responsible governance.
JH
James Hairston
Speech speed
171 words per minute
Speech length
2940 words
Speech time
1030 secs
Arguments
Understanding new AI capabilities and addressing unanticipated harms is crucial
Supporting facts:
- OpenAI uses tools like system cards for open red teaming and evaluation.
- They take an iterative approach in learning and improving AI tools.
Topics: Artificial Intelligence, Harms
Private sector plays a role in capacity building for research teams
Supporting facts:
- OpenAI works on building capacity for research teams across civil society and human rights organizations.
Topics: Private sector, Research, Capacity Building
Importance of standardized language and concrete definitions in AI conversations
Supporting facts:
- OpenAI highlights the need for a common understanding and language in discussing AI tools.
Topics: Artificial Intelligence, Language Standardization
Work on safety measures for technology use by vulnerable groups needs to be based on research by child safety experts and similar institutions
Supporting facts:
- James Hairston's experience with OpenAI and virtual and augmented reality
- Mention of usage patterns and understanding of how different groups interact with these technologies
Topics: Child safety, Technology use, Vulnerable groups, Research
Importance of protecting labor involved in the production of AI
Supporting facts:
- Pointing out the need for ensuring proper compensation
- Emphasized on taking prompt actions against abuses and harms
Topics: Artificial Intelligence, Labor Protection, Employment Rights
Consideration of local context and values is crucial in AI system regulation and response.
Supporting facts:
- Launch of a project for democratic inputs to AI
- Concept of unique outputs responsive to local contexts
Topics: Artificial Intelligence, Regulation, Jurisdiction
Challenges faced by the private sector in terms of jurisdictional borders in the offering of services
Supporting facts:
- Different regulatory frameworks in different jurisdictions
- Needs to deal with AI governance discussions at different local, domestic, regional, and global levels
Topics: Artificial Intelligence, Governance, Regulation
Need for participation in discussions on artificial intelligence governance and understanding the variance in the needs for AI tools
Supporting facts:
- Participation in discussions in all regions
- Understanding the differing needs and restraints in different geographies
Topics: Artificial Intelligence, Governance, Regulations
Private sector must contribute to discussions as countries formulate laws on AI usage and safety
Supporting facts:
- Addressing both short-to-medium-term risks and long-term risks
- Ensuring transparency and aiding conversations in as many geographies and communities
Topics: Artificial Intelligence, Law, Governance
International institutions like IGF play a crucial role in harmonizing discussions about what good looks like in terms of AI regulation and governance.
Supporting facts:
- IGF as an international institution can help in defining benchmarks and grading progress for AI regulations
Topics: IGF, AI regulation, AI governance
James Hairston confirmed the presence of an emergency shutdown procedure in his company for specific dangerous scenarios.
Supporting facts:
- Security report and harm reporting are used to ascertain threats.
- The company can turn their tools off by geography if required.
Topics: Data Governance, AI Regulation, Emergency Response
Long-term research in AI should involve humans in the loop for developing and testing AI systems
Supporting facts:
- Importance of Human involvement in development of AI was stressed
- Role of humans in red teaming and auditability was discussed
Topics: AI research, Human-in-the-loop, AI testing, AI development
Use of synthetic data sets might solve some problems of AI bias
Supporting facts:
- Synthetic data sets can help to balance under-representation of certain regions or genders
- They can be used to fill gaps in language or available information
Topics: Synthetic Data sets, AI bias
Synthetic data can be used for various applications like genomics research or understanding climate
Supporting facts:
- Synthetic data has good potential for wide range of applications
Topics: Synthetic Data, Genomics research, Climate understanding
Importance of public and private collaboration for the safety of digital tools
Supporting facts:
- Working on the design, reporting, and research sides is important
- It's essential to understand different communities' interactions with the tools
- Responsive to new research and reports of crime or misuse
Topics: Public-Private Collaboration, Digital Safety, New Technologies
The urgency to build momentum on addressing harmful effects of new technologies while exploiting the unique opportunities they bring
Supporting facts:
- Reference to work on voluntary commitments
- Significance of contributing to the international regulatory conversation
- The importance of identifying potential harms and dangers, while also recognizing the potential benefits of these new capacities
Topics: Digital Safety, New Technologies, Digital Opportunities
Report
OpenAI is committed to promoting the safety of AI through collaboration with various stakeholders. They acknowledge the significance of the public sector, civil society, and academia in ensuring the safety of AI and support their work in this regard. OpenAI also recognizes the need to understand the capabilities of new AI technologies and address any unforeseen harms that may arise from their use.
They strive to improve their AI tools through an iterative approach, constantly learning and making necessary improvements. In addition to the public sector and civil society, OpenAI emphasizes the role of the private sector in capacity building for research teams.
They work towards building the research capacity of civil society and human rights organizations, realizing the importance of diverse perspectives in addressing AI-related issues. OpenAI highlights the importance of standardized language and concrete definitions in AI conversations. By promoting a common understanding of AI tools, they aim to facilitate effective and meaningful discussions around their development and use.
The safety of technology use by vulnerable groups is a priority for OpenAI. They stress the need for research-based safety measures, leveraging the expertise of child safety experts and similar institutions. OpenAI recognizes that understanding usage patterns and how different groups interact with technology is crucial in formulating effective safety measures.
The protection of labor involved in the production of AI is a significant concern for OpenAI. They emphasize the need for proper compensation and prompt action against any abuses or harms. OpenAI calls for vigilance to ensure fairness and justice in AI, highlighting the role of companies and monitoring groups in preventing abusive work conditions.
Jurisdictional challenges pose a unique obstacle in AI governance discussions. OpenAI acknowledges the complexity arising from different regulatory frameworks in different jurisdictions. They stress the importance of considering the local context and values in AI system regulation and response. OpenAI believes in the importance of safety and security testing in different regions to ensure optimal AI performance.
They have launched the Red Teaming Network, inviting submissions from various countries, regions, and sectors. By encouraging diverse perspectives and inputs, OpenAI aims to enhance the safety and security of AI systems. International institutions like the Internet Governance Forum (IGF) play a crucial role in harmonizing discussions about AI regulation and governance.
OpenAI recognizes the contributions of such institutions in defining benchmarks and monitoring progress in AI regulations. While formulating new standards for AI, OpenAI advocates for building on existing conventions, treaties, and areas of law. They believe that these established frameworks should serve as the foundation for developing comprehensive standards for AI usage and safety.
OpenAI is committed to contributing to discussions and future regulations of AI. They are actively involved in various initiatives and encourage collaboration to address challenges and shape the future of AI in a responsible and safe manner. In terms of emergency response, OpenAI has an emergency shutdown procedure in place for specific dangerous scenarios.
This demonstrates their commitment to safety protocols and risk management. They also leverage geographical cutoffs to deal with imminent threats. OpenAI emphasizes the importance of human involvement in the development and testing of AI systems. They recognize the value of human-in-the-loop approaches, including the role of humans in red teaming processes and ensuring auditability in AI systems.
To address the issue of AI bias, OpenAI suggests the use of synthetic data sets. These data sets can help balance the under-representation of certain regions or genders and fill gaps in language or available information. OpenAI sees the potential in synthetic data sets to tackle some of the challenges associated with AI bias.
Standards bodies, research institutions, and government security testers have a crucial role in developing and monitoring AI. OpenAI acknowledges their importance in ensuring the security and accountability of AI systems. Public-private collaboration is instrumental in ensuring the safety of digital tools.
OpenAI recognizes the significance of working on design, reporting, and research aspects to address potential harms and misuse. They emphasize understanding different communities' interactions with these tools to develop effective safety measures. OpenAI recognizes the need to address the harmful effects of new technologies while acknowledging their potential benefits.
They emphasize the urgency to build momentum in addressing the negative impacts of emerging technologies and actively contribute to the international regulatory conversation. In conclusion, OpenAI's commitment to AI safety is evident through their support for the work of the public sector, civil society, and academia.
They emphasize the need to understand new AI capabilities and address unanticipated harms. The private sector has a role to play in capacity building, while standardized language and definitions are crucial in AI conversations. OpenAI stresses the importance of research-based safety measures for technology use by vulnerable groups and protection of labor involved in AI production.
They acknowledge the challenges posed by jurisdictional borders in AI governance discussions. OpenAI promotes safety and security testing, encourages public-private collaboration, and advocates for the involvement of humans in AI development and testing. They also highlight the potential of synthetic data sets to address AI bias.
International institutions, existing conventions, and standards bodies play a significant role in shaping AI regulations, and OpenAI is actively engaged in contributing to these discussions. Overall, OpenAI's approach emphasizes the importance of responsible and safe AI development and usage for the benefit of society.
M1
Moderator 1
Speech speed
169 words per minute
Speech length
3813 words
Speech time
1354 secs
Arguments
Maria Paz Canales Lobel emphasizes the importance of shaping the digital transformation to ensure that artificial intelligence technologies work for the good of humanity
Supporting facts:
- AI has become increasingly prevalent in daily life
- The societal impact of AI requires that its use and deployment is handled responsibly and ethically
Topics: Artificial Intelligence, Digital Transformation
Maria Paz Canales Lobel supports open and inclusive design, development, and use of AI technologies, with accountability for risk and harm
Supporting facts:
- She identifies these as part of the proposed five principles for guiding AI governance discussions
- She stresses the need for transparency in AI development and deployment
Topics: Artificial Intelligence, Accountability, Inclusive Design
Maria Paz Canales Lobel stresses the importance of inclusivity in AI governance, particularly for vulnerable communities such as children
Supporting facts:
- Young boy Omar from Bangladesh asked a question about child-centered AI governance
- She suggests to consider the needs of different vulnerable communities while designing policies
Topics: AI Governance, Inclusivity, Children, Vulnerable Communities
Value of a multi-stakeholder conversation in AI governance
Supporting facts:
- Inclusive participation of different stakeholders
- Engage people outside the room
- Different perspectives to evaluate different options
Topics: AI governance, Inclusivity, Diversity
Call for a measured approach to AI governance
Supporting facts:
- Not to hurry in AI governance
- Importance of listening different perspective
- Evaluation of different options for AI governance
Topics: AI governance, Regulation, Ethics
Report
In her remarks, Maria Paz Canales Lobel stresses the crucial importance of shaping the digital transformation to ensure that artificial intelligence (AI) technologies serve the best interests of humanity. She argues that AI governance should be firmly rooted in the international human rights framework, advocating for the application of human rights principles to guide the regulation and oversight of AI systems.
Canales Lobel proposes a risk-based approach to AI design and development, suggesting that potential risks and harms associated with AI technologies should be carefully identified and addressed from the outset. She emphasises the need for transparency in the development and deployment of AI systems to ensure that they are accountable for any adverse impacts or unintended consequences.
Furthermore, Canales Lobel emphasises the importance of open and inclusive design, development, and use of AI technologies. She argues that AI governance should be shaped through a multi-stakeholder conversation, involving diverse perspectives and expertise, in order to foster a holistic approach to decision-making and policy development.
By including a wide range of stakeholders, she believes that the needs and concerns of vulnerable communities, such as children, can be adequately addressed in AI governance. Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless coordination and cooperation between international and local levels.
She suggests that the governance of AI should encompass not only technical standards and regulations but also voluntary guidelines and ethical considerations. She emphasises the necessity of extending discussions beyond the confines of closed rooms and engaging people from various backgrounds and geopolitical contexts to ensure a comprehensive and inclusive approach.
In conclusion, Canales Lobel underscores the importance of responsible and ethical AI governance that places human rights and the well-being of all individuals at its core. Through her arguments for the integration of human rights principles, the adoption of a risk-based approach, and the promotion of open and inclusive design, development, and use of AI technologies, she presents a nuanced and holistic perspective on effective AI governance.
Her emphasis on multi-stakeholder conversations, global collaboration, and the needs of vulnerable communities further contributes to the ongoing discourse on AI ethics and regulation.
M2
Moderator 2
Speech speed
166 words per minute
Speech length
711 words
Speech time
256 secs
Arguments
AI regulation and governance should be inclusive and child-centered
Supporting facts:
- Question was asked by Omar Farooq, a young boy from Bangladesh.
Topics: AI, Regulation, Governance, Inclusiveness, Children's rights
Generative AI could potentially be applied in the educational system of developing countries like Afghanistan
Topics: Artificial Intelligence, Education, Developing countries
There are challenges concerning accountability in AI as it is not fully understood
Topics: Artificial Intelligence, Accountability
We need a plan to ensure AI does not get out of control
Topics: Artificial Intelligence, Control
Seth should consider all stakeholders in AI development and regulation
Supporting facts:
- Christian Guillen questioned Seth about stakeholder involvement
Topics: AI governance, Multi-stakeholder approach
Report
During the discussion, the speakers focused on various aspects of AI regulation and governance. One important point that was emphasized is the need for AI regulation to be inclusive and child-centred. This means that any regulations and governance frameworks should take into account the needs and rights of children.
It is crucial to ensure that children are protected and their best interests are considered when it comes to AI technologies. Furthermore, the audience was encouraged to actively engage in the discussion by asking questions about AI and governance. This shows the importance of public participation and the involvement of various stakeholders in shaping AI policies and regulations.
By encouraging questions and dialogue, it allows for a more inclusive and democratic approach to AI governance. The potential application of generative AI in the educational system of developing countries, such as Afghanistan, was also explored. Generative AI has the potential to revolutionise education by providing innovative and tailored learning experiences for students.
This could be particularly beneficial for developing countries where access to quality education is often a challenge. Challenges regarding accountability in AI were brought to attention as well. It was highlighted that AI is still not fully understood, and this lack of understanding poses challenges in ensuring accountability for AI systems and their outcomes.
The ethical implications of AI making decisions based on non-human generated data were also discussed, raising concerns about the biases and fairness of such decision-making processes. Another significant concern expressed during the discussion was the need for a plan to prevent AI from getting out of control.
As AI technologies advance rapidly, there is a risk of AI systems surpassing human control and potentially causing unintended consequences. It is important to establish robust mechanisms to ensure that AI remains within ethical boundaries and aligns with human values.
The importance of a multi-stakeholder approach in AI development and regulation was stressed. This means involving various stakeholders, including industry experts, policymakers and the public, in the decision-making process. By considering different perspectives and involving all stakeholders, inclusive and effective AI regulation becomes more achievable.
Lastly, the idea of incorporating AI technology in the development of government regulatory systems was proposed. This suggests using AI to enhance and streamline the processes of government regulation. By leveraging AI technology, regulatory systems can become more efficient, transparent and capable of addressing emerging challenges in a rapidly changing technological landscape.
Overall, the discussion highlighted the importance of inclusive and child-centred AI regulation and the need for active public participation. It explored the potential of generative AI in education, while also addressing various challenges and concerns related to accountability, ethics and control of AI.
The multi-stakeholder approach and the incorporation of AI technology in government regulations were also emphasised as key considerations for effective and responsible AI governance.
SC
Seth Center
Speech speed
157 words per minute
Speech length
2174 words
Speech time
829 secs
Arguments
AI technology is a transformative force, but its governance framework should not wait for several decades as it did in the case of electricity
Supporting facts:
- The analogy of AI to electricity as a transformative force
- The delay in regulatory framework for electricity after its discovery
Topics: Artificial Intelligence, Regulation, Governance
GPT has raised new, profound questions about safety, security, and risk in AI
Topics: Safety, Security, Risk, AI
Every single governance question ultimately comes down to accountability.
Supporting facts:
- A hard-law framework comes down to accountability; the challenge is figuring out what to measure in order to apply hard law.
Topics: Governance, Accountability
Skepticism around voluntary governance frameworks due to the subject of accountability.
Supporting facts:
- Skepticism comes back to the question of accountability in governance frameworks that are voluntary.
Topics: Voluntary Governance, Accountability, Skepticism
Amid increasing skepticism, principles alone may not achieve accountability.
Topics: Principles, Accountability, Skepticism
AI technologies should be employed to address society's most important issues
Supporting facts:
- Powerful AI developers, whether they're companies or governments, should be encouraged to devote time and attention to govern AI responsibly
- The multi-stakeholder community can direct these developers through conversation and pressure to direct their efforts towards society's greatest challenges
Topics: Artificial Intelligence, Societal Challenges, Government
We should encourage the energy in all of these forums, whether it is the UK Safety Summit, the G7 Hiroshima process, or the UN's High-Level Advisory Body on AI.
Supporting facts:
- We are at the early stages of the next era of AI, and we need all of these conversations at this point in time
Topics: Artificial Intelligence, Discussion forums, Multi-stakeholder conversation, UK Safety Summit, G7 Hiroshima process, UN High-Level Advisory Body on AI
Consensus exists on AI governance
Supporting facts:
- Seth Center believes there is a broad agreement on the concept of AI governance, despite challenges in enforcement mechanisms.
- The idea of AI governance includes the establishment of safeguards, red teaming, information sharing, cybersecurity, third-party audits, public reporting, and research prioritisation.
Topics: AI governance, Enforcement
Implementation of safeguards in AI
Supporting facts:
- Seth Center suggests that internal and external red teaming should be a part of AI governance.
- There should be a shared understanding among developers and strict cybersecurity for model weights.
- There also needs to be provision for third-party audits and public reporting for the sake of accountability.
Topics: AI safeguards, Accountability
IAEA is an imperfect analogy for the AI technology situation
Supporting facts:
- Predominance of private sector developers of AI vs state-based questions about nuclear control
- The ease of verification, and what to verify and track, are quite different
Topics: AI, Nuclear control, Global governance, IAEA
Need for patience in institutionalising AI global governance
Supporting facts:
- Time taken between 1945 and 1957 for establishment of IAEA
Topics: AI, Global governance
AI governance should be handled quickly but not carelessly
Supporting facts:
- Seth Center quoting a famous basketball coach
Topics: AI governance
Report
AI technology is often compared to electricity in terms of its transformative power. However, unlike electricity, there is a growing consensus that governance frameworks for AI should be established promptly rather than waiting for several decades. Governments, such as the US, are embracing a multi-stakeholder approach to developing AI principles and governance.
The US government has made voluntary commitments in key areas like transparency, security, and trust. Accountability is a key focus in AI governance, with both hard law and voluntary frameworks being discussed. However, there are concerns and skepticism surrounding the effectiveness of voluntary governance frameworks in ensuring accountability.
There is also doubt about the ability of principles alone to achieve accountability. Despite these challenges, there is broad agreement on the concept of AI governance. Discussions and conversations are viewed as essential and valuable in shaping effective governance frameworks.
The aim is for powerful AI developers, whether they are companies or governments, to devote attention to governing AI responsibly. The multi-stakeholder community can play a crucial role in guiding these developers towards addressing society's greatest challenges. Implementing safeguards in AI is seen as vital for ensuring safety and security.
This includes concepts such as red teaming, strict cybersecurity, third-party audits, and public reporting, all aimed at creating accountability and trust. Developers are encouraged to focus on addressing issues like bias and discrimination in AI, aligning with the goal of using AI to tackle society's most pressing problems.
The idea of instituting AI global governance requires patience. Drawing a comparison to the establishment of the International Atomic Energy Agency (IAEA), it is recognised that the process can take time. However, there is a need to develop scientific networks for shared risk assessments and agree on shared standards for evaluation and capabilities.
In terms of decision-making, there is a call for careful yet swift action in AI governance. Governments rely on inputs from various stakeholders, including the technical community and standard-setting bodies, to navigate the complex landscape of AI. Decision-making should not be careless, but the momentum towards establishing effective AI governance should not be slowed down.
In conclusion, while AI technology has the potential to be a transformative force, it is crucial to establish governance frameworks promptly. A multi-stakeholder approach, accountability, and the implementation of safeguards are seen as key components of effective AI governance. Discussions and conversations among stakeholders are believed to be vital in shaping AI governance frameworks.
Patience is needed in institutionalising AI global governance, but decision-making should strike a balance between caution and timely action.
TM
Thobekile Matimbe
Speech speed
196 words per minute
Speech length
1382 words
Speech time
423 secs
Arguments
Global South is making efforts to develop regulatory frameworks for managing artificial intelligence
Supporting facts:
- Data protection laws have been implemented
- Work is ongoing at the national level to develop artificial intelligence strategies
Topics: Artificial Intelligence, Regulatory Frameworks, Global South
There is a lack of inclusivity in the design and application of artificial intelligence globally
Supporting facts:
- Centres of power control the knowledge and design of technology
- There is inadequate representation from the Global South in these discussions
Topics: Inclusivity, Artificial Intelligence
Discriminatory practices and surveillance are observed with the use of AI in the Global South
Supporting facts:
- Surveillance targeting human rights defenders is a major concern
- Discriminatory practices accompanying the use of AI are a lived reality
Topics: AI, Discrimination, Surveillance, Global South
Need for inclusive processes and accessible platforms for individuals from the Global South in IGF
Supporting facts:
- Some colleagues not being able to attend due to visa issues
Topics: IGF, Inclusion, Global South
Continued engagement with critical stakeholders and a victim-centred approach in conversations
Topics: IGF, Engagement, Stakeholders
Understanding of the global asymmetries and context is vital
Supporting facts:
- Different contexts between Global North and Global South
Topics: Global asymmetries, Context
Human beings should not cede or forfeit their rights to technology
Supporting facts:
- Emphasized the importance of agency over fundamental rights and freedoms
Topics: AI, Human Rights, Technology
Conversation around environmental rights should also be centred
Topics: AI, Environmental Rights
Report
The analysis of the event session highlights several key points made by the speakers. First, it is noted that the Global South is actively working towards establishing regulatory frameworks for managing artificial intelligence. This demonstrates an effort to ensure that AI technologies are used responsibly and with consideration for ethical and legal implications.
However, it is also pointed out that there is a lack of inclusivity in the design and application of AI on a global scale. The speakers highlight the fact that centres of power control the knowledge and design of technology, leading to inadequate representation from the Global South in discussions about AI.
This lack of inclusivity raises concerns about the potential for bias and discrimination in AI systems. The analysis also draws attention to the issues of discriminatory practices and surveillance in the Global South related to the use of AI. It is noted that surveillance targeting human rights defenders is a major concern, and there is evidence to suggest that discriminatory practices are indeed a lived reality.
These concerns emphasise the need for proper oversight and safeguards to protect individuals from human rights violations arising from the use of AI. In terms of internet governance, it is highlighted that inclusive processes and accessible platforms are essential for individuals from the Global South to be actively involved in Internet Governance Forums (IGFs).
The importance of ensuring the participation of everyone, including marginalised and vulnerable groups, is emphasised as a means of achieving more equitable and inclusive internet governance. The analysis also emphasises the need for continued engagement with critical stakeholders and a victim-centred approach in conversations about AI and technology.
This approach is necessary to address the adverse impacts of technology and ensure the promotion and protection of fundamental rights and freedoms. Furthermore, the analysis also underlines the importance of understanding global asymmetries and contexts when discussing AI and technology.
Recognising these differences can lead to more informed and effective decision-making. Another noteworthy observation is the emphasis on the agency of individuals over their fundamental rights and freedoms. The argument is made that human beings should not cede or forfeit their rights to technology, highlighting the need for responsible and human-centred design and application of AI.
Additionally, the analysis highlights the importance of promoting children's and women's rights in the use of AI, as well as centring conversations around environmental rights. These aspects demonstrate the need to consider the broader societal impact of AI beyond just the technical aspects.
In conclusion, the analysis of the event session highlights the ongoing efforts of the Global South in developing regulatory frameworks for AI, but also raises concerns about the lack of inclusivity and potential for discrimination in the design and application of AI globally.
The analysis emphasizes the importance of inclusive and participatory internet governance, continued engagement with stakeholders, and a victim-centered approach in conversations about AI. It also underlines the need to understand global asymmetries and contexts and calls for the promotion and protection of fundamental rights and freedoms in the use of AI.