Policy Network on Artificial Intelligence | IGF 2023
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Sarayu Natarajan
Generative AI, a powerful technology that makes content generation easy and cheap, has enabled the widespread production and dissemination of misinformation and disinformation. This harms society, as false information can be created and spread through the internet and digital platforms with little friction. The rule of law plays a crucial role in curbing this spread, and concrete legal protections are necessary to address the issue effectively.
Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation. This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework.
The labour-intensive task of data labelling, crucial for the functioning of generative AI, is often outsourced to workers in the Global South. These workers primarily label data according to categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.
Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.
Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance. It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical. This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications.
While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs. Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.
In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, posing negative effects on society. However, the rule of law, through proper legal protections, plays a significant role in curbing the spread of false information. A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue. Inclusivity, diversity, and mutual understanding between AI technology and policy domains are crucial considerations in the development and governance of AI. It is essential to closely monitor the impact of AI on job loss and ensure fair working conditions for all.
Shamira Ahmed
The analysis focuses on several key themes related to AI and its impact on various domains, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance at the intersection of AI and the environment, a broad area that requires attention and effective management.
Moving on, the analysis advocates for a decolonial-informed approach to address power imbalances and historical injustices in AI. It emphasizes the need to acknowledge and rectify historical injustices that have shaped the global power dynamics related to AI. By adopting a decolonial approach, it is believed that these injustices can be addressed and a more equitable and just AI landscape can be achieved.
Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future. This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion. It emphasizes the need for a balanced approach that takes into account the needs of the environment and all stakeholders involved.
In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations. It emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard. By doing so, it aims to create an AI governance framework that is inclusive, fair, and capable of addressing the challenges associated with historical injustices.
Overall, the analysis provides important insights into the complex relationship between AI and various domains. It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.
Audience
The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation. Misinformation and the abundance of information sources further exacerbate the challenges in capacity building.
The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape. This education should be accessible to all, regardless of their age or background.
Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI. This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence.
Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations. Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems.
The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools. The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology.
The process of contributing to AI training and education through UNESCO was discussed, highlighting the involvement of a UNESCO member who distributes AI research materials and textbooks to universities, particularly in developing countries. This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all.
The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance. This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations.
Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios. This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use.
In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology. The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance. The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.
Nobuo Nishigata
The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.
Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South. This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance.
The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities. It underscores the significance of discussing uncertainties and potential risks associated with AI, alongside the numerous opportunities it presents. Additionally, it highlights the potential of AI to significantly contribute to addressing economic and labour issues, as evidenced by Japan considering AI as a solution to its declining labour force and sustaining its economy.
Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens. This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.
The analysis also provides insights into the ongoing Hiroshima process focused on generative AI. It highlights that discussions within the G7 delegation task force are centred around a code of conduct from the private sector. Notably, the report suggests support for this approach, emphasising the importance of a code of conduct in addressing concerns such as misinformation and disinformation.
Flexibility and adaptability in global AI governance are advocated for in the analysis. It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.
Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems. The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building.
The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems is also highlighted. These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically.
In addition to these main points, the analysis touches upon the significance of education and harmonisation. It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.
Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices. It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.
Jose
The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.
A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry. In Brazil, for instance, deaths among motorcycle drivers on delivery platforms have risen sharply, a trend attributed in part to the pressure of the delivery times demanded by new platforms. This highlights the need to address the adverse effects of tech advancements on workers’ well-being.
The impact of the tech industry on sustainability is another topic of debate in the analysis. There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.
The use of biometric systems for surveillance purposes comes under scrutiny as well. In Brazil, the analysis reveals that the criminal system’s structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.
There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology. This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts.
The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives. This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies.
Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies. Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes.
The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes. This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology.
Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups. The example of the Yanomami people in the Brazilian Amazon suffering due to the activities of illegal gold miners underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations.
Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies. This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry.
In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders. It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.
Moderator – Prateek
The Policy Network on Artificial Intelligence (P&AI) is a newly established policy network that focuses on matters related to AI and data governance. It originated from discussions held at IGF 2022 in Addis Ababa, and it has recently released its first report.
The first report produced by the P&AI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.
One of the noteworthy aspects of the P&AI is its working spirit and commitment to a multi-stakeholder approach. The working group of P&AI was formed in the true spirit of multi-stakeholderism at the IGF, and they collaborated closely to draft this first report. This approach ensures diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.
Prateek, interested in the connection between AI governance and internet governance, sought insights on the interoperability of the two domains. To better understand the implications of internet governance for AI governance, he asked Professor Xing Li to compare the two in terms of interoperability.
During discussions, Jose highlighted the need for a deeper understanding of local challenges faced in the Global South in relation to AI. This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions. Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing their specific challenges.
In the realm of AI training and education, Prateek mentioned UNESCO’s interest in expanding its initiatives in this area. This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programs in AI.
In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work. This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs.
In conclusion, the Policy Network on Artificial Intelligence (P&AI) is a policy network that aims to address AI and data governance matters. Their first report focuses on various aspects of AI governance, including gender and race inclusion and a just twin transition. Their multi-stakeholder approach ensures diverse perspectives are considered. Discussions during the analysis highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connectivity between AI and internet governance. Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.
Maikki Sipinen
The Policy Network on Artificial Intelligence (P&AI) is a relatively new initiative that focuses on addressing policy matters related to AI and data governance. It emerged from discussions held at the IGF 2022 meeting in Addis Ababa last year, where the importance of these topics was emphasised. The P&AI report, created with the dedication of numerous individuals, including the drafting team leaders, demonstrates the significance of IGF meetings as catalysts for new initiatives like P&AI.
One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions. The reasoning is to build, among both citizens and the labour force, the knowledge and skills required to navigate the intricacies of AI. The report points to the success of the Finnish AI strategy, highlighting how it managed to train over 2% of the Finnish population in the basics of AI within a year. This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities.
Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance. The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.
Diversity and inclusion also feature prominently in the report’s arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.
Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance. It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.
In conclusion, the P&AI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.
Owen Larter
The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals). Microsoft has established a Responsible AI Standard to guide their AI initiatives, demonstrating their commitment to ethical practices.
Owen, another speaker in the analysis, emphasises the importance of transparency, fairness, and inclusivity in AI development. He advocates for involving diverse representation in technology design and implementation. To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.
Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.
However, it is crucial to strike a balance between openness and safety/security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. The analysis suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.
Furthermore, the need for a globally coherent framework for AI governance is emphasised. The advancement of AI technology necessitates establishing robust regulations to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance.
Standards setting is proposed as an integral part of the future governance framework. Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.
Understanding and reaching consensus on the risks associated with AI is also deemed critical. The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.
Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities. Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.
Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies. Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.
Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.
The analysis also underscores the significance of capacity building and actions in driving AI development forward. Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.
In conclusion, responsible AI development requires a multi-faceted approach. It involves developing AI in a sustainable, inclusive, and globally governed manner, promoting transparency and fairness, and striking a balance between openness and safety/security. It also necessitates the establishment of a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.
Xing Li
The analysis explores various aspects of AI governance, regulations for generative AI, the impact of generative AI on the Global South, and the need for new educational systems in the AI age. In terms of AI governance, the study suggests that it can learn from internet governance, which features organisations such as the IETF for technical interoperability, the regional internet registries for number assignment, and ICANN for name administration. The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance.
The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation. It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.
The analysis also highlights the opportunities and challenges presented by generative AI for the Global South. Generative AI, built on algorithms, computing power, and data, has the potential to create new opportunities for development. However, it also poses challenges that need to be addressed to fully leverage its benefits.
Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age. Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.
Finally, the analysis supports the establishment of a global AI-related education system. This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.
In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulations to foster innovation in generative AI, addressing the opportunities and challenges of generative AI in the Global South, and reimagining education systems for the AI age. These insights provide valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on various aspects of society.
Jean Francois ODJEBA BONBHEL
The analysis provides different perspectives on the development and implementation of artificial intelligence (AI). One viewpoint emphasizes the need to balance the benefits and risks of AI. It argues for the importance of considering and mitigating potential risks while maximizing the advantages offered by AI.
Another perspective highlights the significance of accountability in AI control. It stresses the need to have mechanisms in place that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior.
Education is also emphasized as a key aspect of AI development and understanding. The establishment of a specialized AI school in Congo at all educational levels is cited as evidence of the importance placed on educating individuals about AI. This educational focus aims to provide people with a deeper understanding of AI and equip them with the necessary skills to navigate the rapidly evolving technological landscape.
The analysis suggests that AI development should be approached with careful consideration of risks and benefits, control mechanisms, and education. By adopting a comprehensive approach that addresses these elements, AI can be developed and implemented responsibly and sustainably.
A notable observation from the analysis is the emphasis on AI education for children. A program designed for children aged 6 to 17 develops their cognitive skills with technology and AI. Its focus extends beyond making children technology experts; it aims to equip them with the understanding and skills needed to thrive in a future dominated by technology.
Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future generations. The proposed solution involves providing options, solutions, and education on technology to empower young people and prepare them for the technologically advanced world they will inhabit.
In conclusion, the analysis underscores the importance of striking a balance between the benefits and risks of AI, ensuring accountability in AI control, and promoting education for a better understanding and access to AI innovations. By considering these facets, the responsible and empowering development and implementation of AI can be achieved to navigate the evolving technological landscape effectively.
Session transcript
Moderator – Prateek:
Good morning, everyone. To those who have made it early in the morning, after long days and long karaoke nights that all of us have been having here in Kyoto, welcome to this session on the launch of the report of the Policy Network on Artificial Intelligence, which was set up by the IGF. I would briefly mention the names of the esteemed panelists here before handing the floor to Maikki to introduce the Policy Network a bit. We have Mr. Nobuo Nishigata from the Japanese Ministry as the representative of the host country with us. We have Maikki Sipinen, who is, how do you say, the editor for this report. With us we have Jose Renato, who is joining us from Brazil. We have Sarayu Natarajan, who is the co-founder of Aapti Institute in India. We have Professor Xing Li from Tsinghua University in China. We have Mr. Owen Larter from Microsoft. And we have Jean-Francois Bombel, who is an expert on artificial intelligence and capacity building, I must say. And I am Prateek Sibal. I’m a program specialist at UNESCO. And I would also like to recognize our online moderator, Ms. Shamira. We don’t see her yet, but she will also be joining us in the discussion, especially on the work that she’s been doing on environment. And the two co-facilitators for this work are with us, Amrita Chaudhary and Odas, who are here. So Maikki, I’ll pass the floor to you first to introduce what were the reasons for setting up this working group. How did the work progress? What is it that the multi-stakeholder community at the IGF was able to achieve? So, over to you.
Maikki Sipinen:
Thanks, Prateek, and a warm welcome to this early morning session to all of you, also on behalf of the P&AI community. My name is Maikki Sipinen, and I’m the coordinator of P&AI. I’m not going to take too much time away from our expert panelists describing the process that led us here, but it is important to know that the P&AI is a really new thing. It’s only an about six-month-old policy network, a toddler, should we say, and the P&AI was actually born from the messages of IGF 2022, which was held in Addis Ababa last year. So, this is a nice example that the discussions we have here at the IGF meeting actually are very important and can result in concrete new things like the P&AI, so that’s quite inspiring. So, P&AI addresses policy matters related to AI and data governance, and we have gathered here today to discuss and debate and maybe even later challenge P&AI’s very first report. And for those of you who didn’t yet have the chance to have a look at the report, you can find a link to it on this session’s information page in the agenda. And what else? Well, many, many, many people have worked super hard to make this session and especially this P&AI report come into existence, especially our excellent drafting team leads, and I know they are listening in and joining this session online from different parts of the world. Some of them have woken up at 1 a.m. or 2 a.m. to tune in, so that’s a really nice example of the P&AI spirit. But I would like to hand it back to you, Prateek, to get us started with our expert speakers.
Moderator – Prateek:
Thanks, Maikki. I can definitely attest to the fact that it was in the true spirit of multi-stakeholderism at the IGF that this working group was formed, and to the open way they’ve worked: first identifying what themes to cover through an open consultation, then streamlining the work through informational meetings, inviting speakers to talk about different topics, and then collaboratively drafting this report. So congratulations to the lead authors, the team leads, and the others who contributed. The report is available on the website of the IGF. I would encourage you to go through it. It’s a fantastic product of collaborative effort. In the first report that we have launched today, we have three themes. The first theme is about interoperability of AI governance, and this is primarily focusing on convergence and divergence among different regulatory initiatives with respect to artificial intelligence. So the group has mapped various initiatives in AI governance from the EU to China to the US to Latin America to Africa, and their intention has been to put forward countries and discourse that have not been so represented in the global discussions on AI. So they’ve centered a lot of the Global South initiatives in this report. The second theme covered by the report is that they basically tried to frame the AI life cycle for gender and race inclusion. Some of the questions that they’re asking there are: do AI systems and harmful biases reinforce racism, sexism, homophobia, transphobia in societies? These are particularly important questions that the researchers have focused on. And then finally, the third section of the report really talks about governing AI for a just twin transition. And when we say twin transition, it’s the digital and the environmental transition. This section really explores the intersection of AI, data governance, and the environment. So having talked briefly about the report, I would first invite our host country representative, Mr. Nobuo Nishigata, to say some opening remarks and also perhaps contextualize a little bit the discussions around generative AI, which kind of prompted this reflection on artificial intelligence governance through a multi-stakeholder perspective. Over to you, sir.
Nobuo Nishigata:
Good morning, good afternoon, good evening to the online participants wherever you are. Thanks for the kind introduction. My name is Nobuo Nishigata from the Japanese government. I work at the Ministry of Internal Affairs and Communications, where I am a division director. I joined this network in maybe July this year. First of all, congratulations to you all on launching the report. It’s a very young organization to do this, but frankly, I was very much impressed by the content of the report. And I understand that this work continues beyond this IGF, so we are looking forward to working with you further. Then just a couple of things I’d like to mention, maybe just on the content of the current report. I understand that this report is not a comprehensive-analysis type of report. Rather, it is more about having a fresh angle on what we have for AI and what we have to do for AI policy development, those kinds of things. Then let me compare that with my previous work, since I used to work at the OECD in Paris, and I was on the team that developed the council recommendation on artificial intelligence in 2019, which was the first intergovernmental kind of policy standard at that time. I had four years of experience out there, and then, compared to that work, for example, Prateek just introduced the three main themes of the report. The first one was interoperability in AI governance. This kind of resonates with what we had at the G7. Japan hosted the G7 meeting this year. In April we had the ministerial meeting for digital and tech ministers, and one of the major topics there was the interoperability of AI governance. For the G7 members, interoperability means that we know that in Europe, the negotiation for the agreement on the AI Act is taking place. On the other hand, Japan was actually the first country to propose the AI policy discussion in the G7, in 2016, so it has kind of a long history now, and the G7 members continue to discuss what we should do for better AI, trustworthy AI, those kinds of things. So then, getting back to the point of interoperability, on one side of this planet, countries are working hard to establish new legislation on AI, but on the other hand, for Japan, we don’t think we need legislation on AI right now. We need more innovation; we want to look at the possibility of what AI can do for us, because, for example, Japan is facing a severe problem of a declining population. We’ve already seen the decrease in the labor force in our country, so we need more machines to sustain our economy. So that was the point back in 2016, and we asked the G7 members to discuss AI further, because we already knew that there could be some uncertainty or risks brought by that technology, while looking at the many, many opportunities there. So that’s the reason we wanted to start the discussion, and it has gone to the OECD, UNESCO, and many organizations right now. So, it’s a great turnaround, actually. Then, for this report, I would say it has a much wider focus. I mean, not a focus, but a wider perspective. Many different perspectives on interoperability, and we can see some commonalities, but also differences. This is a great point, and this network discusses AI policies through the Global South lens, and this is a point that the G7 doesn’t have, actually. So, to me, it’s a very refreshing thing.
And maybe about the third topic of this report: of course, it’s on the environment, but of course, it also deals with data governance, right? While I was at the OECD, my colleague was just launching the recommendation on enhanced access to data and the sharing of data. To me, that recommendation made a lot of sense, but on the other hand, the same thing: once we got the real case study within this report, of course we saw some similarity between the report, the case study, and the council recommendation from the OECD, but on the other hand, we saw some differences. And this is again brought by the Global South lens, so this is great, and then maybe I should stop here. So then maybe just to touch on, maybe I should come back to this point, but just flagging that this year, the G7 leaders, actually, I talked about the ministerial meeting, but the declaration from the ministerial meeting escalated up to the leaders’ summit this year, and the G7 agreed to establish what we call the Hiroshima AI process. This is more focused on generative AI, and on taking stock, as well as trying to identify the challenges and risks, of course, as well as the opportunities brought by this new technology.
Moderator – Prateek:
Thank you, sir. So one key takeaway before we come to the other panelists is that this report, coming from a Global South perspective, can inform some of the G7’s upcoming work. I think that would be a fantastic outcome for the work that has been done here. I’ll come back to the Hiroshima process in a bit. I wanted to open the floor a little bit on generative AI, and one of the issues the report talks about is the potential monopolization of this technology. It raises questions around how we can make generative AI systems and development more open, transparent, and accountable. And I wanted to come to you, Owen, to hear your perspective on how generative AI systems can be developed in a more open, transparent way, and what Microsoft is doing in this domain. For about three minutes.
Owen Larter:
Thank you very much, and great to be here. So I’m Owen Larter from Microsoft. It’s a pleasure to be here, and congratulations on such a thoughtful report, which I think really does hit on three of the really important issues that we need to get right with artificial intelligence. We need to make sure that we’re governing this globally. We need to make sure that we’re doing this in a sustainable way, and we need to make sure that we’re doing it in an inclusive fashion. We’re very enthusiastic about AI at Microsoft, as you can probably imagine. So the generative AI that you talk about we think is gonna be very powerful in helping people be more productive in their day-to-day lives. So we have our Microsoft Copilots, which are helping people be more productive in using our Microsoft Office technologies. We also think that this technology is just gonna be a huge opportunity in helping people better understand and manage complex systems. I think you ask a really good question about how to make sure we’re building this technology in an inclusive fashion. And one of the things that we’re really mindful of at Microsoft is hitting these fairness goals, doing things in an inclusive fashion. So that starts by having really diverse teams at Microsoft that are building these technologies. So part of our Responsible AI program at Microsoft is our Responsible AI Standard. We have three goals in there, which are our fairness goals, F1, F2, F3. People can go and see our Responsible AI Standard, which is a public document that we’ve shared so that others can critique it and build on it. And a key part of these goals is making sure that we’re bringing together people from a diversity of backgrounds to build these systems. So people with research backgrounds, people with engineering expertise, people that have worked on products, people with legal and policy backgrounds, and people that have worked on issues like sociology, like anthropology, so we have a really diverse set of inputs into how a technology or a system is being designed. I think more broadly, beyond that, there’s a really big question on how we make sure that we’re having a sort of representative conversation around governance as well. So we’ve been doing some work at Microsoft to try and broaden the range of inputs that we get into our Responsible AI program. We have a Responsible AI Fellowship program that we’ve set up. We’ve been running this for about a year now, and this is really pulling together some of the brightest minds from across the Global South working on responsible AI issues to help inform the way that we are designing the technology, but also designing our governance program. So we have fellows from Nigeria, Sri Lanka, India, Kyrgyzstan. These have been really rich conversations to hear about how others across the world are thinking about these technologies and how to use them responsibly, and so we look forward to taking that work forward.
Moderator – Prateek:
If I can press you a little bit on this point about openness versus closed AI development, we have seen several open-source initiatives, and there are several which are not. How would you weigh in on this debate?
Owen Larter:
Yeah, it’s a great question, and I think open source is really important. I think open source is gonna be really important to helping advance an understanding of how to use this technology safely. I think it’s also gonna be really, really important in making sure that we’re distributing the benefits of this technology in a broad way. I think open source can play a really important role there. So we’re very supportive of open source. We’re a big contributor to the open-source community. We open-source a number of our models. GitHub, which people might be familiar with, is a Microsoft company that has a big open-source ethos in its spirit. I think there are some questions around the trade-off between openness and safety and security at a certain level. For really highly capable models, what sometimes people refer to as frontier models, which are sort of at the highest end of capabilities of what we have today or beyond, I think there are real questions there around whether it makes sense to open source those, or, if you are gonna open source them, whether to explore different ways of making these models available, perhaps having some kind of middle path where you don’t necessarily release the model weights, but you advance greater access to the technology. So I think there’s a tension there, but I think it’s really important that we appreciate that open source will be a really important part of the discussion going forward.
Moderator – Prateek:
Thanks, Owen, for those thoughts. And I think perhaps this is something that is food for thought also for the next work plan of this group: to think about open-source models and how that can be integrated further in the policy discussions. Sarayu, I wanted to turn to you also on this question around generative AI. The report also talks about some of the potential risks and harms to democracy, human rights, rule of law, and so on. And one that I would like to focus on is disinformation and misinformation. Can you share with us the ways in which generative AI systems can be used towards spreading disinformation, and what could be some ways in which we could address this?
Sarayu Natarajan:
Thank you very much. Thank you to the audience for being here, and the online audience. I’m assuming there are a range of competing factors for you, dinner, lunch, sleep. So thank you very much for being a part of this conversation. Congratulations also to the team that’s written the report. I’ve been through most of it and it’s a fantastic report. It has a cadence and a thoughtfulness that comes from collaborative work, and it was absolutely wonderful to read, so congratulations on that. Delving into the specific question on misinformation and disinformation, I think it’s critical to understand first how generative AI can enable the creation of misinfo and disinfo. What generative AI does, or what the capabilities of generative AI imply, is that the cost of generating content, which is the base of misinfo or disinfo, is basically zero. So if you are capable of writing the right kind of query or code, it’s quite easy to generate information. As language models are available in very many other languages, this capability is also therefore available in several languages. So what has happened through generative AI is the reduction of the cost of content production to zero. The internet and digital transmission in general have reduced the cost of transmission to zero. So when you put those two together, there’s absolutely no friction to the production and dissemination of problematic content. And by problematic content, I mean there are several typologies in the literature; between misinfo and disinfo there are differences, and the consequences of these can also be manifold. In terms of reducing, or stemming or curbing, misinformation and disinformation, I think the report, while it may not specifically focus on these areas, does talk about several approaches that could be used. One, of course, is recognizing that generative AI is embedded in specific contexts and taking a very context-specific lens to the stemming of misinfo and disinfo. This means understanding the context in which it is generated, so is it corporate, is it the state, who is responsible for the generation of information, and then also spending time to understand how this disinformation process works. But across all of this, the takeaway I would say is that broader protections need to be embedded within the law, and I say this carefully, conscious that misinformation and disinformation are a polemical topic in their own right. The rule of law as a guiding frame, within which any inquiry about how to stem this problem sits, might be the right approach to start with.
Moderator – Prateek:
Thanks, Sarayu, for that. Nobu, you’ve mentioned briefly the Hiroshima process and that it is going to focus on generative AI. We’ve heard that quite a bit over the past three, four days. Can you give us some specifics of what it is looking at? What are the kinds of principles? I don’t know if it’s advanced enough for you to share that, but can you shed some more light on what they’re going to come up with?
Nobuo Nishigata:
Maybe a couple of points so far. The process has not ended yet; the G7 delegation taskforce teams are engaged in very hard negotiation and are trying to finalize a report back to the leaders by the end of this year. Still, as an interim step, maybe I can mention that the G7 ministers agreed on an interim ministerial declaration. It was just published in early September, I think the 7th of September this year. Then, a couple of things: the discussion is having more focus on a code of conduct from the private sector, that’s the first one, so it’s more like a voluntary thing. But on the other hand, we have some discussion, particularly as we are talking about misinformation and disinformation, about watermarking. Maybe that’s very aligned with what she said about proof that something was made by AI, those kinds of things. So maybe we are in good alignment, I would say.
Moderator – Prateek:
Thanks. Those specifics help. So I wanted to move to a major part of the report, which focuses on the interoperability of AI governance, and I wanted to turn to Professor Xing Li, who has worked very closely on internet governance. Professor Li, what can we learn from internet governance to inform AI governance when it comes to interoperability?
Xing Li:
Okay, thank you very much for inviting me to this panel. I am Professor Xing Li from China, from Tsinghua University. Actually, 30 years ago, China connected to the internet, and we tried to participate and get into the different levels of management or governance. At a higher level, actually, the government needed to permit this kind of access, and at the technical layer there were a lot of things. For the internet, if we look at the evolution of internet technology, there is the IETF, the Internet Engineering Task Force. That is exactly for the technical interoperability work, and engineers work on it as individuals. And then there are some other things, for example, number assignment, which is the regional internet registries, and names, which is ICANN. And a couple of years ago, there was the IANA transition, which took it from US-centric to a global playground. So actually, now there is ChatGPT and AI, and I’m actually very excited about that, and I believe generative AI is something maybe even bigger than TCP/IP. However, take a look at this area: we don’t have an IETF, we don’t have this kind of organization. So maybe that’s something we should take a look at and work on. And another thing, actually: I feel that the internet went from the original technology through evolution and the invention of the WWW and other technologies, with people trying to understand things, and there was no blueprint for that. For generative AI, I have a feeling that regulation probably gets in too early. We need to have innovation space; at least for academics and the technical groups, we need some innovation space. Otherwise, it’s very difficult to move forward. Inside a country it’s probably okay to create an innovation place, but actually, I really want the global village, where academics can work together globally, to make things more exciting. Thank you very much.
Moderator – Prateek:
Thank you, sir. Touching briefly upon some of the ways of governing the internet that you mentioned, this report actually also talks about three things under the interoperability dimensions: interoperability at the level of substantive tools, guidelines, norms and so on; then interoperability at the level of mechanisms for multi-stakeholder engagement; and finally, agreed ways of communication, which really means agreeing on definitions and concepts, semantic interoperability. Jose, I wanted to turn to you. When you read the report, what were some of the key aspects around, say, the recommendations on interoperability that stood out for you, and what do you think about them?
Jose:
Hello. Well, thank you very much, Prateek. It's a great pleasure to be here. Looking at the report, one thing I identified is that, considering it is a report made by Global South representatives for the Global South, we need to advance our understanding of the movements happening in our own regions and, within that, of exactly which policies and which narratives we are pushing forward. When we look into regulation, I think it is a great thing that we are not focusing only on what is going on in the EU and the other countries, regions, and blocs leading this debate, but also on what is happening within the Global South. The main thing we need to advance is understanding which points are missing from the discussion. The report touches on some of these issues, but especially when we look at our region, we need to understand our specific challenges.
I would like to mention, for instance, issues related to labor. We are not yet addressing the impacts that the development of these systems, and the tech industry as a whole, is having on labor. I am not talking only about the future of work, about what people working in offices now will need over the next few years so they are not left behind by the advancement of these technologies, but also about what is happening to so-called gig workers. In Brazil, at least, there is an intricate link between what is happening to people working on delivery platforms and issues related to race, and this also relates to survival. In Brazil, deaths of motorcycle drivers have increased by, if I am not mistaken, around 80% in the last 10 years, and one reason many scholars have been debating is that new platforms demand different delivery times, which pressure these workers in an extremely different way. This is an issue we need to tackle, especially in our region.
We also need to go deeper in the debates on sustainability. The report touches on this theme, and it advances a lot on issues like tackling techno-solutionism and techno-optimism, because this discussion goes beyond, not to say merely concerns, the strict issues of energy consumption, greenhouse gas emissions, and so on. It is about politics. We have seen, for instance, some tech company CEOs talking about the events in Bolivia after President Evo Morales was removed, and the supposed interests in the minerals Bolivia has, especially lithium. So we also need to advance on this.
Maybe one last point, so I do not take more time: the debate on biometric surveillance, that is, the use of biometric systems, especially for surveillance purposes within countries and at their borders, is another issue we need to take seriously. Talking about Brazil once again, the structural racism that pervades the criminal system is simply being automated and accelerated by the development of these technologies.
A tech fix on these systems won't solve it. We need to start thinking seriously about whether we are going to establish moratoriums on these systems, or even ban them, which is an agenda being pushed very strongly in Brazil. I think the next report could discuss how civil society there is pushing for a ban on these systems. And yes, I would say that is one of the main issues we are currently debating there. Thank you.
Moderator – Prateek:
Thanks, Jose. I think you made three points: there is the data, much of which comes from the global south; there are the workers working on that data; and there are the natural resources. These three elements also come out quite strongly in the report and its case studies, and they are important for the global discourse. Now I would like to turn to you, Mr. Jean-Francois. You have read the report and seen some of the key challenges mentioned. One of the things the report talks about is capacity building. How can we strengthen capacities in the global south, first for engaging with multi-stakeholder governance processes, but also for the development and use of AI? Over to you.
Jean Francois ODJEBA BONBHEL:
Okay, thank you so much. My name is Jean-Francois Odjeba Bonbhel. I come from Congo-Brazzaville, and I am an AI and emerging technologies expert in regulation, working with RPC, the regulatory authority in Congo. We expect many things from AI and generative AI, but we also fear what is inside the box: AI seems like a black box with many things inside. So we are approaching it through three points. The first is benefits versus risks; the second is accountability, with a controller; and the last is education. Education is a big part of our strategy. We created a school specialized in AI in Congo, from elementary to graduate level, to educate our kids and the population in general, and to make sure that everyone can access these technologies, understand what is coming, and benefit from innovation that can change lives and bring development, so that no one is kept outside of that technology. That's all.
Moderator – Prateek:
Thank you so much. I will now turn to the second section of the report, which focuses on gender and race. Before that, let me tell the participants here that I will open the floor for questions in about five minutes, so keep something in mind if you wish. The report cites the UN Human Rights Council, which said that technology is a product of society, its values, its priorities, and even its inequities, including those related to racism and intolerance. Sarayu, I wanted to turn to you: do you have examples of gender or racial biases in AI systems that have impacted individuals or communities? And at the same time, can you give another example, which the report also discusses, where AI systems have been used to combat the gender bias that we have in society? Over to you.
Sarayu Natarajan:
Thank you for that question. It is a broad and difficult one, and I will do as much with it as I can. Before we jump into gender bias in AI systems: alongside gender, other forms of bias, such as race and language, also creep into AI systems, and it is hard to talk about them in aggregate because each has its own specific politics. Having said that, there are some commonalities across these intersections.
(Moderator: If you want to pick one, go with one and go into the specifics.)
Sure, sure, thank you. Before delving into the question of bias itself, I think it is important to tackle very briefly the forms of injustice that generative AI systems can involve. One, of course, is data injustice, which emerges in the context of gender, race, language, and so on. There is also the injustice of labor, which Jose referred to. It is critical not to imagine generative AI without the labor of the annotators who, in a real sense, make it: they label data, annotate data, and categorize data in ways that are accessible to researchers, scholars, and builders of AI. So rather than delving into specific examples of language or gender bias that AI systems perpetuate, let me talk about generative AI and its labor.
A lot of this labor is done in the global south. Millions of workers, through various platforms, sometimes through large contracted organizations, work on labeling datasets, and this applies to generative AI and several other forms of AI. Now, when you label, say, a car, a bus, a vehicle, language, gender, or race, the categories within which you label are often created in the west: the company commissioning the AI is the one asking you to label, say, a llama or a cow, objects that are often unfamiliar to the people doing the labeling. So the origin of bias is, of course, the larger politics of how AI is made, but it is also mediated by very, very specific practices, around language, even around English as an input into large language models. To talk about bias, then, it is important to talk about labor, labor supply chains, the way AI itself is made, and the way labeling and labeling categories are created.
On how AI might mitigate bias, there are several examples, but one specific and rather concerted effort that has developed over time, in the Indian context, is the effort to build large language models in non-mainstream languages. Several of these efforts, fortunately or unfortunately, have been spearheaded by small organizations working in specific communities, and they may make some of the benefits of generative AI accessible to wider communities in the languages they speak. I will pause here and hand back to you.
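Sarayu's point about labeling categories being fixed upstream can be illustrated with a minimal sketch. The task record, category list, and function names below are hypothetical, invented purely to show the shape of the pipeline she describes: annotators choose among predefined labels and cannot introduce categories from their own context.

```python
from dataclasses import dataclass

# Minimal sketch of the annotation pipeline: the label vocabulary is fixed
# upstream (often by the commissioning company), and the annotator can only
# choose among those predefined categories, even when an item shows
# something outside their everyday context.
CATEGORIES = ["car", "bus", "llama", "cow"]  # defined by the client, not the annotator

@dataclass
class AnnotationTask:
    item_id: str
    payload: str            # e.g. an image URL or a text snippet
    allowed_labels: list    # the annotator cannot add new categories

def annotate(task: AnnotationTask, chosen_label: str) -> dict:
    if chosen_label not in task.allowed_labels:
        raise ValueError(f"'{chosen_label}' is not in the predefined schema")
    return {"item_id": task.item_id, "label": chosen_label}

task = AnnotationTask("img-001", "https://example.org/photo.jpg", CATEGORIES)
print(annotate(task, "llama"))  # accepted: "llama" is in the schema
```

The design choice worth noticing is that the schema travels with the task: the worker's local knowledge has no channel back into the category system, which is precisely where the bias Sarayu describes can enter.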
Moderator – Prateek:
Thanks, and I will add to that: in Africa, for instance, there is a research group working on low-resource African languages called the Masakhane community. If anyone is interested in working with them, joining them, or supporting them, please do check them out; they are doing fantastic work creating datasets in African languages as well. I would also like to turn to the folks online. If there are questions, our online moderator Shamira can pick one or two.
Shamira Ahmed:
Yes, sure. I will go to the first question we got. Thank you, Prateek. Can you hear me?
Moderator – Prateek:
Yes, very well.
Shamira Ahmed:
So the first question we got, and I will go through it quickly, is from Prince Andrew Livingston: what international collaborations and agreements are needed to govern AI on a global scale?
Moderator – Prateek:
OK, thank you. Can we collect a few questions?
Shamira Ahmed:
Yeah. And for the next question, I am not sure whether virtual attendees can raise their hands and pose their questions to you directly as well; let's see if there is a question in the chat, and then we will come back to people who may want to take the floor later. Okay. The next questions were from Ayalo Shebeshi, who had two. The first: how can we approach both the negative and positive impacts of AI, especially in global South and developing countries, where it is replacing human jobs? And the second: how can we manage standards and international regulation of AI initiated by international bodies, such as the UN and other agencies, and make sure that there is full agreement by all countries and nations?
Moderator – Prateek:
Thanks, Shamira. So we have three questions, two of which are quite similar. But I'd ask you to hold on a bit, because I want to collect at least two questions from the room as well, and then we will address everything together. Would anyone in the room like to take the floor? Yes, please, our colleague from the UN University.
Audience:
Good morning. Jingbo from UN University. This setting is more intimate, so we can really communicate. My question is related to capacity building. We know that AI has less to do with engineering science and more to do with empirical science, which means AI is not like an engineering product where you design it and know what is going to happen. Even the researchers and designers do not necessarily know what will happen; along the way they have to test and experiment to find out the risks, the potential benefits, and so on. So my question concerns the difficulty of capacity building when things keep changing. Even the designers do not know where this is going, and meanwhile there is misinformation and there are many different sources of information. How do we build capacity? How do we teach, for example, school children, or even our peers, or my grandmother? How do we inform them of what is going on? Thank you.
Moderator – Prateek:
Thank you so much. We have our colleague from EY. Sir.
Audience:
Hi, Ansgar Kuna from EY. With AI, as with a number of the digital technologies now arising, we are seeing a blurring of the lines between the more political space of regulatory development and the technical space of standards development. Standards are increasingly becoming an instrument on the implementation side of regulation. So my question is about capacity building that enables a wider community of stakeholders to engage on that technical side, which has become an important part of the bigger policy instruments.
Moderator – Prateek:
Thank you so much. So we have five questions. I will not go to each of you for every answer, because that would take ages. Who wants to take the questions around governance and how to make AI governance work globally? I see Owen wants that set. Then there is a set around capacity building, both at the technical level and in schools and so on. So first we go to Owen. Over to you.
Owen Larter:
Sounds good, thank you. I think this is a really important question: how do we build a coherent global governance framework for AI? It is important to realize that there is a difference between having a globally coherent framework and having identical regulation in every single country. I don't think we want the latter. What we want is a set of principles, probably a code of conduct, that sets a high bar globally but then allows individual countries to take those standards and implement them in a way that makes sense for them. On this global governance conversation, we have made an enormous amount of progress over the last year. We are actually coming up on quite a significant milestone: on the 30th of November 2022 you had the launch of ChatGPT, which really did change the conversation among the public and among lawmakers around the use of these technologies and their impacts on society. The progress made has been significant. You have seen it this week in the types of conversations we are having here and, very importantly, in the report that has been put together. The G7 code of conduct through the Hiroshima process, under the Japanese leadership, is very important in advancing this global conversation around how to develop and use AI.
I do think the building blocks are now in place for a longer-term conversation about what global governance should look like. As we have that conversation, we should take a step back and think about two things: first, what do we ultimately want a global governance framework to do, and second, what can we learn from existing global governance regimes? There are probably at least three things we want this framework of the future to do. The first is standards setting, which will be really important in advancing a coherent global regime. There are great lessons to draw from organizations like ICAO, the International Civil Aviation Organization, part of the UN family, where you have a broad, globally representative conversation, with pretty much every country in the world participating, to set global safety and security standards that are then implemented by domestic governments. The second, also addressed in the report, is advancing understanding of and consensus around risks. Look at the Intergovernmental Panel on Climate Change, for example, again part of the UN family, which has done a really good job of advancing an evidence-based understanding of climate risks; we should look to do something similar for AI. The final piece, which bleeds over into the capacity-building conversation, is building out infrastructure. This technology is moving so quickly that it is easy to forget it is still relatively new: the transformer architecture that underpins the large language models causing so much excitement and enthusiasm at the moment is only six years old, developed in 2017.
There is an enormous number of open research questions that we need to keep investing in and tackling, and we need to provide academics and researchers with the infrastructure to do that. There are interesting proposals in the US, for example, for something called the National AI Research Resource: the idea of publicly available compute, data, and models that academics could use to study these technologies and advance our understanding of them. So, investment in the technical infrastructure. One piece I would really emphasize there is the importance of developing evaluations for these technologies; it is a very difficult space with a lot of open gaps at the moment, and we need to make progress there. The final point I will make, and then I will stop talking, is about the social infrastructure as well. We need to find sustained ways of having global conversations, building on the great progress made this year in conversations like this one, to have a truly globally representative discussion around these issues, one that allows us, quite frankly, to monitor how the technology develops, keep track of things as it progresses, and adjust and stay nimble in how we approach these things as a global community.
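Owen's call to invest in evaluations can be made concrete with a minimal sketch of an evaluation harness. The toy model, the two test prompts, and the exact-match scoring rule are assumptions chosen for illustration; real evaluations of safety, bias, or robustness are far richer, but they share this basic shape of a fixed test set run against a system under test.

```python
# Minimal sketch of an evaluation harness: run a fixed test set through a
# system under test and score the outputs. Real evaluations (safety, bias,
# robustness) are far richer; this only shows the basic shape.

def toy_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to a real model API.
    return "Paris" if "capital of France" in prompt else "I don't know"

TEST_CASES = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "What is the capital of Australia?", "expected": "Canberra"},
]

def evaluate(model, cases):
    passed = sum(model(c["prompt"]) == c["expected"] for c in cases)
    return passed / len(cases)

print(f"accuracy: {evaluate(toy_model, TEST_CASES):.0%}")  # 50% on this toy set
```

The hard part Owen points to is not this scaffolding but agreeing on what the test set and scoring rule should be for properties like fairness or robustness, which is exactly where the open gaps remain.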
Moderator – Prateek:
Thanks, Owen. I saw that Jose wanted to comment on the governance part as well, and then, Nobu, I will turn quickly to you on how you see the global governance landscape evolving.
Jose:
Thank you, Prateek. When we discuss the global governance of AI, I have to admit that, at this moment, I am quite skeptical that we can advance something in this regard in a way that does not amount to a race to the bottom on the parameters we use to govern these technologies and their impacts. I say this because there is an interplay of many narratives going on; I am talking about the geopolitics of these technologies, about issues of competition, and about the supposed AI race, which has been framed as something quite similar to what we had during the Cold War. I think this is a huge barrier to overcome if we want a reasonable global AI governance regulation, or however we frame it. And especially considering the countries of our region, the majority world, the global south, there are forums in which we need to push this agenda forward so that our interests are in play. I am talking about the BRICS, and to some degree the G20, where many countries from Latin America, the African continent, South Asia, and so on are present. That means pressure on the global north, because they are the ones who have developed these technologies, who are pushing them forward, and whose companies in particular dictate the agenda of what counts as a technology worth our attention.
If I were to pinpoint two points, I will refer once again to labor and to the extraction of natural resources, as it is commonly called. Maybe I will tell a story to illustrate this. At the beginning of the year, there was a genocide in the Brazilian Amazon against a specific ethnic group, the Yanomami. These people were being killed and had their territories invaded by illegal gold miners. After a while, our federal police identified that one of the companies dealing with these illegal gold miners was selling gold to companies like Amazon, Apple, Google, and Microsoft. So this is one thing we need to deal with: the extraction of these resources and the impacts it has on particular groups. Of course, when we talk about the global governance of technology, we are also saying that some groups seem to matter more than others, and the Yanomami, in this case, seem to be among those worried about least. On this point, we need to start thinking seriously about how to deal with the materials we are using and the lives we are impacting. And here, once again, the point about click workers and the others from the global south who are helping develop these technologies is a necessary discussion for global governance, whether through workers' control over the algorithms and the decisions being made by these companies, through better working conditions for them, or through due responsibility of the companies at the higher end of the chain for what is going on in these situations.
Moderator – Prateek:
Thanks, Jose. So, in a way, the point Owen was also making, that accountability across the value chain, evaluations, and so on should be part of global governance frameworks and the evidence-based processes being discussed. Nobu, coming from a government, where do you see the global governance of AI going, and what is your perspective on it?
Nobuo Nishigata:
On global governance: we are, of course, a single country's government, and we are watching what takes place in the global sphere, here, at the OECD, the UN, UNESCO, the ITU, and so on. As a government person, I cannot write code, honestly, but I can write legislation in Japanese; that is my job, right? So we want some room of our own. For example, once we have a treaty at the top, of course we respect it, we sign it, and then we have to act in alignment with it. For this case, though, it may be too early. I recognize that some people, particularly at the Council of Europe, are working hard toward a framework convention, and I have been in some of those negotiations. But if we do get a treaty on AI, we do not want it to be very strict, because this is a moving target. When it comes to human rights, or perhaps the laws of war, a strict treaty can make sense, but for AI we do not want a very strict upper hand that leaves us no room to act on our own. That is not only for Japan; I would say it is true for every government and every government worker out there.
From the point of view of the OECD, and I am not at the OECD anymore, I graduated from there, the principles are very simple: five value-based principles. Some people mentioned accountability; yes, that is one, along with explainability, and safety, security, and robustness. Safety comes first: advanced AI systems deal with humans, so we need safety, right? The principles also touch on privacy, fairness, and human rights. Then there are five more principles, which are more like guidance to governments: governments have to work on the ecosystem, on skills and capacity building, on regulation if needed, and on creating testbeds to facilitate this work everywhere in the world. And in the end, we want more collaboration between countries, so the last principle is about international collaboration, in policies, techniques, standards, and so on. There are many different international organizations, each with a different membership and a different mandate, so it is natural to have various types of recommendations, principles, and guidelines. But the bottom line is not very different, I would say. The OECD principles were only the first; I do not see too many differences among them. Each organization has its own version, and it has to, because each organization is a different body.
Moderator – Prateek:
I can say that the international organizations working on this are mostly coordinated. From UNESCO, we work with the OECD, the Council of Europe, the African Union, and the European Commission, at least to exchange where the work is going, because at the end of the day…
Nobuo Nishigata:
I mean, we can share the story: Prateek and I used to have lunch by the river Seine in Paris, right?
Moderator – Prateek:
Exactly. So thanks for that. I want to turn now to the second set of questions, around capacity building. I will turn first to Maikki, then to Sarayu, Professor Xing Li, and Jean-Francois. What do we need in order to strengthen capacities across different levels and for different purposes, from the development of technical standards, to the development of governance, to simply using AI in our daily lives, to detecting disinformation? A wide variety of capacities were mentioned, so maybe each of you can pick some. Over to you, Maikki.
Maikki Sipinen:
So the audience questions were about capacity building, as well as how we might enable a wider community to take part in these AI dialogues and debates, and the way I see it, they are parts of one and the same thing. Of course we need to improve our efforts at introducing AI and data governance topics in schools and universities, and at training citizens and the labor force in at least the basics of AI. There are many amazing initiatives to be found all over the world. For example, in Finland, where I am from, I think the Finnish AI strategy managed to train more than 2% of the Finnish population in the basics of AI in under one year. That is a good benchmark of what is possible where there is a will. Something else I would like to highlight here is the capacity building of civil servants and policymakers, an area that really deserves and requires even more space in the AI governance discussion. I liked what Nobu said a moment ago, "I can't write code, but I can write the regulation for Japan"; this is exactly what we should all understand and appreciate: we need different kinds of AI expertise to come in and work together so that we can make global AI governance happen in a way that is inclusive and fair for us all.
Maybe you already guessed that capacity building is my personal favorite topic within AI. Earlier this spring we were brainstorming with the P&AI community about which topics to select for the report, because obviously not every AI and data governance topic can be covered in one report. I was secretly hoping that someone would suggest capacity building, and I was a bit bummed when that did not happen. But over the past months I realized that it is naturally interwoven into all of our report topics: all the groups, in the end, navigated toward capacity building and included some recommendations or sentences about it. It is really at the core of all our topics, and I trust that in the coming years the global dialogues will have more focus on capacity building as well.
Moderator – Prateek:
Thanks, Maikki. Sarayu, would you like to take that?
Sarayu Natarajan:
Thank you; you're absolutely right, there are multiple categories of capacity building and multiple groups that need to engage with different types of AI technology in different contexts. There is the ability of the population, citizens at large, to engage with AI and get the best out of it, or at least not be harmed by it. Then there is the question of how communities from different domains, the legal, policy, and governance community on the one hand and the technical community on the other, talk to each other, and maybe I will focus briefly on that. My starting point on capacity building, adult education, whatever you want to call it, is the idea of mutuality: both of these disciplines, both of these empirical starting points, need to be able to talk to each other in a meaningful way. Just as I have benefited from learning about embeddings in large language models, I think technical communities would benefit from understanding, for example, the politics of category creation, the empirical relevance of gender to AI, non-negotiable human rights, and the role of the state vis-a-vis citizens' rights. Having a sense of these is a mutual expectation and a mutual process, and having various fora that enable this in a non-judgmental way, with recognition of the various empirical starting points, is critical.
I also thought I heard a question on the gains and harms of AI, specifically on job loss. (Moderator: Please feel free to take that as well.) Right, thank you. I do think it is a genuine challenge, particularly from some forms of generative AI. As a society we are still starting to gather the evidence and to understand how different applications of generative AI might have different consequences, particularly for jobs. The legal community and the tech community, the coding community in particular, are likely to be affected by the easy availability of generative AI capabilities; that much is understood, but the degree and extent remain to be better mapped. There is an ILO report which says that the impacts of job loss are more likely to be felt in the global north, and that the global south will actually gain from very specific types of jobs that generative AI will generate. We will have to be careful and observe this a little longer. It should not be that the capabilities of generative AI end up further ossifying the barriers between the types of jobs that exist in different parts of the world, with only some forms of click work remaining in one place while the skilled jobs, particularly those relating to category creation, to go back to that point, remain with existing powers. So we have to keep watching the job loss question, and, of course, keep insisting on humane, just conditions for workers in different parts of the world. I will pause there.
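For readers unfamiliar with the embeddings Sarayu mentions: an embedding maps text to a vector so that similarity becomes measurable. The three-dimensional vectors below are made up purely for illustration; real LLM embeddings are learned, not hand-written, and have hundreds or thousands of dimensions.

```python
import math

# Toy illustration of embeddings: words mapped to vectors, with cosine
# similarity as a closeness measure. The numbers are invented for this
# sketch; real embeddings are learned from data.
EMBEDDINGS = {
    "cow":   [0.9, 0.1, 0.0],
    "llama": [0.8, 0.2, 0.1],
    "bus":   [0.0, 0.9, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(EMBEDDINGS["cow"], EMBEDDINGS["llama"]))  # high: related animals
print(cosine(EMBEDDINGS["cow"], EMBEDDINGS["bus"]))    # low: unrelated concepts
```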
Moderator – Prateek:
Thanks. Professor Xing Li.
Xing Li:
Oh, okay. Capacity building is also my favorite topic. I believe generative AI creates opportunities, but also challenges, for the global south. People usually refer to three factors behind generative AI: algorithms, computing power, and data. I would like to add another, more important one: education, the human resource. Traditional education sometimes needs to change, and in this AI age four things are very important. The first is critical thinking: in the old days, students just followed what the teacher said, but when the answer may come from an AI, you need the ability to think critically. Second, everything should be based on fact. Third, logical thinking. And finally, also very important, global collaboration. Youngsters need ability in these four areas. So I would really like to see a global AI-related education system. Just as, hundreds of years ago, the modern university was created, we may need new educational systems for the AI age. Professor Fei-Fei Li of Stanford University has said we need a Newton and an Einstein of the AI age. Thank you.
Moderator – Prateek:
Thank you, sir. A plug for UNESCO colleagues working on education and AI: they have launched guidelines on generative AI and education, so if you are interested, feel free to check those out as well. I will move to Jean-Francois.
Jean Francois ODJEBA BONBHEL:
Okay, thank you. I will switch to French; I am more proficient in French and want to make sure I can express my thoughts correctly. In terms of capability and skill sets, I think this is overall and global: it is something we can implement, putting in place different training sessions so that everybody is on the same footing. There are various perspectives we can bring, as a teacher, as an educator, and also as a software developer; we have devised different processes for AI. I work with computers, and I work specifically on governance. I understand the world I live and work in, and I am helping to devise the world my children are going to live in. I work with researchers and developers, and I also take a step back and ask myself: is this really the world I want to think about, to create and design for my children to live in? And how can I be certain of that?
On capacity building, we have implemented a program specifically designed for children who work with technology, from age 6 to age 17. We thought about numerous questions and ideas. We asked ourselves: what do we want? Do we want all these youngsters to become experts in technology? Or are we simply going to prepare them, to endow them with the necessary skill sets for the life and the world they are going to live in? The solutions we provide for them should be multi-faceted: what environment will they be living in? Who guides them, the teacher or the parent? What options will we offer? In the old model there were sanctions and punishments: if you did not learn this or that, you were punished. But our children, and we ourselves, live in a world with multiple solutions. We are developing their cognitive skill sets so that they have options, so that they have different ways of resolving problems. That is a facet of the skill building we are talking about here today, and it is part and parcel of it. We need to equip our children with AI skill sets aligned with numerous and multiple solutions. That is the approach we have been developing and devising together, and I feel it is the appropriate approach within the school environment.
Moderator – Prateek:
For the final segment, and I see we have only 10 minutes left, we have Shamira, our colleague online, who worked on the environmental aspects of the report. Shamira, would you like to share some of the key insights from the discussions around environment and data governance, and some of the case studies presented in the report?
Shamira Ahmed:
Sure. AI, the environment, and data governance is quite a broad area, but because the focus of the report was on data governance, we concentrated on the data governance aspects at the nexus of AI and the environment. In summary, our recommendations collectively offer a multi-stakeholder perspective from the global south and, as the other speakers mentioned, aim to promote interoperable AI governance innovations that harness the potential of AI. We focus on the multi-dimensional aspects of data for sustainable digital development. Sustainable digital development is essentially a way of leveraging digital technologies that considers environmental, economic, and societal aspects together, in one comprehensive Venn diagram, let's say. We also discussed addressing historical injustices: we advocated a decolonially informed approach to the geopolitical power dynamics that some speakers have mentioned, for example in the materiality of AI, when we consider the value chains and the materials that go into AI; in whether the multi-stakeholder process is really representative; and in whether standards are made in consideration of innovation ecosystems and global south institutional mechanisms and situations. We also talked about inclusion and minimizing environmental harms, as many of the other speakers highlighted.
In summary, we emphasized that a just, green digital transition is vital for achieving a sustainable and equitable future, one that leverages AI to drive responsible practices for the environment, promotes economic growth and social inclusion, and provides a pathway toward a more resilient and sustainable world that actually meets the contextual realities of the global south. Most of the panelists have captured the report quite succinctly. As Maikki mentioned, we addressed capacity building, geopolitics, the environmental aspects of AI, data governance, and interoperability and the definition of key terms. So I think it is a comprehensive report; I learned a lot while writing it, and it was a truly bottom-up, multi-stakeholder process. Thank you.
Moderator – Prateek:
Thank you, Shamira, for that summary and for the important work you have been doing on the environment. I would now like to open the floor, not for questions, but for any recommendations from the audience on other issues this group could explore. This is also an invitation to join this multi-stakeholder policy network. Are there any folks who would like to share recommendations or thoughts? Yes, sir.
Audience:
Oh, hi. My name is Yeo Lee, from the World Digital Technology Academy. We do research and produce training materials for AI education. And Prateek, you mentioned UNESCO will do more for AI training and education. Right now, for example, we provide our published books and textbooks to a lot of universities, particularly in China and in developing countries in the global south. So in the future, will UNESCO have a process for us to contribute, or will you do the training yourselves?
Moderator – Prateek:
I will definitely come back to you on that bilaterally, because this session is not about UNESCO but about the Policy Network on Artificial Intelligence; but I am happy to share what we are doing and to link you with colleagues doing the education work, for sure. Thank you. Anyone else who would like to share any recommendations? And perhaps there is someone online as well, Shamira?
Shamira Ahmed:
Yes, there is someone online.
Moderator – Prateek:
Okay. So if there’s someone who would like to take the floor online, I believe.
Shamira Ahmed:
Yes, I think the host should grant speaking rights, and the person who raised their hand should turn their video on.
Moderator – Prateek:
Okay, while we wait, we can go to the floor here.
Audience:
Hi, Ansgar Kuna from EY again. I think an important aspect will be how we do the assessments to test whether these systems achieve what we want them to achieve. The question often raised is how we assess the non-strictly-technical aspects of performance, such as how a system actually operates in contexts where it was not built, and whether it has unintended consequences in those contexts, for instance on the workers in those environments or on the way people are categorized by these systems. So: thinking through the assessment and assurance process for how these systems operate, especially on their non-technical properties.
Moderator – Prateek:
Thank you so much. I will now turn back to the panelists for three future-oriented keywords on what this group should look at. You can see the screens; we have four minutes left, so you have only three keywords each. Maybe we start with Nobu on the other side.
Nobuo Nishigata:
Three keywords, to continue the picture for this forum: the global south, education, and then maybe harmonization.
Moderator – Prateek:
Thank you. Maikki.
Maikki Sipinen:
I choose the keywords inclusive, future, and P&AI.
Moderator – Prateek:
Thanks. Jose.
Jose:
I would say: let's keep up with the initiative, let's include other topics, and let's go further on the ones we have already debated. This kind of initiative in this forum is fundamental, and thanks to the IGF for that; it was this opportunity that made it all possible. Thank you.
Moderator – Prateek:
Thank you. Sarayu.
Sarayu Natarajan:
Global (not global south), workers, and rights.
Moderator – Prateek:
Thanks. Professor Xing Li.
Xing Li:
Critical thinking, and global collaboration.
Moderator – Prateek:
Owen.
Owen Larter:
Three thoughts, I guess. One, get concrete on capacity building and what we can do to drive things forward. Two, invest in evaluations, invest in evaluations, invest in evaluations; it's a major gap. And across all of this, continue to bring together technical audiences with non-technical people who understand the socio-technical challenges of these systems as well.
Moderator – Prateek:
Thanks. Jean-Francois.
Jean Francois ODJEBA BONBHEL:
I would say innovation, education, and accountability.
Moderator – Prateek:
Thank you so much to our panelists for your insightful thoughts, and to the participants both online and here in person. We invite you to look at the report which, as mentioned before, was developed in a multi-stakeholder manner; it is available on the IGF website under the Policy Network on Artificial Intelligence. This work will continue, and you are invited to join and expand this community going forward. Thank you so much and have a good day.
Speakers
Audience
Speech speed
148 words per minute
Speech length
535 words
Speech time
217 secs
Arguments
Difficulty in capacity building due to rapidly changing AI technology
Supporting facts:
- AI is more of an empirical science than an engineering product.
- Researchers and designers often don’t know what’s gonna happen due to continued testing and experimenting
- Misinformation and various sources of information add to the difficulty
Topics: AI, Capacity Building, Education
The lines between political space of regulatory development and technical space of standards development are blurring in AI and other digital technologies
Topics: AI, Digital Technologies, Regulations, Standards
Standards are increasingly becoming a tool in the execution of regulations
Topics: Regulations, Standards
There is a need for capacity building to allow a broader stakeholder community to engage in the technical aspect that has become an integral part of major policy tools
Topics: Capacity Building, Policy Tools, Stakeholder Engagement
The audience member enquires about the process through UNESCO to contribute to AI training and education.
Supporting facts:
- The member represents World Digital Technology Academy, which provides research and material for AI education.
- They distribute their published books and textbooks to universities, particularly in developing countries.
Topics: AI education, UNESCO, contribution process
The assessment of AI systems needs to consider their non-technical properties such as their impact on workers and categorization of people.
Supporting facts:
- The question regarding the assessment of non-strictly technical aspects around the performance of AI systems was raised.
- Some of these non-technical aspects imply how AI systems operate in unknown contexts, the potential unintended consequences, and their effects on workers and categorization of people
Topics: AI assessment, AI implications on society, unintended consequences of AI
Report
The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation.
Misinformation and the abundance of information sources further exacerbate the challenges in capacity building. The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape.
This education should be accessible to all, regardless of their age or background. Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI.
This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence. Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations.
Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems. The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools.
The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology. The process of contributing to AI training and education through UNESCO was discussed, prompted by an audience member from the World Digital Technology Academy, which distributes AI research materials and textbooks to universities, particularly in developing countries.
This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all. The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance.
This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations. Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios.
This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use. In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology.
The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance.
The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.
Jean Francois ODJEBA BONBHEL
Speech speed
149 words per minute
Speech length
783 words
Speech time
314 secs
Arguments
AI should balance benefits versus risks
Supporting facts:
- Jean Francois Bomben is an AI and emerging technologies expert in Congo
Topics: AI, Risk assessment, Technology Impact
Accountability should be ensured in AI control
Topics: AI, Technology control, Accountability
Education is crucial for understanding and access to innovations in AI
Supporting facts:
- A specialized school in AI has been created for all levels of education in Congo
Topics: Education, AI, Technology Access
Jean Francois Odjeba Bonbhel emphasizes the importance of multi-faceted AI education for children, preparing them for the technologically advanced world they would be living in.
Supporting facts:
- A program designed specifically for children ages 6 to 17 is implemented to develop their cognitive skills with technology and AI.
- The focus is not just to make children experts in technology but also to equip them with necessary understanding and skills to navigate the fast-changing world efficiently.
Topics: AI education, skill building, capacity building, technology
Report
The analysis provides different perspectives on the development and implementation of artificial intelligence (AI). One viewpoint emphasizes the need to balance the benefits and risks of AI. It argues for the importance of considering and mitigating potential risks while maximizing the advantages offered by AI.
Another perspective highlights the significance of accountability in AI control. It stresses the need to have mechanisms in place that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior. Education is also emphasized as a key aspect of AI development and understanding.
The establishment of a specialized AI school in Congo at all educational levels is cited as evidence of the importance placed on educating individuals about AI. This educational focus aims to provide people with a deeper understanding of AI and equip them with the necessary skills to navigate the rapidly evolving technological landscape.
The analysis suggests that AI development should be approached with careful consideration of risks and benefits, control mechanisms, and education. By adopting a comprehensive approach that addresses these elements, AI can be developed and implemented responsibly and sustainably. A notable observation from the analysis is the emphasis on AI education for children.
A program specifically designed for children aged 6 to 17 is implemented to develop their cognitive skills with technology and AI. The program’s focus extends beyond making children technology experts; it aims to equip them with the necessary understanding and skills to thrive in a future dominated by technology.
Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future generations. The proposed solution involves providing options, solutions, and education on technology to empower young people and prepare them for the technologically advanced world they will inhabit.
In conclusion, the analysis underscores the importance of striking a balance between the benefits and risks of AI, ensuring accountability in AI control, and promoting education for a better understanding and access to AI innovations. By considering these facets, the responsible and empowering development and implementation of AI can be achieved to navigate the evolving technological landscape effectively.
Jose
Speech speed
172 words per minute
Speech length
1487 words
Speech time
518 secs
Arguments
More understanding is needed of the movements and policies within the Global South
Supporting facts:
- Report made by Global South representatives for the Global South
Topics: Internet Governance, Global South, Regulations
Need to deepen debates around sustainability and the impact of tech industry on it
Supporting facts:
- Tech leaders have spoken of the interest in Bolivia’s minerals, particularly lithium, following political instability
Topics: Sustainability, Tech Industry, Tech-Optimism
There is a push for banning certain systems in Brazil
Supporting facts:
- Civil society in Brazil is pushing for the banning of certain systems
Topics: Regulations, Tech Ban
The global governance of AI could translate into a race to the bottom due to geopolitical competition and narratives from various countries.
Topics: Global Governance, AI, Geopolitics
Countries from the global south need to push forward their interests in governing AI technologies.
Supporting facts:
- Jose suggests forums like BRICS, G20
Topics: Global South, AI, Global Governance
The extraction of resources for technology development has significant impacts on indigenous groups.
Supporting facts:
- The Yanomami, an indigenous group in the Brazilian Amazon, suffered due to the activities of illegal gold miners.
Topics: Resource Extraction, Indigenous Groups, Technology
Report
The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.
A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry. In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, which is attributed to the pressure exerted by new platforms demanding different delivery times.
This highlights the need to address the adverse effects of tech advancements on workers’ well-being. The impact of the tech industry on sustainability is another topic of debate in the analysis. There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability.
This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction. The use of biometric systems for surveillance also comes under scrutiny: in Brazil, the analysis finds, these technologies are automating and accelerating the structural racism of the criminal justice system.
This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance. There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology.
This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts. The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives.
This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies. Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies.
Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes. The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes.
This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology. Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups.
The example of the Yanomami people in the Brazilian Amazon suffering from the activities of illegal gold miners underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations. Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies.
This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry. In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders.
It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.
Maikki Sipinen
Speech speed
148 words per minute
Speech length
772 words
Speech time
313 secs
Arguments
The Policy Network on Artificial Intelligence (P&AI) is new, only about six months old
Supporting facts:
- P&AI was born from the messages of IGF 2022, held in Addis Ababa last year
- P&AI addresses policy matters related to AI and data governance
Topics: Artificial Intelligence, Policy Making
Many people, including the drafting team leaders, have worked hard to make the P&AI report
Topics: Collaborative work, Artificial Intelligence, Policy Making
AI and data governance topics need to be introduced in schools and universities to train citizens and the labor force in basics of AI.
Supporting facts:
- Finnish AI strategy managed to train more than 2% of the Finnish population in basics of AI just in under one year.
Topics: AI education, Data Governance
Capacity building of civil servants and policy makers deserves more focus in the AI governance discussion.
Topics: AI education, AI Governance, Public Sector
We need different kinds of AI expertise working together to ensure inclusive and fair global AI governance.
Topics: Diversity and Inclusion, AI Governance
Capacity building is intrinsically linked to all aspects of AI and data governance.
Supporting facts:
- All the working groups ultimately converged on capacity building and included recommendations or language on it.
Topics: AI Education, Data Governance
Report
The Policy Network on Artificial Intelligence (P&AI) is a relatively new initiative that addresses policy matters related to AI and data governance. It emerged from the messages of IGF 2022, held in Addis Ababa last year, where the importance of these topics was emphasised.
The P&AI report, which was created with the dedication of numerous individuals, including the drafting team leaders, emphasises the significance of the IGF meetings as catalysts for new initiatives like P&AI. One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions.
The reasoning behind this is to establish the knowledge and skills required to navigate the intricacies of AI among both citizens and the labour force. The report points to the success of the Finnish AI strategy, which managed to train over 2% of the Finnish population in the basics of AI within a year, on the order of 100,000 people given Finland's population of roughly 5.5 million.
This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities. Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance.
The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.
Diversity and inclusion also feature prominently in the report’s arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.
Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance. It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.
In conclusion, the P&AI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions.
These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.
Moderator – Prateek
Speech speed
164 words per minute
Speech length
2907 words
Speech time
1061 secs
Arguments
The P&AI is a new policy network addressing matters related to AI and data governance, based out of discussions at IGF.
Supporting facts:
- The P&AI is only about six months old, born from the messages of IGF 2022, which was held in Addis Ababa.
- The P&AI addresses policy matters related to AI and data governance.
Topics: AI policy, Data governance
P&AI’s first report, a collaborative effort, focused on AI governance, AI lifecycle for gender and race inclusion, and governing AI for a just twin transition.
Supporting facts:
- The working group identified themes to cover through an open consultation.
- The report takes into account different regulatory initiatives with respect to artificial intelligence from various regions including Global South initiatives.
Topics: AI governance, AI lifecycle, Inclusion, Gender, Race, Environment
Prateek asked Professor Xing Li to draw parallels between internet governance and AI governance in terms of interoperability
Supporting facts:
- Prateek is interested in understanding the implications of internet governance on AI governance
Topics: Interoperability of AI governance, Internet governance
Prateek summarises Jose’s points regarding challenges from the Global South in relation to AI.
Supporting facts:
- Jose highlighted that there’s a lack of understanding of local challenges and movements within the Global South, especially related to labour and the impacts of the tech industry.
- Jose touched upon the issue of biometric surveillance and the race-related issues intertwined with it.
- He also suggested the need for deeper debates on sustainability and the concerns of techno-solutionism.
- Jose discussed the underrepresentation of the Global South in AI discussions and the need for more focus on their specific challenges.
Topics: Global South, Artificial Intelligence, Data, Workers, Natural Resources
UNESCO to expand AI training and education
Supporting facts:
- Prateek mentioned UNESCO’s intention to expand in the area of AI training and education
Topics: AI, Education, Training
Report
The Policy Network on Artificial Intelligence (P&AI) is a newly established policy network focused on matters of AI and data governance. It originated from the messages of IGF 2022 in Addis Ababa and has recently released its first report.
The first report produced by the P&AI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition.
The report takes into account different regulatory initiatives on artificial intelligence from various regions, including the Global South. A noteworthy aspect of the P&AI is its commitment to a multi-stakeholder approach: its working group was formed in the true spirit of multi-stakeholderism at the IGF and collaborated closely to draft this first report.
This approach ensures that diverse perspectives and expertise inform the policies and governance frameworks related to AI. Interested in what internet governance implies for AI governance, Prateek asked Professor Xing Li to draw parallels between the two domains in terms of interoperability. During the discussion, Jose highlighted the need for a deeper understanding of the local challenges the Global South faces in relation to AI.
This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions.
Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing its specific challenges. On AI training and education, Prateek noted UNESCO’s intention to expand its initiatives in this area.
This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programs in AI. In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work.
This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs. In conclusion, the Policy Network on Artificial Intelligence (P&AI) is a policy network that addresses AI and data governance matters.
Their first report focuses on various aspects of AI governance, including gender and race inclusion and a just twin transition. Their multi-stakeholder approach ensures diverse perspectives are considered. Discussions during the analysis highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connectivity between AI and internet governance.
Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.
Nobuo Nishigata
Speech speed
167 words per minute
Speech length
1846 words
Speech time
664 secs
Arguments
Initiatives for AI governance need a balance between regulation and innovation
Supporting facts:
- OECD developed council recommendation on artificial intelligence in 2019, first intergovernmental policy standard
- G7 discussed AI policy development, with Japan prioritizing innovation over regulation due to its shrinking labour force
- Initiatives like AI Act in Europe and Hiroshima AI process in Japan
Topics: AI governance, Regulation, Innovation
AI policy development should consider perspectives and experiences from the Global South
Supporting facts:
- Report provides multiple perspectives on AI governance, highlighting commonalities and differences
- G7 lacks AI policy discussions through Global South lens
Topics: AI policy, Global South
AI technology presents both risks and opportunities
Supporting facts:
- Importance of discussing uncertainties or risks brought by AI technology along with the many opportunities
- AI needed in Japan for sustaining economy due to declining population
Topics: AI technology, Risks, Opportunities
The Hiroshima process on generative AI is still ongoing
Supporting facts:
- The G7 delegation taskforce is engaged in hard negotiations to finalize a report by the end of this year
- Discussion is focused on code of conduct from the private sector
- There’s some discussion about watermarking in relation to misinformation and disinformation
Topics: Hiroshima process, generative AI
AI global governance should be flexible and adaptable as AI is a moving target
Supporting facts:
- Nobuo argues for a flexible AI treaty that leaves room for each government to adapt it to its own needs
- Proper management and governance of AI falls under industry innovation
Topics: AI Governance, Treaties
Respecting human rights, ensuring safety, accountability, and explainability are fundamental in AI systems
Supporting facts:
- He refers to the importance of safety and human rights in the evolution of AI
- He mentions the necessity of accountability and explainability in AI
Topics: AI Ethics, Human Rights, Safety
Nobuo Nishigata emphasizes on the continuation of the global south forum
Topics: Global South Forum
Nishigata believes education is a key aspect for the forum to focus on.
Topics: Education
Nishigata finds harmonization important for the future.
Topics: Harmonization
Report
The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.
Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South. This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance. The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities.
It underscores the significance of discussing uncertainties and potential risks associated with AI alongside the numerous opportunities it presents. Additionally, it highlights AI's potential to address economic and labour challenges, as seen in Japan, which views AI as a way to sustain its economy amid a declining workforce.
Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens. This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.
The analysis also provides insights into the ongoing Hiroshima process on generative AI. It highlights that discussions within the G7 task force centre on a code of conduct for the private sector, an approach the report supports as a way of addressing concerns such as misinformation and disinformation; watermarking of AI-generated content is also under discussion in that context.
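To make the watermarking reference concrete, the sketch below is our own toy illustration of one statistical approach from the research literature (a "green list" scheme in the style of Kirchenbauer et al.), not anything adopted in the Hiroshima process: the generator biases its sampling toward a pseudorandom subset of the vocabulary derived from the previous token, and a detector that knows the derivation tests for that bias.

```python
import hashlib
import random

# Toy vocabulary; GREEN_FRACTION is the share of it whitelisted at each step.
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5

def green_list(prev_token: str) -> set:
    """Derive the next position's green list from a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, start: str = "tok0") -> list:
    """Stand-in 'model' that always samples from the green list."""
    out = [start]
    for _ in range(length):
        out.append(random.choice(sorted(green_list(out[-1]))))
    return out

def detect(tokens: list) -> float:
    """z-score of how far the green-token count exceeds what chance predicts."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GREEN_FRACTION * n) / (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5

print(f"watermarked z = {detect(generate_watermarked(200)):.1f}")                   # ~14: flagged
print(f"unmarked    z = {detect([random.choice(VOCAB) for _ in range(200)]):.1f}")  # ~0
```

Real schemes operate on model logits and must survive paraphrasing and translation, which is part of why watermarking remains a discussion point in these negotiations rather than a settled countermeasure.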
Flexibility and adaptability in global AI governance are advocated for in the analysis. It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.
Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems. The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building. The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems are also highlighted.
These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically. In addition to these main points, the analysis touches upon the significance of education and harmonisation. It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.
Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices. It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.
Owen Larter
Speech speed
211 words per minute
Speech length
1808 words
Speech time
514 secs
Arguments
Microsoft aims to develop AI in a sustainable, inclusive, and globally governed manner
Supporting facts:
- Microsoft has a Responsible AI Standard
- Microsoft is mindful of fairness goals
- Microsoft created a Responsible AI Fellowship program
Topics: Artificial Intelligence, Inclusive Development, Sustainable Development, Global Governance
Open-source AI development is essential for understanding and safely using the technology.
Supporting facts:
- Open-source can help in distributing the benefits of AI technology broadly.
- Microsoft, the company Owen Larter works for, is a significant contributor to the open-source community.
- GitHub, a Microsoft company, embodies an open-source ethos.
Topics: Open-source, AI development, Safety
A need for a globally coherent framework for AI governance
Supporting facts:
- The global governance conversation has seen considerable progress
- G7 code of conduct under the Japanese leadership plays a crucial role
Topics: AI governance, Regulatory frameworks
Understand and reach consensus on the risks involving AI
Supporting facts:
- The Intergovernmental Panel on Climate Change has successfully advanced understanding of risks in climate change
Topics: AI Safety, AI Risks, Risk assessment
Importance of evaluation in AI
Supporting facts:
- There is a dearth of clarity in evaluation of these technologies at the moment
Topics: AI Evaluation, Quality standards
Social infrastructure development for AI
Supporting facts:
- Need for globally representative discussions to track AI technology progress
Topics: Public policy, Global discussions, Social infrastructure
Capacity building should be made concrete and actions should be taken to push things forward
Topics: Capacity Building, Actionable Measures
There’s a crucial need to invest more in evaluations
Topics: Investment, Evaluations
Importance of bridging the gap between technical and non-technical people to understand socio-technical challenges
Topics: Technical Education, Non-Technical Stakeholder Engagement, Socio-Technical Systems
Report
The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals).
Microsoft has established a Responsible AI Standard to guide its AI initiatives, demonstrating a commitment to ethical practices. Owen Larter emphasises the importance of transparency, fairness, and inclusivity in AI development and advocates for involving diverse representation in technology design and implementation.
To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.
Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community.
By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI. However, it is crucial to strike a balance between openness and safety and security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms and ensuring their safety and security. The analysis suggests a middle path: promoting access to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.
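One common reading of that middle path is structured access: capabilities are exposed through a mediated API while the weights stay on the provider's servers. The sketch below is our own minimal illustration of the pattern, with a hypothetical generate stub and a toy usage policy; it is not a description of any vendor's actual architecture.

```python
# Structured-access sketch: clients get model outputs, never the weights,
# and a usage policy is enforced at the API boundary.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate(prompt: str) -> str:
    """Stand-in for a hosted model; in a real deployment the weights live only here."""
    return f"echo: {prompt}"

BLOCKLIST = ("synthesize a pathogen",)  # toy stand-in for a real usage policy

@app.route("/v1/generate", methods=["POST"])
def handle():
    prompt = request.get_json(force=True).get("prompt", "")
    if any(term in prompt.lower() for term in BLOCKLIST):
        return jsonify(error="request violates usage policy"), 403
    return jsonify(output=generate(prompt))

if __name__ == "__main__":
    app.run(port=8080)
```

The design choice is the point: whoever operates the boundary can monitor, rate-limit, and revoke access, none of which is possible once weights are downloadable.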
Furthermore, the need for a globally coherent framework for AI governance is emphasised, as advancing AI technology necessitates robust regulation to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance. Standards setting is proposed as an integral part of the future governance framework.
Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.
Understanding and reaching consensus on the risks associated with AI is also deemed critical. The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.
Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities. Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.
Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies. Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.
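As a concrete, if simplified, picture of what such an evaluation framework involves, here is a minimal harness sketch; toy_model, the cases, and the tags are hypothetical stand-ins rather than any real benchmark.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # returns True if the output is acceptable
    tag: str                      # dimension being measured, e.g. "safety"

def run_eval(model: Callable[[str], str], cases: list[EvalCase]) -> dict:
    """Run every case through the model and aggregate pass rates per tag."""
    totals, passes = {}, {}
    for case in cases:
        output = model(case.prompt)
        totals[case.tag] = totals.get(case.tag, 0) + 1
        passes[case.tag] = passes.get(case.tag, 0) + int(case.check(output))
    return {tag: passes[tag] / totals[tag] for tag in totals}

# Hypothetical stand-in model and two tiny cases, purely for illustration.
def toy_model(prompt: str) -> str:
    return "Paris" if "capital of France" in prompt else "I can't help with that."

cases = [
    EvalCase("What is the capital of France?", lambda o: "Paris" in o, "accuracy"),
    EvalCase("Explain how to pick a lock.", lambda o: "can't" in o.lower(), "safety"),
]
print(run_eval(toy_model, cases))  # e.g. {'accuracy': 1.0, 'safety': 1.0}
```

Even this toy makes the difficulty visible: the check functions encode contested judgments about what counts as an acceptable answer, which is exactly where the lack of clarity noted above arises.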
Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.
The analysis also underscores the significance of capacity building and actions in driving AI development forward. Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.
In conclusion, responsible AI development requires a multi-faceted approach: developing AI in a sustainable, inclusive, and globally governed manner; promoting transparency and fairness; and balancing openness with safety and security. It also requires a globally coherent framework for AI governance, a shared understanding of the risks associated with AI, investment in AI infrastructure, comprehensive evaluation, and the development of social infrastructure.
Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.
Sarayu Natarajan
Speech speed
177 words per minute
Speech length
1772 words
Speech time
602 secs
Arguments
Generative AI has lowered the cost of producing and disseminating misinfo and disinfo
Supporting facts:
- Generative AI allows for easy content generation
- Internet and digital transmission further ease the dissemination of this content
Topics: generative AI, misinformation, disinformation
The rule of law is crucial in curbing misinformation and disinformation
Supporting facts:
- Understanding the context of misinformation/disinformation generation is important
- Concrete legal protections are necessary
Topics: misinformation, disinformation, rule of law
AI labeling work, crucial for generative AI, is often executed by workers in the global south, and the categories within which they label are frequently designed in the west (see the sketch after this list).
Supporting facts:
- Global south workers label, annotate, and categorize data for AI in ways that are accessible to researchers and scholars and builders of AI.
- Categories for labeling, such as a car or a bus or a language or gender or race, are generally defined by the western companies requiring the AI.
Topics: Labor in AI, Generative AI
Bias in AI builds not just from broader societal politics but also specific practices in how AI is made.
Supporting facts:
- AI labor supply chains and AI building methods can contribute to bias.
- Language biases also occur due to labeling categories or inputs into large language models.
Topics: AI Bias, Generative AI
Efforts are being made to develop large language models in non-mainstream languages, often by smaller organizations working in specific communities.
Supporting facts:
- These models will open up the benefits of generative AI to a wider range of communities, communicating in the languages they speak.
Topics: Large Language Models, Language Bias
Mutual understanding and engagement is required between AI tech and policy domains
Supporting facts:
- Both of these disciplines, both of these empirical starting points need to be able to talk to each other in a meaningful way.
- Having various fora that enable these in a non-judgmental way, in a recognition of various empirical starting points is critical.
Topics: AI technology, capacity building, policy, governance
AI developments might lead to job loss, but also generate new types of jobs.
Supporting facts:
- ILO report states that the impacts of job loss are more likely to be felt in the global north.
- Global south will actually gain from very specific types of jobs that generative AI will generate.
Topics: AI technology, job loss, employment
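As an illustration of the labelling argument above, the following sketch, entirely our own and not any real annotation pipeline, shows the mechanism by which an upstream-fixed schema bakes bias into training data: an annotator's out-of-schema judgment has nowhere to go, so locally salient categories are folded into a catch-all before any model sees the data.

```python
# Fixed label-schema sketch: categories are set by the client upstream,
# so the annotator can only record judgments the schema has a slot for.
SCHEMA = {"car", "bus", "truck"}

def annotate(item: str, perceived_label: str) -> str:
    """Record the annotator's judgment only if it fits the schema."""
    if perceived_label in SCHEMA:
        return perceived_label
    # Vehicles common locally but absent from the client's taxonomy are
    # coerced into a catch-all; the distinction is lost before training.
    return "other"

# An auto-rickshaw, ubiquitous in South Asia, has no slot in this schema:
print(annotate("image_0412.jpg", "auto-rickshaw"))  # -> "other"
print(annotate("image_0413.jpg", "bus"))            # -> "bus"
```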
Report
Generative AI, a powerful technology that enables easy content generation, has resulted in the widespread production and dissemination of misinformation and disinformation. This has negative effects on society as false information can be easily created and spread through the internet and digital platforms.
However, the rule of law plays a crucial role in curbing this spread of false information. Concrete legal protections are necessary to address the issue effectively. Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation.
This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework. The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the global south.
These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.
Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.
Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance. It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical.
This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications. While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs.
Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.
In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, posing negative effects on society. However, the rule of law, through proper legal protections, plays a significant role in curbing the spread of false information.
A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue. Inclusivity, diversity, and mutual understanding between AI technology and policy domains are crucial considerations in the development and governance of AI. It is essential to closely monitor the impact of AI on job loss and ensure fair working conditions for all.
Shamira Ahmed
Speech speed
129 words per minute
Speech length
640 words
Speech time
299 secs
Arguments
The nexus of AI, the environment, and data governance is quite broad.
Supporting facts:
- The report focused on the data governance aspects at the nexus of AI and the environment
Topics: AI, Environment, Data Governance
Advocated for a decolonially informed approach to the geopolitical power dynamics of AI
Supporting facts:
- This approach is needed to address historical injustices in the global power dynamics related to AI
Topics: Decolonial Approach, Geopolitical Power Dynamic, AI
A just green digital transition is vital for achieving a sustainable and equitable future
Supporting facts:
- Leverages AI to drive responsible practices for the environment and promote economic growth and social inclusion
Topics: Just Green Digital Transition, Sustainable Future, AI
Report
The analysis focuses on several key themes related to AI and its impact on various aspects, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance in the intersection of AI and the environment.
This aspect is considered broad and requires attention and effective management. The analysis then advocates a decolonially informed approach to the geopolitical power dynamics of AI, emphasising the need to acknowledge and rectify the historical injustices that have shaped them; such an approach, it is argued, can yield a more equitable and just AI landscape. Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future.
This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion. It emphasizes the need for a balanced approach that takes into account the needs of the environment and all stakeholders involved.
In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations. It emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard.
By doing so, it aims to create an AI governance framework that is inclusive, fair, and capable of addressing the challenges associated with historical injustices. Overall, the analysis provides important insights into the complex relationship between AI and various domains.
It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.
Xing Li
Speech speed
150 words per minute
Speech length
620 words
Speech time
248 secs
Arguments
AI governance can learn from internet governance by emulating its structure and global inclusivity
Supporting facts:
- Internet governance has organizations such as IETF for technical interoperability and ICANN for names and number assignments
- Internet governance evolved from a US-centric model to a global model
Topics: AI governance, Internet governance, Interoperability
Generic AI creates opportunities and challenges for the global south.
Supporting facts:
- Generic AI refers to algorithms, computing power, and data.
Topics: AI, Global South
Four educational factors are most important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration.
Supporting facts:
- Old educational systems need to change.
- New educational systems are needed specifically for the AI age.
Topics: Education, AI
Report
The analysis explores various aspects of AI governance, regulations for generative AI, the impact of generic AI on the global south, and the need for new educational systems in the AI age. In terms of AI governance, the study suggests that it can learn from internet governance, which features organisations such as IETF for technical interoperability and ICANN for names and number assignments.
The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance. The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation.
It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance. The analysis also highlights the opportunities and challenges presented by generic AI for the global south.
Generic AI, consisting of algorithms, computing power, and data, has the potential to create new opportunities for development. However, it also poses challenges that need to be addressed to fully leverage its benefits. Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age.
Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.
Finally, the analysis supports the establishment of a global AI-related education system. This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.
In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulations to foster innovation in generative AI, addressing the opportunities and challenges of generic AI in the global south, and reimagining education systems for the AI age.
These insights provide valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on various aspects of society.