International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Liming Zhu
Australia has taken significant steps in developing AI ethics principles in collaboration with industry stakeholders. The Department of Industry and Science, in consultation with these stakeholders, established these principles in 2019. The country’s national science agency, CSIRO, along with the University of New South Wales, has been working to operationalise these principles over the past four years.
The AI ethics principles in Australia have a strong focus on human-centred values, ensuring fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability. These principles aim to guide the responsible adoption of AI technology. By prioritising these values, Australia aims to ensure that AI is used in ways that respect and protect individuals’ rights and well-being.
In addition to developing AI ethics principles, it was suggested that the use of large language models and AI should be balanced with system-level guardrails. ChatGPT, for example, does not pass user prompts to the model verbatim: instructions such as 'please always answer ethically and positively' are attached to every prompt at the system level. This demonstrates the importance of incorporating ethical considerations into the design and use of AI technologies.
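As a rough illustration of this mechanism, the sketch below shows how a system-level guardrail can wrap every user prompt before it reaches a model. The guardrail text, the guarded_prompt wrapper, and the placeholder call_model function are illustrative assumptions, not any vendor's actual implementation:

```python
# Minimal sketch of a system-level guardrail: every user prompt is wrapped
# with organisation-specific instructions before it reaches the model.
GUARDRAIL = (
    "Please always answer ethically and positively. "
    "Refuse requests that could cause harm."
)

def call_model(messages: list[dict]) -> str:
    # Placeholder for a real chat-completions API call; returns a canned
    # reply so the sketch runs without network access.
    return f"(model reply to {len(messages)} messages)"

def guarded_prompt(user_prompt: str) -> str:
    # The wrapping happens at the system level; the user never sees it.
    messages = [
        {"role": "system", "content": GUARDRAIL},  # guardrail prepended
        {"role": "user", "content": user_prompt},  # original prompt unchanged
    ]
    return call_model(messages)

print(guarded_prompt("Summarise today's session."))
```

In the same spirit, organisations that build on large language models can attach their own context-specific guardrails at this layer rather than modifying the model itself.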
Diversity of stakeholder groups and their perspectives on AI and AI governance is viewed as a positive factor. The presence of different concerns from these groups allows for robust discussions and a more comprehensive approach in addressing potential challenges and ensuring the responsible deployment of AI. Fragmentation in this context is seen as an opportunity rather than a negative issue.
Both horizontal and vertical regulation of AI are deemed necessary. Horizontal regulation entails regulating AI as a whole, while vertical regulation focuses on specific AI products. It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations.
Collaboration and wider stakeholder involvement are considered vital for effective AI governance. Scientific evidence and advice should come from diverse sources and require broader collaboration between policy and stakeholder groups. This approach ensures that AI policies and decisions are based on a comprehensive understanding of the technology and its impact.
Overall, Australia’s development of AI ethics principles, the emphasis on system-level guardrails, recognition of diverse stakeholder perspectives, and the need for both horizontal and vertical regulation reflect a commitment to responsible and accountable AI adoption. Continued collaboration, engagement, and evidence-based policymaking are essential to navigate the evolving landscape of AI technology.
Audience
The analysis of the speakers’ arguments and supporting facts revealed several key points about AI governance and its impact on various aspects of society. Firstly, there is a problem of fragmentation in AI governance, both at the national and global levels. This fragmentation hinders the development of unified regulations and guidelines for AI technologies. Various agencies globally are dealing with AI governance, but they approach the problem from different perspectives: developmental, sociological, ethical, philosophical, and computer-scientific. The need to reduce this fragmentation is recognized in order to achieve more effective and cohesive AI governance.
On the topic of AI as a democratic technology, it was highlighted that AI can be accessed and interacted with by anyone, which sets it apart from centralized technologies like nuclear technology. This accessibility creates opportunities for a wider range of individuals and communities to engage with AI and benefit from its applications.
However, when considering the global governance of AI, the problem of fragmentation becomes even more apparent. Audience members noted the existence of fragmentation in global AI governance and highlighted the need for multi-stakeholder engagement to address this issue effectively. Reference was made to ongoing talks about creating an International Atomic Energy Agency (IAEA)-like organization for AI governance, which could help regulate and coordinate AI development across countries.
Another important aspect discussed was the need for a risk-based approach in AI governance. One audience member, a diplomat from the Danish Ministry of Foreign Affairs, expressed support for the EU AI Act’s risk-based approach. This approach focuses on identifying and mitigating potential risks associated with AI technologies. It was emphasized that a risk-based approach could help strike a balance between fostering innovation and ensuring accountability in AI development.
The discussions also touched upon the importance of follow-up mechanisms, oversight, and accountability in AI regulation. Questions were raised about how to ensure the effective implementation of AI regulations and the need for monitoring the compliance of AI technologies with these regulations. This highlights the importance of establishing robust oversight mechanisms and accountability frameworks to ensure that AI technologies are developed and deployed responsibly.
In terms of the impact of AI on African countries, it was noted that while AI is emerging as a transformative technology globally, its use is geographically limited, particularly in Africa. One audience member pointed out that the conference discussions only had a sample case from Equatorial Guinea, highlighting the lack of representation and implementation of AI technologies in African countries. It was also mentioned that Africa lacks certain expertise in AI and requires expert guidance and support to prepare for the realities of AI’s development and deployment in the region.
Furthermore, questions arose about the enforceability and applicability of human rights in the context of AI. The difference between human rights as a moral framework and as a legal framework was discussed, along with the need to learn from established case law in International Human Rights Law. This raises important considerations about how human rights principles can be effectively integrated into AI governance and how to ensure their enforcement in AI technologies.
Additionally, concerns were voiced about managing limited resources while maintaining public stewardship in digital public goods and infrastructure. The challenge of balancing public stewardship with scalability due to resource limitations was highlighted. This poses a significant challenge in ensuring the accessibility and availability of digital public goods while managing the constraints of resources.
Finally, the importance of inclusive data collection and hygiene in conversational AI for women’s inclusion was discussed. Questions were raised about how to ensure equitable availability of training data in conversational AI and how to represent certain communities without infringing privacy rights or causing risks of oppression. This emphasizes the need to address biases in data collection and ensure that AI technologies are developed in a way that promotes inclusivity and respect for privacy and human rights.
In conclusion, the analysis of the speakers’ arguments and evidence highlights the challenges and opportunities in AI governance. The problem of fragmentation at both the national and global levels calls for efforts to reduce it and to strengthen global governance. Additionally, the accessibility of AI as a democratic technology creates opportunities for wider engagement. However, there are limitations in AI adoption in African countries, emphasizing the need for extended research and expert guidance. The enforceability and applicability of human rights in AI, managing limited resources in digital public goods, and ensuring inclusive data collection in conversational AI were also discussed. These findings emphasize the importance of addressing these issues to shape responsible and inclusive AI governance.
Kyung Ryul Park
Kyung Ryul Park has assumed the role of moderator for a session focused on AI and digital governance, which includes seven talks specifically dedicated to exploring this topic. The session is highly relevant to SDG 9 (Industry, Innovation and Infrastructure) as it delves into the intersection of technology, innovation, and the development of sustainable infrastructure.
Park’s involvement as a moderator reflects his belief in the significance of sharing knowledge and information about AI and digital governance. This aligns with SDG 17 (Partnerships for the goals), emphasizing the need for collaborative efforts to achieve sustainable development. As a moderator, Park aims to provide a comprehensive overview of the ongoing research and policy landscape in the field of AI and digital governance, demonstrating his commitment to facilitating knowledge exchange and promoting effective governance in these areas.
The inclusion of Matthew Liao, a professor at NYU, as the first speaker in the session is noteworthy. Liao’s expertise in the field of AI and digital governance lends valuable insights and perspectives to the discussion. As the opening speaker, Liao is expected to lay the foundation for further discussions throughout the session.
Overall, the session on AI and digital governance is highly relevant to the objectives outlined in SDG 9 and SDG 17. Through Kyung Ryul Park’s moderation and the contributions of speakers like Matthew Liao, the session aims to foster knowledge-sharing, promote effective governance, and enhance understanding of AI and its implications in the digital age.
Atsushi Yamanaka
The use of artificial intelligence (AI) and digital technologies in developing nations presents ample opportunities for development and innovation. These technologies can provide innovative products and services that meet the needs of developing countries. For instance, mobile money, which originated in Kenya, exemplifies how modern digital technologies are being utilized to create innovative solutions driven by local needs.
Moreover, Information and Communication Technology (ICT) plays a vital role in achieving the Sustainable Development Goals (SDGs). ICT has the potential to drive socio-economic development and significantly contribute to the chances of achieving these goals. It can enhance connectivity, access to information, and facilitate the adoption of digital solutions across various sectors.
However, despite the progress made, the issue of digital inclusion remains prominent. As of 2022, approximately 2.7 billion people globally are still unconnected to the digital world. Bridging this digital divide is crucial to ensure equal access to opportunities and resources.
Additionally, there are challenges related to digital governance that need to be addressed. Growing concerns about data privacy, cybersecurity, AI, internet and data fragmentation, and misinformation underscore the need for effective governance. The increasing prevalence of cyber warfare and the difficulty in distinguishing reality from fake due to advanced AI technologies are particularly worrisome. Developing countries also face frustrations due to the perceived one-directional flow of data, concerns over big tech companies controlling data, and worries about legal jurisdiction over critical national information stored in foreign servers.
To tackle these issues, it is suggested that an AI Governance Forum be created instead of a global regulation for AI. After 20 years of discussions on internet governance, no suitable governance model has emerged, which makes the establishment of a binding global regulation unlikely. An AI Governance Forum in which different stakeholders actively participate and share successful initiatives therefore offers a more practical approach to governing AI.
AI is gaining traction in Africa, despite a limited workforce. Many startups in Africa are leveraging AI and other data-based solutions to drive innovation. However, to further enhance AI adoption, there is a need to establish advanced institutions in Africa that can train more AI specialists. Examples of such institutions include Carnegie Mellon University Africa and the African Institute for Mathematical Sciences in Rwanda. Additionally, African students studying AI in countries like Japan and Korea are further augmenting expertise in this field.
Digital technology also presents a unique opportunity for women’s inclusion. It offers pseudonymization features that can help mask gender while providing opportunities for inclusion. In fact, digital technology provides more avenues for women’s inclusion compared to traditional in-person environments, thereby contributing to the achievement of gender equality.
It is worth noting that open source initiatives, despite their advantages, face scalability issues. Scalability has always been a challenge for open source initiatives and ICT for development. However, the Indian MOSIP model has successfully demonstrated its scalability by serving 1 billion people. This highlights the importance of finding innovative solutions to overcome scalability barriers.
In conclusion, the use of AI and digital technologies in developing nations offers significant opportunities for development and innovation. However, challenges such as digital inclusion, data privacy, cybersecurity, and data sovereignty must be addressed. Establishing an AI Governance Forum and advanced institutions for training AI specialists can contribute to harnessing these technologies more effectively. Additionally, digital technology can create unique opportunities for women’s inclusion. Finding innovative solutions for open source scalability is also crucial for the successful adoption of ICT for development.
Takayuki Ito
Upon analysis, several compelling arguments and ideas related to artificial intelligence (AI) and its impact on various domains emerge. The first argument revolves around the development of a hyper-democracy platform, a line of work that began with the COLLAGREE system in 2010. While only limited details of the system are given, its purpose is to leverage AI to enhance democratic processes such as online consensus building. The project is regarded positively, indicating an optimistic outlook on the potential of AI to improve democratic systems globally.
Another noteworthy argument concerns the role of AI in addressing social network problems such as fake news and echo chambers. AI recognition of the structure of discussion texts is highlighted as a potential solution. By leveraging AI algorithms to analyse and detect patterns in text, it becomes possible to identify and counteract the spread of false information and the formation of echo chambers within social networks. The positive sentiment expressed further underscores the belief in the power of AI to mitigate the negative impact of misinformation on society.
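To make the idea concrete, the minimal sketch below learns surface patterns that distinguish sensational from ordinary text. It is not the speakers' actual system; the toy dataset and the scikit-learn pipeline are assumptions for illustration, and a real misinformation detector would need large labelled corpora and much richer features:

```python
# Illustrative sketch: flagging suspect text by learned surface patterns.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset; labels: 1 = misleading style, 0 = ordinary reporting.
texts = [
    "SHOCKING cure THEY don't want you to know!!!",
    "Ministry publishes quarterly employment figures",
    "You won't BELIEVE what this one trick does",
    "Parliament debates revised data-protection bill",
]
labels = [1, 0, 1, 0]

# TF-IDF over word unigrams and bigrams feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Unbelievable secret revealed, share before it's deleted!"]))
```

Even this toy pipeline illustrates the principle: wording and structure carry signal that a model can learn to flag for human review.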
Additionally, the D-agree system, which grew out of the COLLAGREE work, is introduced as a potential solution for addressing specific challenges in Afghanistan. The system was used to collect opinions from Kabul civilians, indicating a focus on incorporating the perspectives of local populations. Furthermore, collaboration with the United Nations Habitat (UN-Habitat) underscores the potential for D-agree to contribute to the achievement of Sustainable Development Goals related to good health and well-being (SDG 3) and peace, justice, and strong institutions (SDG 16).
Lastly, the positive sentiment encompasses the potential of AI to support crowd-scale discussions through the use of multiple AI agents. A multi-agent architecture for group decision support is being developed, which emphasises the collaborative capabilities of AI in facilitating large-scale deliberations. This development aligns with the goal of fostering industry, innovation, and infrastructure (SDG 9).
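As a loose illustration of such a multi-agent architecture, the sketch below has several specialised agents contribute different facilitation functions to a shared discussion. The agent roles and their hand-written behaviours are hypothetical stand-ins for the LLM-backed agents under development:

```python
# Toy sketch of multi-agent discussion support: specialised agents observe a
# shared discussion and each adds a different facilitation contribution.
from dataclasses import dataclass, field

@dataclass
class Discussion:
    posts: list[str] = field(default_factory=list)

class Agent:
    role = "base"
    def act(self, d: Discussion) -> str:
        raise NotImplementedError

class SummariserAgent(Agent):
    role = "summariser"
    def act(self, d: Discussion) -> str:
        # A real agent would condense the posts; here we just count them.
        return f"Summary of {len(d.posts)} posts so far."

class FacilitatorAgent(Agent):
    role = "facilitator"
    def act(self, d: Discussion) -> str:
        # A real agent would probe gaps in the debate with an LLM.
        return "Has anyone considered the opposite view?"

discussion = Discussion(posts=["We need more buses.", "Fares are too high."])
for agent in (SummariserAgent(), FacilitatorAgent()):
    discussion.posts.append(f"[{agent.role}] {agent.act(discussion)}")
print("\n".join(discussion.posts))
```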
The overall analysis showcases the diverse applications and benefits of AI in various domains, including democracy, social networks, conflict zones like Afghanistan, and large-scale discussions. These discussions and arguments highlight the hopeful perspective of leveraging AI to address complex societal challenges. However, it is important to note that further information and evidence would be necessary to fully understand the potential impact and limitations of these AI systems.
Summary: The analysis reveals promising arguments for the use of artificial intelligence (AI) in different domains. The development of a hyper-democracy platform, building on the COLLAGREE system, shows optimism for enhancing democratic processes. AI’s potential in combating fake news and echo chambers is underscored, providing hope for addressing social network problems. The D-agree system’s use in collecting opinions from Kabul civilians in Afghanistan and the collaboration with UN-Habitat suggest its potential for advancing SDG goals. The use of multiple AI agents for crowd-scale discussions exhibits AI’s collaborative capabilities. Overall, AI presents opportunities to tackle complex societal challenges, though further information is needed to fully evaluate its impact.
Rafik Hadfi
Digital inclusion is an essential aspect of modern society and is closely linked to the goal of gender equality. It plays a crucial role in integrating marginalized individuals into the use of information and communication technology (ICT) tools. Programs conducted in Afghanistan have shown that digital inclusion efforts can empower women by providing them with the knowledge and resources to actively engage with ICT technologies, bridging the societal gap and enabling them to participate more fully in digital spaces.
Artificial Intelligence (AI) has significant potential in facilitating digital inclusion and promoting social good. Case studies conducted in Afghanistan demonstrate that integrating AI into online platforms predominantly used by women can enhance diversity, reduce inhibitions, and foster innovative thinking among participants. This highlights the transformative impact of AI in empowering individuals and ensuring their active involvement in digital spaces.
Additionally, emphasizing community empowerment and inclusion in data collection processes is crucial for achieving the Sustainable Development Goals (SDGs). By involving local communities in training programs focused on AI systems, effective datasets can be created and maintained, ensuring diversity and representation. This approach recognizes the significance of empowering communities and involving them in decision-making processes, thereby promoting inclusivity and collaborative efforts in achieving the SDGs.
It is worth noting that training AI systems solely in English can lead to biases towards specific contexts. To address this bias and ensure a fairer and more inclusive AI system, training AI in different languages has been implemented in Indonesia and Afghanistan. By expanding the linguistic training of AI, biases towards specific contexts can be minimized, contributing to a more equitable and inclusive implementation of AI technologies.
Moreover, AI has been employed in Afghanistan to address various challenges faced by women and promote women’s empowerment and gender equality. By utilizing AI for women empowerment initiatives, Afghanistan takes a proactive approach to address gender disparities and promote inclusivity in society.
In conclusion, digital inclusion, AI, and community empowerment are crucial components in achieving the SDGs and advancing towards a sustainable and equitable future. Successful programs in Afghanistan demonstrate the transformative potential of digital inclusion in empowering women. AI can further facilitate digital inclusion and promote social good by enhancing diversity and inclusivity in digital spaces. Emphasizing community empowerment and inclusion in data collection processes is essential for creating effective and diverse datasets. Training AI in different languages helps minimize bias towards specific contexts, promoting fairness and inclusivity. Lastly, utilizing AI for women empowerment initiatives contributes significantly to achieving gender equality and equity.
Matthew Liao
The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of regulations to prevent harm and protect human rights. They argue that regulations should be based on a human rights framework, focusing on the promotion and safeguarding of human rights in relation to AI. They suggest conducting human rights impact assessments and implementing regulations at every stage of the technology process.
The speakers all agree that AI regulations should not be limited to the tech industry or experts. They propose a collective approach involving tech companies, AI researchers, governments, universities, and the public. This multi-stakeholder approach would ensure inclusivity and effectiveness in the regulation process.
Enforceability is identified as a major challenge in implementing AI regulations. The complexity of enforcing regulations and ensuring compliance is acknowledged. The speakers believe that regulations should be enforceable but recognize the difficulties involved.
The analysis draws comparisons to other regulated industries, such as nuclear energy and the biomedical model. The speakers argue that a collective approach, similar to nuclear energy regulation, is necessary in addressing AI challenges. They also suggest using the biomedical model as a reference for AI regulation, given its successful regulation of drug discovery.
A risk-based approach to AI regulation is proposed, considering that different AI applications carry varying levels of risk. The speakers advocate for categorizing AI into risk-based levels, determining the appropriate regulations for each level.
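As an illustration of what risk-based categorisation might look like in practice, the sketch below maps example applications to risk tiers in the spirit of the EU AI Act. The tier names mirror the Act's broad categories, but the example applications and obligations are simplified assumptions:

```python
# Illustrative only: a risk-tier lookup in the spirit of the EU AI Act.
RISK_TIERS = {
    "unacceptable": {"examples": ["social scoring"],
                     "obligation": "prohibited"},
    "high": {"examples": ["medical devices", "hiring tools"],
             "obligation": "conformity assessment before deployment"},
    "limited": {"examples": ["chatbots"],
                "obligation": "transparency notices to users"},
    "minimal": {"examples": ["spam filters"],
                "obligation": "voluntary codes of conduct"},
}

def obligation_for(application: str) -> str:
    # Walk the tiers and return the first obligation that matches.
    for tier, info in RISK_TIERS.items():
        if application in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: requires case-by-case assessment"

print(obligation_for("chatbots"))
```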
Potential concerns regarding regulatory capture are discussed, where regulatory agencies may be influenced by the industries they regulate. However, the analysis highlights the aviation industry as a counterexample: despite concerns about regulatory capture, regulation has driven safety innovations in aviation.
In summary, the analysis underscores the importance of AI regulation in mitigating risks and protecting human rights. It emphasizes the need for a human rights framework, a collective approach involving various stakeholders, enforceability, risk-based categorization, and lessons from other regulated industries. Challenges such as enforceability and regulatory capture are acknowledged, but the analysis encourages the implementation of effective regulations for responsible and ethical AI use.
Seung Hyun Kim
The intersection of advanced technologies and developing countries can aggravate social and economic problems. In Colombia, drug cartels have found a new distribution method in the cable car system. This not only enables more efficient operations for the cartels but also poses a significant challenge to law enforcement agencies.
Another concern is the potential misuse of AI technologies in communities that are already vulnerable to illicit activities. The speakers highlight the need to address this issue, as the advanced capabilities of AI can be exploited by those involved in criminal activities, further exacerbating social and economic problems in these areas.
In terms of governance, the Ethiopian government faces challenges due to the fragmentation of its ICT and information systems. There are multiple systems running on different platforms that do not communicate with each other. This lack of integration and coordination hampers efficient governance and slows down decision-making processes. It is clear that the government needs to address this issue in order to improve overall effectiveness and service delivery.
Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infrastructure, raises concerns about technology sovereignty. By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing control over its own systems and data. This dependence undermines the ability to exercise full control and authority over technological advancements within the country.
The speakers express a negative sentiment towards these issues, highlighting the detrimental impact they can have on social and economic development. It is crucial for policymakers and stakeholders to address these challenges and find appropriate solutions to mitigate the negative effects of advanced technologies in developing countries.
Overall, the analysis reveals the potential risks and challenges that arise from the intersection of advanced technologies and developing countries. By considering these issues, policymakers can make more informed decisions and implement strategies that help to maximize the benefits of technology while minimizing the negative consequences.
Dasom Lee
Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, which focuses on the relationship between AI, infrastructure, and environmental sustainability. The lab’s research covers energy transition, transportation, and data centers, addressing key challenges in these areas. It currently has five projects aligned with these research objectives.
One significant concern is the lack of international regulations on data centers, particularly in relation to climate change. The United States, for instance, lacks strong federal regulations despite having the most data centers. State governments also lack the expertise to propose relevant regulations. This highlights the urgent need for global standards to address the environmental impact of data centers.
In the field of automated vehicle research, there is a noticeable imbalance in focus. The emphasis is primarily on technological improvements, neglecting the importance of social sciences in understanding the broader implications of this technology. The lab at KAIST recognizes this gap and is using quantitative and statistical methods to demonstrate the necessity of involving social science perspectives in automated vehicle research. This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology.
Privacy regulations present a unique challenge due to their contextual nature. The understanding and perception of privacy vary across geographical regions, making universal regulation unrealistic. To address this challenge, the KAIST-NYU project plans to conduct a survey to explore privacy perceptions and potential future interactions based on culture and history. This approach will help policymakers develop tailored and effective privacy regulations that respect different cultural perspectives.
To summarise, Dasom Lee and the AI and Cyber-Physical Systems Policy Lab at KAIST are making valuable contributions to AI, infrastructure, and environmental sustainability. Their focus on energy transition, transportation, and data centers, along with ongoing projects, demonstrates their commitment to finding practical solutions. The need for data center regulations, involvement of social sciences in automated vehicle research, and contextualization of privacy regulations are critical factors in the development of sustainable and ethical technologies.
Session transcript
Kyung Ryul Park:
I’m really honored to moderate this session. So we have actually a wonderful group of distinguished speakers today. So we have seven very interesting talks. But basically, this is the kind of networking session, but at the same time, we try to share our knowledge and information about the current landscape of research and also the policy in this field of AI and digital governance. So I’m very excited to introduce my speakers, who actually need very little introduction. So we’re going to have the seven talks. And then after that, maybe we might want to have some Q&A sessions to share our thoughts all together. So first of all, I’d like to introduce Professor Matthew Liao from NYU, who is actually currently in the States. So Matthew, are you here? I’m here. Sure. I would like to share your thoughts, and then maybe we’ll hear from Matthew. And then he’s giving a kind of very introductory and very fundamental questions for the digital governance from the perspective of human rights. Matthew, the floor is yours.
Matthew Liao:
Thank you, Kyung. So hi, everybody. Sorry, I couldn’t be there in person, but I’m very honored and delighted to join you. So we all know that AI has very incredible capabilities. They’re going to be able to help us develop medicine faster. In public health, they’re going to be able to identify those who are at risk of being in shelter. They’re going to be able to help us with the environment. At the same time, these powerful AIs also come with dangers. So many people are aware that the data on which AI is trained can be biased and discriminatory. At NYU and other educational institutions, we’re grappling with ChatGPT and what that means for writing essays and plagiarism. Elections are coming up. People are worried that AI could be used to, you know, sow disinformation and distrust and influence elections. AI is already also being used in Ukraine and other wars. So there’s a question of whether AI is leading us towards sort of mutually assured destruction. And so to make sure that AI produces the right kind of benefits for everybody, and doesn’t just cause harm, governments around the world are working really hard to try to come up with the right regulatory framework. So two weeks ago, President Yoon of the Republic of Korea was at NYU, and he talked about a digital Bill of Rights. In July, President Biden secured the voluntary commitments of a number of tech companies to uphold three principles when using AI: safety, security, and trust. The European Union is getting ready to adopt the EU AI Act, which will be one of the world’s first comprehensive laws on AI. And so this brings me to my lightning remarks today. So assuming that we should try to regulate AI in some ways, how should we go about regulating it? And so my students in my lab and I have been studying this issue, and we’ve structured this topic into the 5W1H framework. So the first question is what should be regulated? So that is the object of regulation. So many people talk about regulating data because how we collect them could raise issues such as bias and privacy. People talk about regulating the algorithms because as impressive as they are, algorithms can also produce bad results. So take generative AI like ChatGPT, it’s known to hallucinate and make up stuff. There are also people who think that we should regulate by sectors. So for example, we should have regulations for self-driving cars, another set of regulations for medical devices, and so on and so forth. And then finally, the EU thinks that we should regulate based on risks, sort of whether the risk is going to be acceptable or too high or low, and so on and so forth. And the general issue here is that over-regulation could end up stifling innovation, but under-regulation could lead to harms and violations of human rights. So some of the questions that we can talk about is like, if someone wants to regulate large language models such as ChatGPT, where would they even start? Would it be the training data? Would it be the models themselves? Would it be the application? Or another question we can ask is whether EU’s risk-based approach, is that the way to go? And we can talk more about that in Q&A. So let’s turn to the question of why we should regulate. Well, there are many reasons. So we could regulate to promote national interests, for example, in order to establish a country as a leader in AI. We can also regulate for legal reasons to make sure that new AI technologies comport with existing laws.
Or we can regulate for ethical reasons, for instance, to make sure that we protect human rights. And some say to make sure that AIs don’t cause human extinction. And of course, as an ethicist, I would hope that all regulations would conform to the highest ethical standards. But is this realistic? For instance, a country that’s trying to win the AI race may feel that it has no choice but to cut ethical corners. So how optimistic or pessimistic should we be that governments will pursue AI in an ethical way? We can make this discussion more concrete. A lot of people already signed a letter in 2015 arguing that we should ban lethal autonomous weapons, but these weapons are already being used. Is an AI race a good thing? If not, what do we need to avert an AI race? So now let’s talk about who should be doing the regulating. Well, there are a number of parties and stakeholders here. So you’ve got the companies, the AI researchers themselves, the governments, universities, members of the public. Now some people, especially those in the tech industry, are concerned that non-specialists would not know AI well enough to regulate it. Is this true? Should we leave the regulation to people in the know, to the experts? And other people think that we shouldn’t just rely on industries to regulate themselves. Why is that? And what’s the role of the public in regulating AI? And what’s the best way to engage the public? For the ‘when’, we can talk about when we should begin the regulation process. That is, when in the lifecycle of a technology should we begin to regulate? So we can regulate at the beginning, which would be upstream, right? Or we can regulate once a product has been produced, which would be more downstream. We can also regulate the entire lifecycle from start to finish in every stage of the development. Now companies will say that they already have a regulatory process in place for their products. So what I have in mind is independent external regulation. And in the US, at least, the regulations tend to be more downstream, you know, external regulations. Take, for example, ChatGPT. It’s already out there being used. And now we’re just grappling with how we should regulate it, externally speaking. Downstream regulation is usually seen as being more pro-innovation and pro-companies. How feasible would it be for an external regulatory body to regulate fast-paced AI research and development? Is downstream regulation enough, or should we be taking a more proactive approach and regulate earlier in the process to ensure more protection for humans? We can also ask, where should the regulation take place? Here, we can regulate at the local level, at the national level, at the international level, or all of the above. So how important is it for us to be able to coordinate at the international level? Are we gonna be able to do it effectively? We don’t have a very good record with respect to climate change, so can we count on doing that with respect to AI? What would it mean to regulate at a local level? And how can universities, for example, contribute to AI governance? And finally, we can kind of talk about how we should regulate. And by this, I mean, what kind of policies should we try to enact when regulating AI? So ideally, we’re looking for policies that can keep pace with innovation and won’t stifle it. At the same time, hopefully, these policies will be enforceable. For example, through our legal system.
Many people talk about transparency, accountability, and explainability as being important tools in AI regulation. Are those enough? If not, what other policies do we need? So I’ve been doing a lot of work on something called the human rights framework, where I think we should think about regulating from a human rights perspective. We should make sure that people’s human rights are protected and promoted through AI. That’s the purpose of the regulation. So let’s just go back and apply it. So the human rights framework, it’s kind of like an ethical framework, right? It says that the ethics should be prior to a lot of these discussions. And I already mentioned, there are questions about whether that’s realistic or not. But ideally, we should make sure ethics is at the forefront. What should we regulate? Well, on a human rights framework, we should look into everything, at least consider everything: the data, the algorithms, by sectors and by risk. Anything that could impact human rights should get some sort of human rights impact assessment. Who should do the regulating? Well, the human rights framework says that everybody has a responsibility. Human rights belong to everybody. Everybody has an obligation. So companies, researchers, governments, universities, the public, we all have to be proactive in engaging in this sort of regulation process. When should we regulate? Well, the human rights framework seems to point towards a lifecycle approach. So at every stage, we should do some sort of human rights impact assessment, making sure the technology doesn’t undermine human rights. I can answer in Q&A how that could be feasible. And where should we regulate? Well, the human rights framework is global. It’s all of the above. You know, we need to do it internationally, we need to do it nationally, and we need to do it locally. And finally, how should we regulate? Is it gonna be enforceable? I think that’s gonna be the biggest challenge to a human rights framework or really any framework. I don’t think this is a problem exclusive to the human rights framework, but it’s certainly a big problem, which is enforceability. I don’t think we have a very good track record. And so one of the challenges for all of us is, how can we get something together where we can actually make it binding and people will actually be willing to comply with it? So thank you very much.
Kyung Ryul Park:
Thank you very much. Thank you. So thank you also for mentioning about the KAIST and the NYU recent relationship for the research collaboration. So KAIST and NYU together with Matthew and Daniel Bierhoff and Claudia Ferreria, and also Professor Soyoung Kim and Dasom Lee here. So we’ve been leading this collaboration for the digital governance and AI policy research. So let’s move on to the Professor Dasom Lee from KAIST, Graduate School of Science and Technology Policy. Okay, sure, without further delay.
Dasom Lee:
Sure, thank you so much for the introduction. Could I have my slides on, please? Oh, clicker. Oh, there you go. Oh, thank you. So I know we don’t have a lot of time, so I figured I would spend the time to introduce the kind of work that I’m doing and introduce the lab that I have at KAIST in Korea. So I have a lab called the AI and Cyber-Physical Systems Policy Lab, the AI and CPS Lab. And we basically study how AI-based infrastructures, or infrastructures that will try to incorporate AI in the future, try to address and promote environmental sustainability. More specifically, we look at energy transition, and the technologies involved with that would be smart meters, smart grids, and renewable energy technologies. We also look at transportation, such as automated vehicles and unmanned aerial vehicles, which are drones, and data centers. Obviously, data centers are not specifically AI-focused, but they store the data that AI collects and ensure the reliability and validity of AI technologies. And I actually have been criticized for being way too broad and not having a focus and studying everything, which is fine. I can take constructive criticism, but I also think that it’s really important to look at everything in a very harmonized and holistic way, especially when we are trying to address sustainability. And when we look at infrastructures, and at energy and transportation in particular, they are really interconnected. So for example, right now we’re trying to use EVs as batteries, so that each household can have its own battery; by using EVs as batteries, we can use renewable energy more sustainably, and so on. So basically, I’m trying to build a harmonious infrastructural system in my mind somehow. And I’m getting there, hopefully, and I’ll hopefully get there in about 10 to 20 years. But right now it’s still kind of fuzzy. So the current projects, I don’t really want to go into too much detail, but there are these five ongoing projects right now. The first one is regulating data centers. We don’t have a lot of regulations on data centers, especially regarding climate change, globally, internationally, not just Japan or Korea, but everywhere in the world. The United States has the most data centers in the world, and the US is not really known for hardcore federal regulation, which means that it’s often left up to the state-level governments like California or Tennessee, where I was. And those kinds of governments often do not have the expertise on data centers to propose any type of regulation. So that’s one of the projects. Another is the media analysis of energy transition obstruction in Korea. And we have the student who’s working on this sitting there in the back, and he’s done a wonderful job so far. His name is Uibam, and we’re looking at how different types of media outlets show how the energy transition in Korea has not really gone from oil-based energy to renewable energy, but instead from oil-based energy to natural gas. So there’s a transition, and it’s slightly better, but it’s not great. And we are trying to see how that kind of obstruction happens in the media outlets. The third one is a quantitative analysis of the need for social science in automated vehicle (AV, the self-driving car) research. Lots of automated vehicle research so far has been focused on technological fixes.
We need more sensors, we need LIDAR, we need more radars, we need more cameras, and then we need more of these kinds of infrastructures around the road, and we need these wires under the road. And we show, using quantitative and statistical methods, that social science needs to be involved in order for us to understand AV technology much better. The fourth one is a bit of my small ambition and my little personal side project: I’m a sociologist by training, but I also study science and technology studies. So I try to merge MLP, the multi-level perspective by Frank Geels, which comes from science and technology studies, with the sociologist Pierre Bourdieu’s theory of forms of capital, and that’s the theoretical work that I’m doing. And I’m also doing some work on data donation to promote data privacy, and to promote more sustainable data management and collection. And I also want to quickly mention the project that Professor Park, Professor Kim, and I are doing together with NYU, the KAIST-NYU project, in which we are looking at how privacy is contextualized in different geographical regions based on their culture and based on their history. And when we look at privacy, we can’t just regulate it the way we have tried to do with lots of technologies, like cars: all cars need to have seatbelts, right? Everywhere in the world. But with privacy, it’s really difficult to have that kind of very concrete regulation that’s universally applied, because everyone has a different understanding of what privacy really is. So we’re planning to collect data. We just passed the Institutional Review Board, which is the ethical review you have to do before you do a survey. So we just passed the ethical review, and we’re planning to do a survey on how people perceive those privacy issues and how the public would interact with those potential privacy issues in the future. I think that’s it for me, and I really look forward to the Q&A session. Thank you.
Kyung Ryul Park:
Thank you very much, Professor Dasom Lee. So now, right next to me, the senior advisor from JICA, Mr. Atsushi Yamanaka-sensei. So he has extensive experiences in the field of development. So I think he’s giving us a development perspective on how we can address the challenges and opportunities of digital governance.
Atsushi Yamanaka:
Thank you. It’s on? Okay. Thank you so much, Professor Park, and then distinguished panelists here, and also the audience here. It’s always very, very hard to be the first session in the morning. So thank you so much for your dedication, for actually being part of this session. I’m essentially a practitioner. I’ve been doing ICT for development for more than a quarter century, actually, which is quite scary to think about. So let me talk from the developmental perspective. How can digital governance actually contribute, and also what are the threats of digital governance or digital technologies for development? But since I’m an optimist, let me start from opportunities. So essentially, new technologies like AI are opening up a lot of windows of opportunity in developing countries. A lot of developing countries are using AI and other cutting-edge technologies in order for them to innovate and then come up with different products and services, which is really affecting their lives and also contributing to socio-economic development. These accelerations are also giving opportunity for reverse innovations. I don’t necessarily like the word reverse innovation because it sounds very pretentious, but we believe, and I believe as well, that a lot of the next generations of innovations, whether it is services or products, will be coming out of the so-called developing countries or emerging economies. Because one of the things that they have plenty of is needs, right? They actually have a lot of socio-economic challenges or needs, and that is fueling the innovations. When you look at mobile money, for example, it came from Kenya. It never came out of a country like Japan, where we essentially still transact with paper money, right? Or coins. So that would not have happened without the needs of developing countries. Another interesting opportunity that we see is digital public goods and digital public infrastructures. That’s a very big topic this year, especially with the discussions at the G20, where India is pushing hard for digital public infrastructures to bridge the gap of digital inclusion. So we are going to see a lot of interesting opportunities, and hopefully this time we are not going to see the same kind of fate that we saw with the hype around open source and also the funding for, you know, ICT for development during the WSIS process. Another really encouraging sign from the WSIS process is multi-stakeholder involvement in digital governance and policy-making processes. I’m old enough to remember: prior to the WSIS process, the UN really did not have this multi-stakeholder approach. I mean, of course, during the first Social Development Summit in Rio, civil society got involved, but still it was not really this kind of multi-stakeholder approach. So IGF actually exemplifies this multi-stakeholder approach and how everyone can put their inputs into it. So let me go to the next point. I tend to speak so much, so please, Professor Park, if I speak too much, please cut it. Now, the challenges. Well, there are a lot of challenges still to be addressed.
Despite the fact that we have made huge progress in terms of digital inclusion, 2.7 billion people remain unconnected as of 2022. And still, there are a lot of issues of affordability, of digital services and digital devices, and also gender and economic inclusion as well. So in a way, the problem is essentially the same as 20 years ago, but it has become much more complex. In that respect, the last 2.7 billion people, the last 30%, are very difficult to reach. So that is going to be essentially a huge issue that we need to tackle. Another thing: three weeks ago, I was part of the SDG Digital Summit in New York. Still, if we cannot utilize digital technologies well, we will not be able to achieve the SDGs. So that is going to be another very big challenge. And then on the governance side, yes, there are so many different governance challenges. Japan is promoting cross-border data flows, it’s called DFFT, Data Free Flow with Trust. And that is also raising a lot of questions in terms of what would be the best examples, what would be the framework to do that. Personal privacy, as the professor from NYU was talking about: what are we going to do with personal privacy and also human rights issues? Cybersecurity is another issue, because we’ve seen cyber wars now. AI, internet, and data fragmentation is another thing. Who actually has the right to cut the internet? All these things. And also mis-, dis-, and malinformation. With the advent of AI technologies, how can you actually tell reality from fake? That’s going to be a huge issue. And also, developing countries especially: how can you incorporate their voices and their input? Because they’re still not fully involved in the rulemaking or the framework-making process. So we really need to engage with them and then give them the opportunities, because they actually represent probably more than the so-called G7 or even the G20. And also, lastly, data and information flows are still one-directional. This is causing very big frustrations among the developing countries, because they have big concerns about data colonization or data oligopolies, especially with the big techs. And also data sovereignty, I think this is a very big issue. What if they are going to put critical national information on the cloud, for example in the US: which laws and regulations are actually going to regulate this data? If it’s national sovereign data, shouldn’t the data owner have the rights over it? But currently, the applicable law is that of the United States. So these are among some of the challenges that we really need to address in order to really fully utilize the power of digital technologies for development. Thank you.
Kyung Ryul Park:
Thank you very much. Thank you. So, we have human rights perspective from Matthew, and also infrastructure perspective on digital governance, and also development perspective and development cooperation for the international relations from different kind of stakeholders. So now, we’re moving on to the Professor Rafik Hadfi from Kyoto University School of Informatics. So, he’s giving us the perspective of maybe the digital inclusion, so, okay, sure, Rafik.
Rafik Hadfi:
So thank you, Professor Park, for the invitation, and thank you everyone for being here at this early time of the day. So my name is Rafik Hadfi. I’m currently an Associate Professor in the Department of Social Informatics at Kyoto University. I mostly do work on AI, but in a way that tries to deploy it into society to solve a number of problems, going from the SDGs and ELSI to the most recent ethics-related issues. So the work we do is multidisciplinary by nature, and one of the topics that I’ve been working on most recently, perhaps the past two years, is digital inclusion. And I take digital inclusion here in a very global way, in the sense that inclusion sometimes means equity, sometimes means self-realization, autonomy, et cetera. So I’ll explain exactly what it means here. It’s one of the key elements of modern society, in the sense that it’s a way of allowing the most disadvantaged individuals of society to have access to ICT technology. And this is more like answering the question of the how: what kind of activities allow us to include these members of society? And the goal here is to allow more equity. Equity here is a more, let’s say, meaningful way to define inclusion for an individual in society, and the question of equity answers the what: what is the goal of an individual in society? This connection leads us to something more global, which is self-realization, and this includes all members of society, in the sense that digital equity will allow individuals to, let’s say, attain their autonomy and also fully live their lives. So the question now, or the topic that I’m working on, is how to enable this using the most recent technologies, not just ICT but AI in particular. So I’ll take one case study we’ve been conducting for the past, let’s say, two years. It was very difficult and challenging, because it addresses multiple problems in society. Digital inclusion here is equated with gender equality, with empowering women. It’s a study that was conducted in Afghanistan, and the focus here is women’s inclusion. The main problem here was first of all conducting the study itself in Afghanistan. This came at the same time as the Afghan government was collapsing, and apart from the logistics there, we had also the already established problems there, like gender inequality and insecurity, so it was very difficult to conduct, plus the ICT limitations in Afghanistan. Fast forward two years later, we managed to conduct the experiment; initially it was planned for 500 participants from Afghanistan, and then narrowed down to 240. So the main target here is basically how to build an AI that could be deployed in an online setting where mostly women have the ability to use smartphones to communicate and also deliberate. The AI was found to actually enhance a number of things. One of them is the diversity of the contributions that women were providing in these kinds of online debates. The second one, and most importantly, is the fact that we found out that this kind of conversation reduces inhibition. I mean, Middle Eastern societies in particular are known for limiting, let’s say, the reach of women in terms of freedom of expression, and particularly in raising issues or problems related to their livelihood. The third element found with this kind of technology is increased ideation.
We found that AI actually allows women to provide more ideas with regard to the local problems there, like, let’s say, employment or family-related issues. So this is one practical case of conversational AI, which is, in a way, building on what is now well known with the large language models, ChatGPT, et cetera. This is a more advanced, let’s say problem-solving, approach to conversational agents. So yeah, this is a particular practical example of using conversational AI for social good, and the deployment was done in Afghanistan. So I’m looking forward to your questions, and yeah, that’s all from me.
Kyung Ryul Park:
Thank you very much, Rafik. Actually, Rafik has been leading the Democracy and AI, the research group in IJCAI, International Joint Conference on AI, and we’ll also have conferences next year in Seoul. So I think there was also a very interesting discussion in Hong Kong in August. You know, today is a Japanese national holiday, it’s a sports day. I think we are doing a lot of brain exercise today, so I think it’s very ambitious and very interesting talks and sessions today. So we’re moving on to Professor Liming Zhu, School of Computer Science and Engineering from the University of New South Wales. So we’ll talk about democratising the AI from the perspective of CS. Thank you.
Liming Zhu:
All right, thanks very much for having me. Right, so I’m a professor from the University of New South Wales, but I’m also a research director at CSIRO. CSIRO is Australia’s national science agency. We have around 6,000 people working in the areas of agriculture, energy, mining, and of course AI. Data61 is its AI, digital, and data business unit. I also work quite internationally with OECD AI and some of the ISO standards on AI trustworthiness. And Australia also has a National AI Centre that was established 18 months ago, which is hosted by Data61. Its main remit is not research, but AI adoption, especially responsible AI adoption. So very briefly on Australia’s journey. Australia back in 2019 developed Australia’s AI ethics principles, which were developed by Data61 at the time, commissioned by the Department of Industry and Science, but with industry consultation. The principles, if you look at them, are not really that surprising; a lot of international organizations and individual countries have developed those. But what I want to draw your attention to is the first two or three: it’s really the human-centred values, especially the plural ‘values’. We recognise the trade-offs and the different cultures and inclusiveness in human values; environmental well-being; and Australia is a ‘fair go’ country, so fairness is high up in there. But then we have the traditional quality attributes, I would say, for any system, but AI poses a very unique challenge to them, such as privacy, security, reliability, and safety. And then there are additional interesting quality attributes like transparency, explainability, contestability, and accountability, which are unique to the AI context. Since then, since 2019, that’s been four years, Australia has been focusing on operationalizing these principles. We have done a lot of industry consultation and case studies to get industry feedback, and importantly, the picture shows our minister for industry and science, Minister Husic, who has launched Australia’s Responsible AI Network, in which Australian companies and organizations commit to responsible AI through governance mechanisms. They have to commit to at least three AI governance mechanisms within their organization to be a member, be featured, and share their knowledge. And there’s a book coming up called Responsible AI: Best Practices for Creating Trustworthy AI Systems, based on the work we have done, and there are three key Australian industry case studies in that book. So what is our approach? I think the key thing we realized is that a lot of best practices need context: you need to know when to apply them, there are both pros and cons to these best practices, and a best practice needs to be connected. We also see people at the governance level, let’s have a responsible AI ethics committee, let’s have some auditing mechanism, doing great things at a governance level but not connected to the practitioners. The practitioners are your ML engineers, AI engineers, software engineers, AI engineering developers. How do we connect all these best practices so people collaborate?
So we have developed a Responsible AI Pattern Catalogue, which you can easily find by searching. It connects governance patterns, which are probably what most people in this room are interested in; process patterns, covering the software development and AI engineering process; and product patterns, the metrics and measurements for evaluating a particular product. The key thing is that they are connected: you can navigate between them to get whole-of-system assurance. At this moment, a lot of AI governance is not about the AI model itself. Even in ChatGPT there are many components, AI and non-AI, outside the AI model. The prompt you type into ChatGPT is not the exact prompt that goes to the model; additional text is attached to it, something as simple as "please always answer ethically and positively", and that kind of instruction travels with every single prompt you submit. That is a system-level guardrail. This is a very simplistic example, but many organisations that leverage large language models can add their own context-specific guardrails to capture the benefits while managing the risks. Such patterns and mitigations need to connect with the responsible AI risks that every company can record in its typical risk registry systems, and we have developed question banks you can use to interrogate your organisation and make responsible AI risk assessment part of that process. You can find more information in some of the papers listed here or by searching online. This work was recently featured in Communications of the ACM as one of the most impactful projects in the East Asia and Oceania region. We're very happy to share more of our experience with the audience here. Thank you.
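To make the system-level guardrail concrete, here is a minimal sketch in Python of the prompt-augmentation mechanism Zhu describes, where instruction text travels with every user prompt. The guardrail wording and the model_complete function are illustrative assumptions; the latter stands in for whatever LLM API an organisation actually uses.

# A minimal prompt-augmentation guardrail: every user prompt is wrapped
# with organisation-specific instruction text before reaching the model.

GUARDRAIL_PREFIX = (
    "Please always answer ethically and positively. "
    "Decline requests that conflict with our AI ethics policy.\n\n"
)

def guarded_prompt(user_prompt: str) -> str:
    # The raw user prompt never reaches the model on its own.
    return GUARDRAIL_PREFIX + user_prompt

def answer(user_prompt: str, model_complete) -> str:
    # model_complete is any callable mapping a prompt string to a completion.
    return model_complete(guarded_prompt(user_prompt))

# Example usage: answer("Summarise this contract.", model_complete=my_llm_call)

The point of keeping the guardrail outside the model is exactly the one made above: it is a property of the surrounding system, so each organisation can swap in its own context-specific rules without retraining anything.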
Kyung Ryul Park:
Thank you very much. I think today’s speaker is actually providing a lot of insights on how we can actually collaborate together from different stakeholders in the global shaping process of AI frameworks. We have also the Australian cases and Korean cases and I would say a little bit market-driven U.S. approach. And also we also need engagement from the developing countries. So I think that’s why it’s very timely. Today’s session is very timely. So I think we have Takayuki Ito-sensei online. So I’d like to introduce Professor Takayuki Ito from Kyoto University. Hello. Sure. Okay. Okay. Sure. The floor is yours. Do you hear me right? Yes. Yes. Okay. Sure.
Takayuki Ito:
Okay. I’ll share my slide. Okay. All right. So thank you. Thank you for introducing me. So I’m Takayuki Ito from Kyoto University. I’ll talk about one of my current projects towards hyper-democracy. It’s an empowered crowd-scale discussion support system. So we are working to develop the hyper-democracy platform where the multiple AI try to help group decision and consensus building support. So basically in the current social network, there are many social problems like fake news and digital gerrymandering and filter bubble and echo chamber. a very important problem. So here, by using artificial intelligence like the chatGBT and LLM, we are trying to solve that. So actually, we have been working for this kind of projects for 10 years. From 2010, we started to create a system called Co-Agri system, where the human facilitator try to facilitate and support consensus among the online participants. And then from 2015, we created the Agri system, where one AI agent supported the group decision among online participants. So here, we use AI to support the human collaboration. And then now we are working for hyper-democracy platforms where the many AI agents try to support crowd-scale discussion. So this is an overview of the Agri system. Here, as you can see, the people discuss by using text chat. And then these texts are recognized by our AI and then structuralized in the database. So basically, by using the structuralized discussion, AI facilitator try to interact with the online participant. Here, actually, we started this project in 2015. We didn’t have any chat GPT, but by using our classic AI technology, we realized that. So here, by using LLM or GPT, we are now working for more sophisticated AI, actually. So, this is one case study about DIAGRI. We used the DIAGRI system in Afghanistan, in particular in the 2021 August, there American troops left from Kabul city in Afghanistan. We opened in public the DIAGRI and then we gathered many opinions and the voices from the civilians in Kabul city. So as you can see, our AI can analyze the type, the opinion types and the characteristics. So from August 15, when the American troop left, the number of issues, problems increased drastically. That it’s shown in red box in the central graph. So we are working with the United Nation Habitat and many other NPOs about these kinds of things. So now we are extending the AI facilitator to many AI agents. And then this is the current multi-agent architecture. Now we are testing this agent architecture. So this is the conclusion. So yeah, we are now developing next-generation AI-based group decision support system. And it’s called a hyper-democratic platform. So yeah, thank you very much.
Kyung Ryul Park:
Thank you very much. Professor Takayuki Ito has been pioneering interdisciplinary research at the intersection of AI and democracy, so his work provides a lot of insight into the importance of a multidisciplinary approach. Especially in this research field, we have to work together with policy scholars, development scholars and engineering scholars. Right, so we are on time, which is a relief. We have our last speaker, Seunghyun Kim from KAIST, who will share some insights from the development field. Are we ready? Okay, sure.
Seung Hyun Kim:
Hello, my name is Seunghyun Kim. I'm currently a PhD student studying under Professor Kyung Ryul Park, but previously, before the peaceful life of a graduate student, I was a program officer at an institute called the Korea Development Institute, responsible for producing policy recommendations for developing countries. One of the stark realizations I had was that in all my time as a program officer at KDI, I had only ever travelled to developing countries; my passport records never go above an annual GDP per capita of $10,000. So this is the most advanced country I've been to overseas in many years. When I came to KAIST, I was exposed to all these discussions about AI, ChatGPT, bioethics, et cetera: extremely advanced, cutting-edge technologies that are going to change everything. And my thoughts overlapped: what happens when these cutting-edge technologies meet what I saw in developing countries, which have a very different societal and economic context? So I'd like to share three snapshots that provide some peeks, not insights, I wouldn't say, just peeks, into how the insightful discussions we've had today may play out in the developing world. The first is a photograph taken on November 16, 2016 in Medellín, Colombia. If you've seen Narcos on Netflix, you're probably familiar with the city: it was the narco capital of the world. A little below, you'll see a sign reading Bienvenidos a Calle Uno, welcome to area one; area one is the zip code of the poorest neighbourhood in Colombian cities. Before the introduction of cable cars, a revolutionary transport mechanism, this neighbourhood was isolated from the rest of Medellín. It was a breeding ground for cartels, drug dealers and smugglers; the police would simply refuse to go in. Thanks to the cable cars, people were able to get jobs, come out of the neighbourhood, and gain more equal access to education, jobs and capital, to banking and borrowing. But what wasn't publicized was that the drug cartels were using the new cable cars too: they would hide cocaine inside the replaceable parts, turning the system into an automated distribution mechanism, and this went unpublicized for a very long time. If cable cars did this for the drug cartels, what are they going to do when they get their hands on ChatGPT and AI? It's something to think about. The cartels were also able to thrive in these regions because a dollar could bribe anyone in the entire neighbourhood; for $10, you could basically make people do anything. What can they do with a brand new laptop or an iPad connected to generative AI, and what can the police, or the government, do against such matters? So the first snapshot is about unequal opportunities and how technology exposes existing social and economic problems. The second snapshot is one of the lovely pictures I took in Addis Ababa, Ethiopia, in 2018, a scene I will probably remember for a very long time: an official explaining to us, the Korean researchers, the current Ethiopian public finance system, the electronic digital finance system. I will not forget this line.
"The tax system runs on Microsoft. The tax expenditure and budget planning system runs on Oracle. And we haven't had a digital system for auditing yet, but we will soon, as soon as we get the funding." So for one government you have three different information systems running simultaneously that do not, and cannot, communicate with each other: fragmentation on an enormous level. And why is there fragmentation in the core governance structure for distributing financial resources at the Ministry of Finance? Because the systems are financed by different institutions, the World Bank, UNDP, et cetera, each connected to different service providers who in turn supply small parts of the government, and no single agency can provide one comprehensive solution for the whole government. What you get is extreme fragmentation of ICT and information systems in government. The third snapshot: I spent two weeks in Equatorial Guinea; I don't know if anybody here has been there. It was the unhealthiest time of my life; I was basically half unconscious because of the malaria medication I had to take every two days. This is President Obiang Nguema Mbasogo speaking at the Equatorial Guinea National Economic Conference. All the ministries were there, all the ministers, all the high-level figures of the government. But the entire conference was run and executed by Chinese staff from mainland China; apart from the participants, every staff member was Chinese. And after about a week there, it wasn't long before I found that the entire ICT infrastructure was basically dependent on one country and one company: China, and Huawei. You couldn't get an internet connection or Wi-Fi, you couldn't send an email, without some sort of support from Huawei. So this snapshot offers a very significant view of technology sovereignty. In sum, you have: one, unequal opportunities that may be exacerbated into a whole variety of societal problems; two, fragmented ICT structures that create problems for governments; and three, technology sovereignty that ties developing countries to companies, other governments and international agencies, creating a very complicated picture for the developing world. For the developing world to become more advanced and actually transform into a developed world, I think these three peeks are something we should think about. Thank you.
Kyung Ryul Park:
Thank you. Thanks a lot. We have about five minutes left, so I'd like to open up the floor. We have more than 30 attendees in this session; thank you very much for your participation. If you have any comments or questions, please raise your hand. Professor So Young Kim from KAIST, could you start?
Audience:
It's my first time doing this, but okay. If I had known I would be standing here, I wouldn't have asked the question, but the question, I guess, is common to all of us. Just as Seunghyun pointed out the problem of fragmentation at the single-country level, we are witnessing huge fragmentation of AI governance at the global level. We have something going on at the World Bank, the UN, various UN agencies, and even within single countries. And we are approaching this problem from many different angles, right? A development perspective, sociological, ethical, philosophical, computer science, whatever. So the question is: how can we actually reduce this degree of fragmentation when we talk about AI governance and other mechanisms of regulation? As you might know, some circles around the world are talking about the need to create something like an IAEA for AI. But there are many limitations, of course, because the two technologies are so different in nature: nuclear technology is highly centralized and was born out of a dire, emergent situation during World War II, while AI appears to be a democratic technology, because anybody can touch upon some part of it. So my question is: how would any of you address whether we need to reduce this fragmentation, and how we might actually go about it? You see what I'm saying, right? Okay.
Kyung Ryul Park:
Thank you very much. So if you don’t mind, actually, why don’t we just have questions altogether and then we’ll just address the questions, please. Could you briefly introduce yourself?
Audience:
Yes, no problem. Hi, thank you. I'm Sophie Kellehaug, a diplomat from the Danish Ministry of Foreign Affairs, posted in Geneva. Thank you so much for all your really interesting perspectives. We, of course, also see a need for global governance of AI, and like the previous audience member, we see this fragmentation. It was really good to hear, especially in the first presentation, the focus on human rights, which we also find very important, and on general multi-stakeholder engagement; this session is a good example of engagement with academia. We see a need for a genuinely risk-based approach, and thank you for referencing the EU AI Act; as an EU member state, we very much support that approach. I wanted to ask basically the same question: how do we approach this globally, given the fragmentation? But let me also come back to the first speaker's, Professor Liao's, point on follow-up mechanisms and monitoring: how do we afterwards ensure that regulation is implemented, and that we have oversight and accountability? Thank you.
Kyung Ryul Park:
Thank you very much. And gentleman here, over here.
Audience:
Hello, good day. My name is Peter King, and I'm from Liberia. I would like to ask a question from the African perspective. I realize that most of you are from the Asian region or from states on other continents, but the fact is that we in Africa are also part of global society. Looking at the impact of AI: the technology is emerging, but its use is geographically limited, and in the conversation here I saw only one sample case, from Equatorial Guinea, whereas Africa as a whole faces a great many issues. So what advice can you give our policymakers in Africa on how research can be extended, and on what they can do so that, as AI seamlessly becomes a reality, Africa is prepared for it? We do lack certain expertise. What is your advice for some of us youth, and for our larger audience in Africa? That's my question.
Kyung Ryul Park:
Thank you very much. Oh, okay, another questions or comments, maybe?
Audience:
Hi, my name is Yasmin, from Indonesia, and I work at the UN Institute for Disarmament Research. I have a few questions, so please bear with me. First, for the first speaker, on a human rights-based AI framework, and to build on the point a fellow audience member made about enforceability: could you elaborate on the difference between human rights as a moral framework on the one hand and as a legal framework, with the established international human rights law mechanisms, on the other, and on what we can learn from the case law established over the years as international human rights law developed? Second, on digital public goods and digital public infrastructure: I previously worked at Chatham House, a London-based think tank, where we did some research on this, and one question we kept coming back to is that while public stewardship is important, there is also the question of limited resources and the need for scalability. How do you deal with that? And third, on conversational AI to advance women's inclusion: it has a lot of potential, but how do you deal with the availability of training data, in terms of data collection and data hygiene, so that it is available in an equitable way? Not just free from biases, but also taking into account that some communities may need to be represented in these models, while others may want to be forgotten because of privacy and oppression risks. How do you deal with that? Thank you so much.
Kyung Ryul Park:
Thank you very much. These are wonderful comments and questions. If I may summarize them with a few keywords: fragmentation in global AI governance and how we can collaborate to address it; what African countries in particular can do to prepare an AI strategy; the potentially conflicting rationales of a human rights perspective and of digital public goods, and how we can enhance scalability; and how we promote digital inclusion in data collection and analysis. So, Matthew, are you there?
Matthew Liao:
Yes, I am.
Kyung Ryul Park:
Sure, I’ll give you two minutes. Is it okay for you?
Matthew Liao:
Yes, sure, sure.
Kyung Ryul Park:
You’re always smart, right?
Matthew Liao:
Yeah, those are excellent questions. The fragmentation question is such a difficult problem. Very quickly: I think this is something we all need to work on together. It's multi-stakeholder; we need everybody involved in the conversation, the public, the government, the researchers, and so on. Now, that sounds kind of vague, so here's something a bit more concrete. Professor So Young Kim mentioned nuclear energy, and there are two things I want to say. First, I think the biomedical model is a pretty interesting one to think about. In drug discovery there is a lot of innovation and a lot of research in the pharmaceutical arena, and at the same time there is a lot of regulation protecting people. There is a great deal of human-subject research involving non-trivial risks, and yet we manage to do it in a fairly responsible way, because the international community has coalesced around norms that say: we need to make sure this process is safe. I feel we can do something similar with AI. I also like the EU's risk-based approach: some applications are low risk, and if something is being used for games, we don't need to worry that much about it, but other things, like medical devices, deserve more attention, especially where humans are directly involved. And I'll say one other thing: I think there's a lot of regulatory capture right now, with many people saying this technology is too big to be regulated. It's useful to look at the history of regulation. Take airplanes: airplanes used to fall out of the sky every single day, and at some point people said we need to come together and regulate the airline industry, everything from the engines onward. Now the airline industry is among the safest; it's remarkably safe to fly these days. I feel we can do something similar here, and maybe these are indirect models we can appeal to in order to address things like fragmentation.
Kyung Ryul Park:
Thank you, Matthew. Would anyone else like to address these questions or comments? Okay, sure.
Atsushi Yamanaka:
Well, thank you so much; these are very interesting questions, and I have a few comments, starting with fragmentation. On fragmentation: maybe we need to create the AIGF, right? Instead of the IGF, an AI Governance Forum. Even with internet governance, after 20 years of discussion we still have not come up with a suitable model. I think we need to come up with workable, best-example models instead of complete global regulations, which would be very difficult and not very palatable for many stakeholders. So rather than concrete global regulation of AI, we should identify what really works; I think that's the way to go. In terms of Africa: I actually work mostly in Africa. For the last 12 or 13 years I have worked mostly there, primarily in Rwanda. AI is very much being utilized in the African context as well; a lot of startups are using AI and other data-driven solutions, so many solutions are coming out. However, you're right that human resources are limited. There are different responses to that. There are now advanced institutions established in Africa, like Carnegie Mellon University Africa in Rwanda, and AIMS, the African Institute for Mathematical Sciences, also in Rwanda; those are ways to advance this kind of initiative. Developed countries like Korea and Japan also help: for example, I have quite a few students who study AI and go on to do PhDs here in Japan, and one of them built generative AI models for African languages. He was a research fellow at RIKEN, one of the top research institutes, though unfortunately he has moved to Princeton to continue his research. That kind of human resource capacity-building is what JICA, for example, is known for, and KOICA, the Korean agency, and other countries are doing the same, so you can take advantage of those frameworks. About DPI and DPGs: yes, scalability has always been an issue for open source initiatives and for ICT for development. But we're now seeing very interesting DPI, like the Indian MOSIP model, that is genuinely scalable; it's serving something like a billion people. So we are seeing scalability beyond the proof-of-concept stage, which has long been the sticking point in ICT for development. And lastly, about women's inclusion: I think digital technology offers quite unique opportunities through pseudonymization, masking gender while giving people opportunities for inclusion, much more so than in-person environments. I just wanted to point that out.
Kyung Ryul Park:
Thank you very much. Before we close, I know we're running out of time, but I'd like to give a couple of minutes to Rafik and Professor Zhu. Rafik, please.
Rafik Hadfi:
A few words regarding the empowerment of local communities: inclusion in terms of data collection, training, the avoidance of biases, et cetera. One approach we found useful is not to simply deploy a solution as a one-off social experiment, but to take a holistic approach in which we work with local communities, villages, municipalities, universities, schools, and train them on how to use the whole AI system. We have done this for a few studies, and at the same time it allows us to build datasets to train these models for those communities, because one of the things we encountered is that when you train these AIs in English, you are obviously biased towards one particular context. We've done this in Indonesia, in the province of West Nusa Tenggara, where, with the University of Mataram, we built the datasets in Indonesian. Currently we're focusing on the Afghan case, because, as we all know, there's a lot to do in Afghanistan, and the case study I covered focuses mostly on equity and women's empowerment. The data collection and the AI models are tailored specifically to this context, although the approach can be generalized: this year it's Afghanistan, and maybe next year we'll try Iran. That's all for me.
Liming Zhu:
Thank you. I will be brief. On fragmentation, I'm going to be slightly controversial: from a science point of view, what we see are different stakeholder groups. We have great institutions, the UN, the OECD, the WEF, paying attention to AI and AI governance, and I think that's valid, because different stakeholder groups have slightly different concerns, and robust discussion between the groups, and making trade-offs among those concerns, is going to be important. At that level I don't see fragmentation so much as growing interest from different stakeholder groups. When it comes to regulation, both horizontal regulation, regulating AI as a whole, with its own pros and cons, and vertical regulation of particular products are important. Managing the interaction between them and removing overlaps matters, but we need both rather than one or the other. The one thing that should not be fragmented is science. Science is international and not value-based, and the scientific evidence and advice feeding into these policy and stakeholder groups really require broader collaboration. I see a lot of scientists and research organizations here, and I look forward to collaborating with them. Thank you.
Kyung Ryul Park:
Thank you. That is the essence of a global platform for collaboration. Thank you very much for your participation today; I'd like to continue our discussion, so please keep in touch. Before we close, I'd like to particularly thank So Young Kim and Junho Kwon from KAIST, and all the speakers for their time. If you leave your contact details with Junho after this session, we will keep in touch with you. Thank you very much.
Speakers
Atsushi Yamanaka
Speech speed
178 words per minute
Speech length
1841 words
Speech time
622 secs
Arguments
AI and other digital technologies can present ample opportunities for development and innovation in developing nations
Supporting facts:
- Many developing countries are using AI and other modern technologies to provide innovative products and services
- The needs of developing countries can fuel innovations
- Examples include mobile money, which originated in Kenya
Topics: Artificial Intelligence, Digital Technologies, Development, Innovation
The issue of digital inclusion is still prominent with 2.7 billion people unconnected
Supporting facts:
- As of 2022, 2.7 billion people still remain unconnected globally
- The problem has become complex and the last 30% of this population seems difficult to reach
Topics: Digital Inclusion, Connectivity
The issue of data colonialism and data sovereignty is significant, especially for developing countries
Supporting facts:
- Developing countries face frustrations due to the perceived one-directional flow of data
- Worries about data being controlled by big tech companies
- Concerns on the legal jurisdiction over critical national information stored in foreign servers
Topics: Data Colonialism, Data Sovereignty, Developing Countries
There’s a need to create AI Governance Forum instead of a global regulation on AI
Supporting facts:
- After 20 years of discussing internet governance, no suitable model has been created, making it very difficult to have a global regulation.
- Creating an AI Governance forum and bringing up examples that work is more practical.
- The process will involve different stakeholders making global regulation less palatable.
Topics: AI, IGF, AI Governance Forum, Global Regulations, Internet Governance, Stakeholders
AI is being utilized a lot in Africa, despite a limited workforce
Supporting facts:
- Many startups in Africa are using AI and other database solutions.
- Students from Africa are studying AI in countries like Japan.
Topics: AI, Human Resources, Africa, startups, Database Solutions
Digital technology provides a unique opportunity for women’s inclusion
Supporting facts:
- Digital technology allows for pseudonymizations which can mask gender while giving opportunities for inclusion.
- It provides more opportunities for inclusion than in-person environments.
Topics: Digital Technology, Women’s Inclusion, Gender
Report
The use of artificial intelligence (AI) and digital technologies in developing nations presents ample opportunities for development and innovation. These technologies can provide innovative products and services that meet the needs of developing countries. For instance, mobile money, which originated in Kenya, exemplifies how AI and modern technologies are being utilized to create innovative solutions.
Moreover, Information and Communication Technology (ICT) plays a vital role in achieving the Sustainable Development Goals (SDGs). ICT has the potential to drive socio-economic development and significantly contribute to the chances of achieving these goals. It can enhance connectivity, access to information, and facilitate the adoption of digital solutions across various sectors.
However, despite the progress made, the issue of digital inclusion remains prominent. As of 2022, approximately 2.7 billion people globally are still unconnected to the digital world. Bridging this digital divide is crucial to ensure equal access to opportunities and resources. Additionally, there are challenges related to digital governance that need to be addressed.
Growing concerns about data privacy, cybersecurity, AI, internet and data fragmentation, and misinformation underscore the need for effective governance. The increasing prevalence of cyber warfare and the difficulty in distinguishing reality from fake due to advanced AI technologies are particularly worrisome.
Developing countries also face frustrations due to the perceived one-directional flow of data, concerns over big tech companies controlling data, and worries about legal jurisdiction over critical national information stored in foreign servers. To tackle these issues, it is suggested that an AI Governance Forum be created instead of implementing a global regulation for AI.
After 20 years of discussions on internet governance, no suitable model has been developed, making the establishment of a global regulation challenging. Creating an AI Governance Forum, and sharing successful initiatives, offers a more practical approach to governing AI. This process would require the active participation of different stakeholders, making the establishment of global regulations less appealing.
AI is gaining traction in Africa, despite a limited workforce. Many startups in Africa are leveraging AI and other database solutions to drive innovation. However, to further enhance AI adoption, there is a need to establish advanced institutions in Africa that can provide training for more AI specialists.
Examples of such advanced institutions include Carnegie Mellon University in Africa and the African Institute of Mathematical Science in Rwanda. Additionally, African students studying AI in countries like Japan and Korea are further augmenting expertise in this field. Digital technology also presents a unique opportunity for women’s inclusion.
It offers pseudonymization features that can help mask gender while providing opportunities for inclusion. In fact, digital technology provides more avenues for women’s inclusion compared to traditional in-person environments, thereby contributing to the achievement of gender equality. It is worth noting that open source initiatives, despite their advantages, face scalability issues.
Scalability has always been a challenge for open source initiatives and ICT for development. However, the Indian MOSIP model has successfully demonstrated its scalability, serving around 1 billion people. This highlights the importance of finding innovative solutions to overcome scalability barriers.
In conclusion, the use of AI and digital technologies in developing nations offers significant opportunities for development and innovation. However, challenges such as digital inclusion, data privacy, cybersecurity, and data sovereignty must be addressed. Establishing an AI Governance Forum and advanced institutions for training AI specialists can contribute to harnessing these technologies more effectively.
Additionally, digital technology can create unique opportunities for women’s inclusion. Finding innovative solutions for open source scalability is also crucial for the successful adoption of ICT for development.
Audience
Speech speed
191 words per minute
Speech length
1057 words
Speech time
331 secs
Arguments
The problem of fragmentation in AI governance at both single country and global level
Supporting facts:
- There are various agencies globally dealing with AI governance
- Approaching the problem from various angles like development perspective, sociological, ethical, philosophical, CS, etc.
Topics: AI governance, Fragmentation, Global Regulation
AI is different from other technologies and appears to be a democratic technology
Supporting facts:
- AI can be touched upon by anyone which makes it different from centralized technologies like nuclear technology
Topics: AI as a Democratic Technology, AI Characteristics
Need for global governance of AI due to fragmentation
Supporting facts:
- Fragmentation in global AI governance noted by previous audience member
- Highlighted need for multi-stakeholder engagement
Topics: AI governance, Global Regulation, EU AI Act
Importance of follow-up mechanisms and monitoring in AI regulation
Supporting facts:
- Questioned on how to ensure regulation implementation
- Mentioned the need for oversight and accountability
Topics: AI Regulation, Accountability, Oversight
AI’s impact and geographic limitations in Africa
Supporting facts:
- AI is emerging but its use is geographically limited, especially in Africa
- The conversation in the conference only had a sample case from Equatorial Guinea, as per the audience member
Topics: AI, Africa, geographic limitations
Questions on enforceability and applicability of human rights as both a moral and legal framework in AI
Supporting facts:
- Question raised about the difference between human rights as a moral framework and as a legal framework
- Query about what can be learned from established case law in International Human Rights Law
Topics: Human Rights, AI Framework, International Human Rights Law, Enforceability
Concerns about managing limited resources while maintaining public stewardship in digital public goods and infrastructure
Supporting facts:
- Point made on the challenge of balancing public stewardship with scalability due to limited resources
Topics: Digital Public Good, Digital Public Infrastructure, Scalability, Resource Management
Issues about the equitable availability, data collection, and data hygiene in conversational AI for women’s inclusion
Supporting facts:
- Question raised about how to ensure equitable availability of training data in conversational AI
- Issues of representing certain communities without infringing privacy rights or causing oppression risks
Topics: Conversational AI, Women’s Inclusion, Data Collection, Data Hygiene
Report
The analysis of the speakers’ arguments and supporting facts revealed several key points about AI governance and its impact on various aspects of society. Firstly, there is a problem of fragmentation in AI governance, both at the national and global levels.
This fragmentation hinders the development of unified regulations and guidelines for AI technologies. Various agencies globally are dealing with AI governance, but they approach the problem from different perspectives, such as development, sociological, ethical, philosophical, and computer science. The need to reduce this fragmentation is recognized in order to achieve more effective and cohesive AI governance.
On the topic of AI as a democratic technology, it was highlighted that AI can be accessed and interacted with by anyone, which sets it apart from centralized technologies like nuclear technology. This accessibility creates opportunities for a wider range of individuals and communities to engage with AI and benefit from its applications.
However, when considering the global governance of AI, the problem of fragmentation becomes even more apparent. The audience members noted the existence of fragmentation in global AI governance and highlighted the need for multi-stakeholder engagement in order to address this issue effectively.
There was mention of creating an International Atomic Energy Agency (IAEA)-like organization for AI governance, which could help in regulating and coordinating AI development across countries. Another important aspect discussed was the need for a risk-based approach in AI governance.
One audience member, a diplomat from the Danish Ministry of Foreign Affairs, expressed support for the EU AI Act’s risk-based approach. This approach focuses on identifying and mitigating potential risks associated with AI technologies. It was emphasized that a risk-based approach could help strike a balance between fostering innovation and ensuring accountability in AI development.
The discussions also touched upon the importance of follow-up mechanisms, oversight, and accountability in AI regulation. Questions were raised about how to ensure the effective implementation of AI regulations and the need for monitoring the compliance of AI technologies with these regulations.
This highlights the importance of establishing robust oversight mechanisms and accountability frameworks to ensure that AI technologies are developed and deployed responsibly. In terms of the impact of AI on African countries, it was noted that while AI is emerging as a transformative technology globally, its use is geographically limited, particularly in Africa.
One audience member pointed out that the conference discussions only had a sample case from Equatorial Guinea, highlighting the lack of representation and implementation of AI technologies in African countries. It was also mentioned that Africa lacks certain expertise in AI and requires expert guidance and support to prepare for the realities of AI’s development and deployment in the region.
Furthermore, questions arose about the enforceability and applicability of human rights in the context of AI. The difference between human rights as a moral framework and as a legal framework was discussed, along with the need to learn from established case law in International Human Rights Law.
This raises important considerations about how human rights principles can be effectively integrated into AI governance and how to ensure their enforcement in AI technologies. Additionally, concerns were voiced about managing limited resources while maintaining public stewardship in digital public goods and infrastructure.
The challenge of balancing public stewardship with scalability due to resource limitations was highlighted. This poses a significant challenge in ensuring the accessibility and availability of digital public goods while managing the constraints of resources. Finally, the importance of inclusive data collection and hygiene in conversational AI for women’s inclusion was discussed.
Questions were raised about how to ensure equitable availability of training data in conversational AI and how to represent certain communities without infringing privacy rights or causing risks of oppression. This emphasizes the need to address biases in data collection and ensure that AI technologies are developed in a way that promotes inclusivity and respect for privacy and human rights.
In conclusion, the analysis of the speakers’ arguments and evidence highlights the challenges and opportunities in AI governance. The problem of fragmentation at both the national and global levels calls for the need to reduce it and promote global governance.
Additionally, the accessibility of AI as a democratic technology creates opportunities for wider engagement. However, there are limitations in AI adoption in African countries, emphasizing the need for extended research and expert guidance. The enforceability and applicability of human rights in AI, managing limited resources in digital public goods, and ensuring inclusive data collection in conversational AI were also discussed.
These findings emphasize the importance of addressing these issues to shape responsible and inclusive AI governance.
Dasom Lee
Speech speed
170 words per minute
Speech length
1087 words
Speech time
384 secs
Arguments
Dasom Lee heads the AI and Cyber-Physical Systems Policy Lab at KAIST, which studies AI and infrastructure in the context of environmental sustainability.
Supporting facts:
- The lab particularly focuses on energy transition, transportation, and data centers.
- Parallel studies are conducted to maintain a holistic view of sustainability.
- The lab has five ongoing projects in line with its research objectives.
Topics: KAIST, AI and Cyber-Physical Systems Policy Lab, Artificial Intelligence, Environmental Sustainability
The lack of regulations on data centers internationally, particularly in relation to climate change, is a concern.
Supporting facts:
- The US has the most data centers and lacks strong federal regulation.
- State-level governments often lack the expertise to propose relevant regulations.
Topics: Regulations, Data Centers, Climate Change
Understanding privacy is contextualized and varies based on culture and history.
Supporting facts:
- Privacy cannot be universally regulated as understandings differ across geographical regions.
- The KAIST-NYU project aims to conduct a survey on privacy perceptions and potential future interactions.
Topics: Privacy, Contextualization, Culture and History
Report
Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, where they focus on the relationship between AI, infrastructure, and environmental sustainability. The lab’s research covers energy transition, transportation, and data centers, addressing key challenges in these areas.
Currently, they have five projects aligned with their research objectives. One significant concern is the lack of international regulations on data centers, particularly in relation to climate change. The United States, for instance, lacks strong federal regulations despite having the most data centers.
State governments also lack the expertise to propose relevant regulations. This highlights the urgent need for global standards to address the environmental impact of data centers. In the field of automated vehicle research, there is a noticeable imbalance in focus.
The emphasis is primarily on technological improvements, neglecting the importance of social sciences in understanding the broader implications of this technology. The lab at KAIST recognizes this gap and is using quantitative and statistical methods to demonstrate the necessity of involving social science perspectives in automated vehicle research.
This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology. Privacy regulations present a unique challenge due to their contextual nature. The understanding and perception of privacy vary across geographical regions, making universal regulation unrealistic.
To address this challenge, the KAIST-NYU project plans to conduct a survey to explore privacy perceptions and potential future interactions based on culture and history. This approach will help policymakers develop tailored and effective privacy regulations that respect different cultural perspectives.
To summarise, Dasom Lee and the AI and Cyber-Physical Systems Policy Lab at KAIST are making valuable contributions to AI, infrastructure, and environmental sustainability. Their focus on energy transition, transportation, and data centers, along with ongoing projects, demonstrates their commitment to finding practical solutions.
The need for data center regulations, involvement of social sciences in automated vehicle research, and contextualization of privacy regulations are critical factors in the development of sustainable and ethical technologies.
Kyung Ryul Park
Speech speed
149 words per minute
Speech length
1244 words
Speech time
500 secs
Arguments
Kyung Ryul Park is moderating a session on AI and digital governance
Supporting facts:
- Kyung Ryul Park is the moderator for the session
- The session has seven talks focusing on AI and digital governance
Topics: AI, digital governance
Matthew Liao, a professor at NYU, will be the first speaker in the session
Supporting facts:
- Matthew Liao is introduced as the first speaker in the session
Topics: University, Education
Report
Kyung Ryul Park has assumed the role of moderator for a session focused on AI and digital governance, which includes seven talks specifically dedicated to exploring this topic. The session is highly relevant to SDG 9 (Industry, Innovation and Infrastructure) as it delves into the intersection of technology, innovation, and the development of sustainable infrastructure.
Park’s involvement as a moderator reflects his belief in the significance of sharing knowledge and information about AI and digital governance. This aligns with SDG 17 (Partnerships for the goals), emphasizing the need for collaborative efforts to achieve sustainable development. As a moderator, Park aims to provide a comprehensive overview of the ongoing research and policy landscape in the field of AI and digital governance, demonstrating his commitment to facilitating knowledge exchange and promoting effective governance in these areas.
The inclusion of Matthew Liao, a professor at NYU, as the first speaker in the session is noteworthy. Liao’s expertise in the field of AI and digital governance lends valuable insights and perspectives to the discussion. As the opening speaker, Liao is expected to lay the foundation for further discussions throughout the session.
Overall, the session on AI and digital governance is highly relevant to the objectives outlined in SDG 9 and SDG 17. Through Kyung Ryul Park’s moderation and the contributions of speakers like Matthew Liao, the session aims to foster knowledge-sharing, promote effective governance, and enhance understanding of AI and its implications in the digital age.
Liming Zhu
Speech speed
171 words per minute
Speech length
1179 words
Speech time
415 secs
Arguments
Australia has developed AI ethics principles with the goal of responsible AI adoption
Supporting facts:
- In 2019, Australia’s Department of Industry and Science, in consultation with industry stakeholders, established AI ethics principles
- CSIRO (Australia’s national science agency) and the University of New South Wales have been working on operationalizing these principles over the past four years
- The AI ethics principles focus on human-centered values, environment well-being, fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability
Topics: AI Ethics, Responsible AI Adoption
Fragmentation in terms of AI and AI governance coming from different stakeholder groups is not a negative issue
Supporting facts:
- Different stakeholder groups have slightly different concerns and the robust discussion between the groups are important
Topics: AI, AI governance, stakeholder groups, fragmentation
Science should not be fragmented as it is international and not value-based
Supporting facts:
- Scientific evidence and advice to policy and stakeholder groups need to come from wider collaboration
Topics: Science
Report
Australia has taken significant steps in developing AI ethics principles in collaboration with industry stakeholders. The Department of Industry and Science, in consultation with these stakeholders, established these principles in 2019. The country’s national science agency, CSIRO, along with the University of New South Wales, has been working to operationalise these principles over the past four years.
The AI ethics principles in Australia have a strong focus on human-centred values, ensuring fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability. These principles aim to guide the responsible adoption of AI technology. By prioritising these values, Australia aims to ensure that AI is used in ways that respect and protect individuals’ rights and well-being.
In addition to the development of AI ethics principles, it has been suggested that the use of large language models and AI should be balanced with system-level guardrails. OpenAI’s GPT model, for example, modifies user prompts by adding text such as ‘please always answer ethically and positively.’ This demonstrates the importance of incorporating ethical considerations into the design and use of AI technologies.
Diversity of stakeholder groups and their perspectives on AI and AI governance is viewed as a positive factor. The presence of different concerns from these groups allows for robust discussions and a more comprehensive approach in addressing potential challenges and ensuring the responsible deployment of AI.
Fragmentation in this context is seen as an opportunity rather than a negative issue. Both horizontal and vertical regulation of AI are deemed necessary. Horizontal regulation entails regulating AI as a whole, while vertical regulation focuses on specific AI products.
It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations. Collaboration and wider stakeholder involvement are considered vital for effective AI governance. Scientific evidence and advice should come from diverse sources and require broader collaboration between policy and stakeholder groups.
This approach ensures that AI policies and decisions are based on a comprehensive understanding of the technology and its impact. Overall, Australia’s development of AI ethics principles, the emphasis on system-level guardrails, recognition of diverse stakeholder perspectives, and the need for both horizontal and vertical regulation reflect a commitment to responsible and accountable AI adoption.
Continued collaboration, engagement, and evidence-based policymaking are essential to navigate the evolving landscape of AI technology.
Matthew Liao
Speech speed
186 words per minute
Speech length
2409 words
Speech time
778 secs
Arguments
AI regulation is critical to ensure the technology doesn’t cause harm and promotes human rights
Supporting facts:
- Tech companies voluntary commitments on principles of safety, security, trust secured by President Biden
- EU getting ready to adopt AI Act to provide a law framework for AI
Topics: AI, human rights, tech governance, legal regulation
Everyone has a responsibility towards AI regulation, not just tech industry or experts
Supporting facts:
- Regulation should involve companies, AI researchers, governments, universities and the public
Topics: AI, regulation, public participation
Enforceability is one of the biggest challenge in imposing regulations
Topics: AI regulation, legal challenges
Need for a collective approach in dealing with AI-related challenges
Supporting facts:
- Mention of need for collective approach involving the public, government and researchers
- Compares with the Nuclear Energy scenario
Topics: Collective Approach, AI Governance, Multi-stakeholders
Biomedical model as a reference for AI
Supporting facts:
- Consider drug discovery scenario where there is a lot of innovation and yet it is heavily regulated
- AI could be regulated similarly
Topics: Biomedical Model, AI Regulation, Drug Discovery
AI can be seen in risk-based levels
Supporting facts:
- Some AI applications might be low risk such as games, while others like medical devices could be more serious
Topics: AI Risk Levels, AI Regulation
Regulatory capture should not deter regulations of AI
Supporting facts:
- Regulatory capture is a concern, but history shows large industries, like the airline, too face regulations
- Regulation has driven safety innovations in the aviation industry
Topics: Regulatory Capture, AI Regulation
Report
The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of regulations to prevent harm and protect human rights. They argue that regulations should be based on a human rights framework, focusing on the promotion and safeguarding of human rights in relation to AI.
They suggest conducting human rights impact assessments and implementing regulations at every stage of the technology process. The speakers all agree that AI regulations should not be limited to the tech industry or experts. They propose a collective approach involving tech companies, AI researchers, governments, universities, and the public.
This multi-stakeholder approach would ensure inclusivity and effectiveness in the regulation process. Enforceability is identified as a major challenge in implementing AI regulations. The complexity of enforcing regulations and ensuring compliance is acknowledged. The speakers believe that regulations should be enforceable but recognize the difficulties involved.
The analysis draws comparisons to other regulated industries, such as nuclear energy and the biomedical model. The speakers argue that a collective approach, similar to nuclear energy regulation, is necessary in addressing AI challenges. They also suggest using the biomedical model as a reference for AI regulation, given its successful regulation of drug discovery.
A risk-based approach to AI regulation is proposed, considering that different AI applications carry varying levels of risk. The speakers advocate for categorizing AI into risk-based levels, determining the appropriate regulations for each level. Potential concerns regarding regulatory capture are discussed, where regulatory agencies may be influenced by the industries they regulate.
However, the analysis highlights the aviation industry as an example. Despite concerns of regulatory capture, regulations have driven safety innovations in aviation. In summary, the analysis underscores the importance of AI regulation in mitigating risks and protecting human rights. It emphasizes the need for a human rights framework, a collective approach involving various stakeholders, enforceability, risk-based categorization, and lessons from other regulated industries.
Challenges such as enforceability and regulatory capture are acknowledged, but the analysis encourages the implementation of effective regulations for responsible and ethical AI use.
Rafik Hadfi
Speech speed
159 words per minute
Speech length
1033 words
Speech time
390 secs
Arguments
Digital inclusion is a key element of modern society and it is also linked to gender equality
Supporting facts:
- Digital inclusion is an integral aspect of incorporating disadvantaged members of society into the use of ICT.
- Worked on a program in Afghanistan that aimed to empower women through digital inclusion
Topics: Digital inclusion, Gender equality, AI
Emphasizes the importance of community empowerment and inclusion in data collection
Supporting facts:
- The approach forms local community groups and trains them on AI systems
- Training helps build effective datasets and maintain diversity
Topics: Data Collection, Artificial Intelligence, Community Empowerment, Inclusion
Training AI in different languages reduces bias towards a particular context
Supporting facts:
- AI trained in English tends to be biased towards a particular context
- AI was trained in different languages in Indonesia and Afghanistan
Topics: Artificial Intelligence, Cultural Diversity, Bias in AI, Language Training
Using AI for women's empowerment and equity
Supporting facts:
- Focus on the Afghan case, using AI for women's empowerment
Topics: Artificial Intelligence, Women's Empowerment, Equity
Report
Digital inclusion is an essential aspect of modern society and is closely linked to the goal of gender equality. It plays a crucial role in integrating marginalized individuals into the use of information and communication technology (ICT) tools. Programs conducted in Afghanistan have shown that digital inclusion efforts can empower women by providing them with the knowledge and resources to actively engage with ICT technologies, bridging the societal gap and enabling them to participate more fully in digital spaces.
Artificial Intelligence (AI) has significant potential in facilitating digital inclusion and promoting social good. Case studies conducted in Afghanistan demonstrate that integrating AI into online platforms predominantly used by women can enhance diversity, reduce inhibitions, and foster innovative thinking among participants.
This highlights the transformative impact of AI in empowering individuals and ensuring their active involvement in digital spaces. Additionally, emphasizing community empowerment and inclusion in data collection processes is crucial for achieving the Sustainable Development Goals (SDGs). By involving local communities in training programs focused on AI systems, effective datasets can be created and maintained, ensuring diversity and representation.
This approach recognizes the significance of empowering communities and involving them in decision-making processes, thereby promoting inclusivity and collaborative efforts in achieving the SDGs. It is worth noting that training AI systems solely in English can lead to biases towards specific contexts.
To address this bias and ensure a fairer and more inclusive AI system, training AI in different languages has been implemented in Indonesia and Afghanistan. By expanding the linguistic training of AI, biases towards specific contexts can be minimized, contributing to a more equitable and inclusive implementation of AI technologies.
Moreover, AI has been employed in Afghanistan to address various challenges faced by women and to promote women's empowerment and gender equality. By using AI in women's empowerment initiatives, Afghanistan takes a proactive approach to addressing gender disparities and promoting inclusivity in society.
In conclusion, digital inclusion, AI, and community empowerment are crucial components in achieving the SDGs and advancing towards a sustainable and equitable future. Successful programs in Afghanistan demonstrate the transformative potential of digital inclusion in empowering women. AI can further facilitate digital inclusion and promote social good by enhancing diversity and inclusivity in digital spaces.
Emphasizing community empowerment and inclusion in data collection processes is essential for creating effective and diverse datasets. Training AI in different languages helps minimize bias towards specific contexts, promoting fairness and inclusivity. Lastly, utilizing AI for women's empowerment initiatives contributes significantly to achieving gender equality and equity.
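As a rough illustration of the multilingual training idea, the sketch below continues masked-language-model training of a multilingual model on non-English text, so the model is less anchored to English-language context. It assumes the Hugging Face transformers and datasets libraries are installed; the corpus file names and hyperparameters are hypothetical placeholders, not details from the programs described above.

```python
# A minimal sketch of continued multilingual pretraining; corpus files
# below (e.g. Indonesian and Dari text) are hypothetical placeholders.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-multilingual-cased"  # covers 100+ languages
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical local corpora in the target languages.
corpus = load_dataset("text", data_files={"train": ["id.txt", "fa-AF.txt"]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Random masking supplies the training labels for masked-LM learning.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-out", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

Broadening the training distribution in this way is one standard, if partial, mitigation for the English-centric bias the speaker describes.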
Seung Hyun Kim
Speech speed
169 words per minute
Speech length
1195 words
Speech time
423 secs
Arguments
The intersection of advanced technologies and developing countries can further exacerbate social and economic problems
Supporting facts:
- Drug cartels using the cable car system as a distribution mechanism in Colombia
- The possibility of misuse of AI technologies in communities vulnerable to illicit activities
Topics: AI Technologies, Social Implications, Developing Countries
Fragmentation of ICT and information systems in government hinders efficient governance
Supporting facts:
- The Ethiopian public finance system had fragmented systems running on different platforms that did not communicate with each other
- Different funding sources lead to different systems being implemented
Topics: Governance, ICT
Dependence on foreign technology undermines technology sovereignty
Supporting facts:
- Equatorial Guinea was largely dependent on Huawei and China for its ICT infrastructure
Topics: Technology Sovereignty, Foreign Dependence
Report
The intersection between advanced technologies and developing countries can have negative implications for social and economic problems. In Colombia, drug cartels have found a new method of distribution by using the cable car system. This not only enables more efficient operations for the cartels but also poses a significant challenge to law enforcement agencies.
Another concern is the potential misuse of AI technologies in communities that are already vulnerable to illicit activities. The speakers highlight the need to address this issue, as the advanced capabilities of AI can be exploited by those involved in criminal activities, further exacerbating social and economic problems in these areas.
In terms of governance, the Ethiopian government faces challenges due to the fragmentation of its ICT and information systems. There are multiple systems running on different platforms that do not communicate with each other. This lack of integration and coordination hampers efficient governance and slows down decision-making processes.
It is clear that the government needs to address this issue in order to improve overall effectiveness and service delivery. Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infrastructure, raises concerns about technology sovereignty.
By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing control over its own systems and data. This dependence undermines the ability to exercise full control and authority over technological advancements within the country.
The speakers express a negative sentiment towards these issues, highlighting the detrimental impact they can have on social and economic development. It is crucial for policymakers and stakeholders to address these challenges and find appropriate solutions to mitigate the negative effects of advanced technologies in developing countries.
Overall, the analysis reveals the potential risks and challenges that arise from the intersection of advanced technologies and developing countries. By considering these issues, policymakers can make more informed decisions and implement strategies that help to maximize the benefits of technology while minimizing the negative consequences.
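The fragmentation problem described above is, at bottom, an interoperability problem. One common remedy, sketched minimally below, is to agree on a shared exchange schema and wrap each legacy platform in a thin adapter that maps its native records into that schema. All system names, fields, and figures here are hypothetical illustrations, not details of the Ethiopian systems discussed.

```python
# Illustrative sketch of the adapter pattern for fragmented systems.
from dataclasses import dataclass

@dataclass
class BudgetRecord:          # the agreed common exchange schema
    ministry: str
    fiscal_year: int
    amount_usd: float

class LegacySystemA:
    """Stand-in for one platform that exports its own field names."""
    def export(self):
        return [{"org": "Health", "fy": 2023, "usd": 1_200_000.0}]

def adapt_system_a(raw: dict) -> BudgetRecord:
    # Map platform-specific fields onto the shared schema.
    return BudgetRecord(ministry=raw["org"],
                        fiscal_year=raw["fy"],
                        amount_usd=raw["usd"])

# Once every platform has such an adapter, downstream tools can consume
# one uniform record type instead of N incompatible formats.
records = [adapt_system_a(r) for r in LegacySystemA().export()]
print(records)
```

The design choice here is to leave the legacy systems in place and standardize only the exchange layer, which is typically cheaper than replacing systems funded by different donors.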
Takayuki Ito
Speech speed
104 words per minute
Speech length
509 words
Speech time
292 secs
Arguments
Development of a hyper-democracy platform
Supporting facts:
- Project started in 2010 with the COLLAGREE system
Topics: artificial intelligence, group decision support system
Use of the D-Agree system in Afghanistan
Supporting facts:
- Collecting opinions from Kabul civilians in August 2021
- Collaboration with United Nations Habitat
Topics: D-Agree system, Afghanistan, United Nations, American troops
Report
Upon analysis, several compelling arguments and ideas related to artificial intelligence (AI) and its impact on various domains emerge. The first argument revolves around the development of a hyper-democracy platform, initiated with the COLLAGREE system in 2010. Although specific details about this system are not provided, it can be inferred that the intention is to leverage AI to enhance democratic processes.
This project is regarded positively, indicating an optimistic outlook on the potential of AI to improve democratic systems globally. Another noteworthy argument is the role of AI in addressing social network problems such as fake news and echo chambers. Recognising text structures with AI is highlighted as a potential solution.
By leveraging AI algorithms to analyse and detect patterns in text, it becomes possible to identify and counteract the spread of false information and the formation of echo chambers within social networks. The positive sentiment expressed further underscores the belief in the power of AI to mitigate the negative impact of misinformation on society.
Additionally, the D-Agree system, which grew out of the COLLAGREE project, is introduced as a potential solution for addressing specific challenges in Afghanistan. The system aims to collect opinions from Kabul civilians, indicating a focus on incorporating the perspectives of local populations.
Furthermore, collaboration with United Nations Habitat underscores the potential for the D-Agree system to contribute to the achievement of Sustainable Development Goals related to good health and well-being (SDG 3) and peace, justice, and strong institutions (SDG 16). Lastly, the positive sentiment encompasses the potential of AI to support crowd-scale discussions through the use of multiple AI agents.
A multi-agent architecture for group decision support is being developed, which emphasises the collaborative capabilities of AI in facilitating large-scale deliberations. This development aligns with the goal of fostering industry, innovation, and infrastructure (SDG 9). The overall analysis showcases the diverse applications and benefits of AI in various domains, including democracy, social networks, conflict zones like Afghanistan, and large-scale discussions.
These discussions and arguments highlight the hopeful perspective of leveraging AI to address complex societal challenges. However, it is important to note that further information and evidence would be necessary to fully understand the potential impact and limitations of these AI systems.
Summary: The analysis reveals promising arguments for the use of artificial intelligence (AI) in different domains. The development of a hyper-democracy platform through the COLLAGREE system shows optimism for enhancing democratic processes. AI’s potential in combating fake news and echo chambers is underscored, providing hope for addressing social network problems.
The D-Agree system’s focus on collecting opinions from Kabul civilians in Afghanistan, together with its collaboration with United Nations Habitat, suggests its potential in achieving SDG goals. The use of multiple AI agents for crowd-scale discussions exhibits AI’s collaborative capabilities. Overall, AI presents opportunities to tackle complex societal challenges, though further information is needed to fully evaluate its impact.
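To illustrate the kind of multi-agent facilitation architecture mentioned above, here is a toy sketch in which several single-purpose "facilitator" agents each react to posted opinions and a coordinator collects their outputs. This is a minimal illustration of the pattern under stated assumptions, not the actual D-Agree implementation; all agent behaviours and example posts are stand-ins.

```python
# Toy multi-agent facilitation loop; every behaviour here is a stand-in.
from dataclasses import dataclass, field

@dataclass
class Discussion:
    posts: list[str] = field(default_factory=list)
    facilitation: list[str] = field(default_factory=list)

class ReasonAgent:
    """Prompts participants to justify opinions that lack a reason."""
    def react(self, post: str) -> str | None:
        if "because" not in post.lower():
            return f"Could you share the reason behind: '{post}'?"
        return None

class ThemeAgent:
    """Tags posts with coarse discussion themes (toy keyword list)."""
    KEYWORDS = ("water", "safety", "jobs")
    def react(self, post: str) -> str | None:
        hits = [k for k in self.KEYWORDS if k in post.lower()]
        return f"Tagged themes: {hits}" if hits else None

def run_agents(discussion: Discussion, agents) -> None:
    # Each agent independently reacts to every post; a coordinator
    # could later rank, merge, or post these reactions back.
    for post in discussion.posts:
        for agent in agents:
            msg = agent.react(post)
            if msg:
                discussion.facilitation.append(msg)

d = Discussion(posts=["We need a safer water supply",
                      "I disagree because jobs matter more"])
run_agents(d, [ReasonAgent(), ThemeAgent()])
print(d.facilitation)
```

Splitting facilitation into narrow, composable agents is what lets such a system scale to crowd-sized discussions that a single human facilitator could not follow.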