AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Session report
Full session report
Audience
Countries around the world are facing significant challenges in implementing artificial intelligence (AI) due to variations in democratic processes and understanding of ethical practices. The differences in governance structures and ethical frameworks make it difficult for countries with non-democratic processes to effectively grasp and navigate the complexities of AI ethics. Even in relatively democratic countries like the Netherlands, issues arise due to these disparities.
Furthermore, many countries are rushing to implement AI without giving due consideration to important factors such as data quality, data collection, and data protection and privacy laws. The focus tends to be on deploying AI algorithms without first laying down the core elements required for a successful transition to AI-driven systems. This is a cause for concern, particularly in the global South, where data protection and privacy laws are often inadequate.
The lack of adequate data quality and collection mechanisms, coupled with inadequate data protection and privacy laws, raises serious concerns about the safety and integrity of AI systems. Without proper measures in place, there is a risk of bias, discrimination, and potential misuse of data, which can have far-reaching consequences for individuals and societies.
In order to address these challenges, governments must recognize the need to ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge. The rapid advances in AI capabilities require a proactive approach in developing the necessary infrastructure and upskilling the workforce to keep up with the evolving technology.
In conclusion, the implementation of AI is hindered by variations in democratic processes and understanding of ethical practices among countries. Rushing into AI implementation without addressing critical issues such as data quality and protection can lead to significant problems, particularly in countries with insufficient data protection and privacy laws. Governments play a crucial role in fostering appropriate technical infrastructure and developing the necessary skills to effectively navigate the challenges posed by AI technologies.
Jingbo Huang
Jingbo Huang places significant emphasis on the importance of collective intelligence in both human-to-human and human-to-machine interactions. He recognizes the potential for artificial intelligence (AI) and human intelligence to work in unison to tackle challenges, highlighting the positive aspects of this partnership rather than focusing solely on the negatives. Huang emphasizes the need for collaboration and preparation among human entities to ensure the integration of AI into society benefits all parties involved.
Huang further expresses curiosity about the collaboration between different AI assessment tools developed by various organizations. Specifically, he mentions the UNDP’s AI readiness assessment tool and raises questions about how it aligns or interacts with tools developed by the OECD, Singapore, Africa, and others. This indicates Huang’s interest in exploring potential synergies and knowledge-sharing among these assessment tools.
Additionally, Huang asks the panelists what keeps them awake at night, seeking to understand the challenges they face during AI conceptualization and implementation. By examining these pain points, he aims to identify the obstacles encountered in bringing AI projects to fruition and the knowledge needed to overcome them.
In summary, Jingbo Huang underscores the significance of collective intelligence, both within human-to-human interactions and between human and machine intelligence. Huang envisions a collaborative approach that leverages the strengths of both AI and human intelligence to address challenges. He also shows a keen interest in exploring how different AI assessment tools can work together, seeking to identify potential synergies and compatibility. Moreover, he expresses curiosity about the challenges faced during the AI conceptualization and implementation process. These insights reflect Huang’s commitment to fostering mutual understanding, collaboration, and effective utilization of AI technologies.
Denise Wong
Singapore has taken a human-centric and inclusive approach to AI governance, prioritising digital readiness and adoption within communities. This policy aims to ensure that the benefits of AI are accessible and beneficial to all members of society. The model governance framework developed by Singapore aligns with OECD principles, demonstrating their commitment to ethical and responsible AI practices.
In adopting a multi-stakeholder approach, Singapore has sought input from a diverse range of companies, both domestic and international. They have collaborated with the World Economic Forum Centre for the Fourth Industrial Revolution on ISAGO, an implementation and self-assessment guide for organisations, and have worked with a local company to write a discussion paper on Gen-AI. This inclusive approach allows for a variety of perspectives and fosters collaboration between different stakeholders in the development of AI governance.
Practical guidance is a priority for Singapore in AI governance. They have created a compendium of use cases that serves as a reference for both local and international organisations. Additionally, they have developed ISAGO, an implementation and self-assessment guide for companies to ensure that they adhere to best practices in AI governance. Furthermore, Singapore has established the AI Verify Foundation, an open-source foundation that provides an AI toolkit to assist organisations in implementing AI in a responsible manner.
Singapore recognises the importance of international alignment and interoperability in AI governance. They encourage alignment with international organisations and other governments and advocate for an open industry focus on critical emerging technologies. Singapore believes that future conversations in AI governance will revolve around international technical standards and benchmarking, which will facilitate cooperation and harmonisation of AI practices globally.
However, concerns are raised about the fragmentation of global laws surrounding AI; compliance costs can increase when laws are fragmented, which could hinder the development and adoption of AI technologies. Singapore acknowledges the need for a unified framework and harmonised regulations to mitigate these challenges.
Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening divides and negatively affecting vulnerable groups. Singapore, being a highly connected society, is aware of the possibility of certain groups being left behind. Bridging these divides and ensuring that technology is inclusive and addresses the needs of vulnerable populations is a priority in their AI governance efforts.
Cultural and ethnic sensitivities in conjunction with black box technology are also a concern. It is unpredictable whether technology will fragment or unify communities, particularly in terms of ethnic and cultural sensitivities. Singapore acknowledges the importance of considering a culturally specific perspective to understand the potential impacts of AI better.
In conclusion, Singapore’s approach to AI governance encompasses human-centricity, inclusivity, and practical guidance. Their multi-stakeholder approach ensures a diversity of perspectives, and they prioritise international alignment and interoperability in AI governance. While concerns exist regarding the fragmentation of global laws and the potential negative impacts on vulnerable groups and cultural sensitivities, Singapore actively addresses these issues to create an ethical and responsible AI ecosystem.
Dr. Romesh Ranawana
Sri Lanka has just begun its journey towards improving AI readiness and currently lags behind many other countries in both AI readiness and capacity.
However, the government of Sri Lanka has recognised the importance of AI development and has taken the initiative to develop a national AI policy and strategy, with the policy expected to launch in November and the strategy to follow in April 2024. The government understands that engagement in AI development should not be limited to the private sector or select universities, but needs to be a national initiative involving various stakeholders.
Currently, AI projects in Sri Lanka face challenges in implementation. Although university students in the country conducted over 300 AI projects last year, few if any went into production; most stop at the proof-of-concept or research-paper stage. This highlights the need for better infrastructure and support to bring these projects to fruition.
One of the primary obstacles to AI advancement in Sri Lanka is the lack of standardized and digitized data. Data is often siloed and still available in paper format, making it difficult to utilize it effectively for AI applications. This challenge is not just technical but also operational, requiring a change in mindsets, awareness, and trust. Efforts to develop AI projects are being wasted due to the absence of consolidated data sets that address national problems.
In order to overcome these challenges, Sri Lanka aims to establish a sustainable, inclusive, and open digital ecosystem. The United Nations Development Programme (UNDP) is working on an AI readiness assessment for Sri Lanka. This assessment will help identify areas that need improvement and provide recommendations to establish an ecosystem that fosters AI development.
In conclusion, Sri Lanka is in the early stages of improving its AI readiness and capacity. The government is taking an active role in formulating a national AI policy and strategy. However, there are challenges in terms of implementing AI projects, primarily due to the lack of standardized and digitized data. Efforts are being made to address these challenges and establish a sustainable digital ecosystem that supports AI development.
Alison Gillwald
In Africa, achieving digital readiness for artificial intelligence (AI) poses significant challenges due to several fundamental obstacles. Limited meaningful access to the internet is a major barrier: many African countries now have over 95% broadband coverage, yet fewer than 20% of their populations are online, below the critical mass needed for network effects. This gap between coverage and use severely limits the potential benefits of AI. Additionally, the high cost of devices prevents a large portion of the population from acquiring the technology needed to get online and engage with AI applications. Moreover, rural location is a greater hindrance to access than gender, further exacerbating the digital divide in Africa.
Education emerges as a key driver of digital readiness and the ability to absorb AI applications in Africa. Education strongly influences whether individuals can afford devices, and thereby their ability to engage with AI technology. Consequently, investing in education is crucial for enhancing digital readiness and facilitating successful AI adoption in Africa.
The African Union Data Policy Framework plays a critical role in creating an enabling environment for AI in Africa. The framework recognizes the significance of digital infrastructure in supporting the African Continental Free Trade Area and provides countries with a clear action plan, policy alignment, and implementation support. It aims to overcome the challenges faced in achieving digital readiness for AI in Africa.
Addressing data governance challenges and managing the implications of AI require global cooperation. Currently, 90% of the data extracted from Africa goes to big tech companies abroad, necessitating the development of global governance frameworks to effectively manage digital public goods. Collaboration on an international scale is essential to ensure that data governance supports AI development while protecting the interests and sovereignty of African nations.
Structural inequalities pose a significant challenge to equal AI implementation. When AI blueprints from countries with different political economies are implemented in other societies, inequalities are deepened, leading to the perpetuation of inequitable outcomes. Ethical concerns surrounding AI are also raised, highlighting the role played by major tech companies, particularly those rooted in the world’s most prominent democracies. Ethical challenges arise from these companies’ actions and policies, which have far-reaching implications for AI development.
An additional concern is the presence of bias and discrimination in AI algorithms due to the absence of digitization in some countries. In certain nations, such as Sri Lanka, where there is a lack of full digitization, people remain offline, resulting in their invisibility, underrepresentation, and discrimination in AI algorithms. This highlights the inherent limitations of AI datasets in being truly unbiased and inclusive, as they rely on digitized data that may exclude significant portions of the global population.
In conclusion, African countries face several challenges in achieving digital readiness for AI, including limited internet access, high device costs, and rural location constraints. Education plays a crucial role in enhancing digital readiness, while the African Union Data Policy Framework provides an important foundation for creating an enabling environment. Addressing data governance challenges and managing the implications of AI require global cooperation and collaboration. Structural inequalities and ethical concerns pose significant risks to the equitable implementation of AI. Additionally, the absence of digitization in some countries leads to bias and discrimination in AI algorithms.
Alain Ndayishimiye
AI has the potential to have a profound impact on societies, but responsible and transparent practices are required to ensure its successful integration and development. Rwanda is actively harnessing the power of AI to advance its social and economic goals as it aims to become an upper middle-income nation by 2035 and a high-income country by 2050.
Rwanda’s national AI policy is considered a beacon of responsible and inclusive AI. This policy serves as a roadmap for the country’s AI development and deployment and was developed collaboratively with various stakeholders. Through this multi-stakeholder approach, Rwanda was able to create a comprehensive and robust policy framework that supports responsible AI practices.
One key benefit of the multi-stakeholder approach in developing Rwanda’s AI policy is the promotion of knowledge sharing and capacity building. By bringing together different stakeholders, experiences and insights were shared, fostering learning and collaboration. This approach also contributed to the strengthening of local digital ecosystems, creating a supportive environment for the development and implementation of AI technologies.
However, ethical considerations remain important in the development and deployment of AI. Concerns such as biases in AI models and potential privacy breaches need to be addressed to ensure AI is used ethically and does not harm individuals or society. Additionally, the impact of AI on job displacement and potential misuse in surveillance should be carefully managed and regulated.
To further promote the responsible use of AI and create a harmonised environment, it is crucial for African countries to collaborate and harmonise their AI policies and regulations. This would allow for a unified approach when dealing with large multinational companies and help reduce the complexities of regulation. Harmonisation would also facilitate the development of shared digital infrastructure, attracting global tech giants by providing a consistent and supportive regulatory environment.
In conclusion, the transformative potential of AI for societies is significant, but responsible and transparent practices are essential in its development and deployment. Rwanda’s national AI policy serves as an example of responsible and inclusive AI, with a multi-stakeholder approach promoting knowledge sharing and capacity building. However, ethical considerations and the harmonisation of AI policies among African countries should be prioritised to ensure the successful integration and benefits of the digital economy, positioning Africa as a significant player in the global digital space.
Galia Daor
The Organisation for Economic Co-operation and Development (OECD) has been actively involved in the field of artificial intelligence (AI) since 2016. They adopted the first intergovernmental standard on AI, the OECD AI Principles, in 2019. These principles consist of five values-based principles for all AI actors and five policy recommendations for governments and policymakers.
The values-based principles focus on fairness, transparency, accountability, and human-centredness. They aim to ensure that AI systems respect human rights, promote fairness, avoid discrimination, and maintain accountability. Through them, the OECD aims to establish a global framework for responsible AI development and use.
The OECD AI Principles also provide policy recommendations to assist governments in developing national AI strategies that align with the principles. The OECD supports countries in adapting and revising their AI strategies according to these principles.
In addition, the OECD emphasizes the need for global collaboration in AI development. They believe that AI should not be controlled solely by specific companies or countries. Instead, they advocate for a global approach to maximize the potential benefits of AI and ensure equitable outcomes.
While the OECD is optimistic about the positive changes AI can bring, they express concerns about the fragmentation of AI development. They highlight the importance of cohesive efforts and coordination to avoid hindering progress through differing standards and practices.
To conclude, the OECD’s work on AI focuses on establishing a global framework for responsible AI development and use. They promote principles of fairness, transparency, and accountability and provide support to countries in implementing these principles. The OECD also emphasizes the need for global collaboration and acknowledges the potential challenges posed by fragmentation in AI development.
Robert Opp
Embracing artificial intelligence (AI) can drive significant progress towards achieving the Sustainable Development Goals (SDGs), according to the SDG Digital Acceleration Agenda, a report by the UN Development Programme (UNDP) and the International Telecommunication Union (ITU). The report found that digital technology, including AI, could positively impact 70% of the SDG targets. However, the adoption of AI varies among countries due to their differing stages of digital transformation and the challenges they face.
For instance, Sri Lanka requires a national-level initiative to build AI readiness and capacity, since this cannot be achieved solely at the corporate or private-sector level; other countries have recognized this and launched national-level initiatives of their own. UNDP actively supports digital programming and has initiated an AI readiness process in Sri Lanka, Rwanda, and Colombia. This process complements national digital transformation processes and views the government as an enabler of AI.
Challenges in implementing AI include fragmentation, financing, addressing foundational issues, and ensuring representation and diversity. Foundational issues matter because AI is only as good as the data it is trained on, financing constraints can hinder effective implementation, and representation and diversity are crucial to avoid bias and promote fairness.
Advocates argue for a multi-stakeholder and human-centered approach to AI development as a method of risk management. This approach emphasizes the importance of including various worldviews and cultural relevancy in the development process.
The report also highlights the need for inclusivity and leaving no one behind in the journey towards achieving the SDGs. It champions working with indigenous communities, who represent different worldviews, to ensure that every individual has the opportunity to realize their potential.
In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful consideration must be given to address challenges such as fragmentation, financing, foundation issues, and representation and diversity. By adopting a multi-stakeholder and human-centered approach, AI can be harnessed effectively and inclusively to drive sustainable development and improve the lives of people worldwide.
Session transcript
Robert Opp:
So, please feel free to join us at the table. Don’t have to sit in the gallery. This is a round table after all. Peter, are you going to lurk in the corner over there or you want to join us at the table? Can I just do a check for our panelists online? Dr. Ranawana, are you there? Can you hear us? Oh, okay, now we can see you. Oh, perfect. Thank you. And we’ve got Alan, are you there? Yes. Can you unmute, please, so that we can? Oh, apologies for that. I was speaking on mute. Perfect. Yeah, good morning to you all, and good day to whatever part of the world you’ve been waiting for. Okay, great. Thank you so much. All right. We’ll get started. Still some seats at the table. Feel free to join us at the table if you wish. I think we’ll get started here. Okay. Okay. Good afternoon, everyone in Kyoto. Good morning, good afternoon, or good evening for those of you joining online. It’s great to have you all with us. This session is on AI is Coming, Are Countries Ready or Not? And this week has been full of AI-related events, and I’m grateful that you’ve still got the stamina to join us for this one. This is a discussion that we really want to bring forward on how countries in different stages of their digital transformation effort are taking the opportunity or trying to figure out the challenges around adopting artificial intelligence for the purpose of their national development process. And so looking forward to a good conversation on this. I’ll just, my name is Robert Opp, and I’m the Chief Digital Officer from the United Nations Development Program. UNDP, for those of you who are not aware, is essentially a big development arm of the UN system. We have presence in 170 countries. We work across many different thematic areas, including governance, climate, energy, resilience, gender, et cetera, all for the purpose of poverty eradication. And our work in digital really stems from that, because it is about how do we embrace the power of digital technology in a responsible and ethical way that puts people and their rights at the center of technological support for the development. So just to set a few words of context, I think, obviously, AI, especially with the advent of generative AI, has just exploded into the public consciousness around what is potentially available for countries in terms of the power of technology. And as we are in terms of the state of, let’s say, a pivotal point of history, three weeks ago we celebrated the SDG Summit. It marks the halfway point to the Sustainable Development Goals. We are not on track for the Sustainable Development Goals, unfortunately. Only 15% of the targets have actually been achieved. Some work that we, together with the International Telecommunication Union did in a report that was released called the SDG Digital Acceleration Agenda, we found that 70% of the SDG targets could actually be positively impacted with the use of technology. And I have to say, during that week of the high-level segment, a few weeks back, of the General Assembly, there was a lot of discussion around digital transformation overall, the power of technology, and particularly, like here, the interest, I might say, the buzz around artificial intelligence and what might it do. But it’s not so straightforward for countries to know what to do, where to turn, for countries who don’t have necessarily all of the foundations, who are not aware of the models out there. 
And so the conversation today is really about how do we, what situation are countries in now, and what might we do to support countries as they embrace AI? What can countries also do to reach out and organize themselves with the support of others? And I think it’s important to note that our view on this is really based in the opportunity. A number of discussions this week have focused on the potential negative impacts of artificial intelligence, which is correct, because there are lots of concerns. But on the positive side, when we look at this as UNDP, there is tremendous potential opportunity here to embrace AI and really make significant progress against the SDGs. And so the conversation today is about how to do that in a responsible and ethical way. But we’re going to focus a little bit more on the opportunity than the sort of doom and gloom end of humanity view, not that that’s not important. But okay, so to join us today and for really kind of giving some texture to this roundtable, we’ve got a few fire starter speakers with us. And we’re very grateful to have a great mix of people that can really speak to this issue. So we have Dr. Romesh Ranawana, who’s the chairman of the National Committee to Formulate AI Policy and Strategy for Sri Lanka. That was an entity that was established by the president this year. We have joining us soon, hopefully, in the chair beside me, which is still empty, Dr. Alison Gillwald, who’s the executive director of Research ICT Africa, which is a digital policy and regulatory think tank based in South Africa. We have Denise Wong with us, who’s assistant chief executive within the Data Innovation and Protection Group at Singapore’s Infocomm Media Development Authority, IMDA. We have Galia Daor, who’s a policy analyst within the Digital Economy Policy Division at OECD’s Directorate for Science, Technology, and Innovation. And we have Alain Ndayishimiye. I’m sorry, Alain, if I haven’t got all of the syllables of your last name in there, who’s the project lead for Artificial Intelligence and Machine Learning at the Centre for the Fourth Industrial Revolution based in Rwanda. And so my plan here is that we’ll go through some initial comments from all of our speakers, and then we do want to turn this over to you as well. I’m also going to make just a couple remarks from the UNDP side and some of the work that we’re doing in this space as well before, just before we go to Q&A. But the offer to join us at the table is still open for those of you who’d like to come, because it is a roundtable. All right. With that, let’s go to our first speaker. And, you know, the setting here or the overall question is, you know, are countries ready for AI? What are you seeing on the ground? And what have the experiences been so far in building open, inclusive, trusted digital ecosystems that can support AI? And to speak first, I’m going to turn to Dr. Romesh Ranawana from Sri Lanka. Dr. Ranawana, the floor is yours.
Dr. Romesh Ranawana:
Thank you so much, Robert, and good morning, good afternoon to all. As you mentioned, Sri Lanka has just embarked on this journey of, you know, trying to improve AI readiness and bring the benefits of AI to the general population. But what we are faced with here, as a country with a very low level of AI readiness and AI capacity, is quite a gargantuan task, mainly because the AI revolution is just starting. And if you look at where other countries are, we are significantly behind, and we need to catch up to make sure that we bring the benefits of AI both to the people and our economy as well. And something that we’ve seen happening around the world over the last few years is that I think most countries have realized that building AI readiness, building AI capacity cannot be done at, you know, at the corporate level or the private sector level or by a few universities. It’s been accepted now that it’s got to be a national level initiative that needs to take this forward. And we’ve seen most of the developed countries have formulated national AI strategies over the last few years, and most of the middle income countries as well, especially over the last two years, have formulated policies. So in Sri Lanka, what we have is a strange situation where we have lots of engineers who are capable of building AI systems. And we did a study recently where we found that just over the last year, there have been more than 300 projects in our universities conducted by university students on AI. But the problem that we have is that very few of these systems or none of these systems are actually going into production. They are stopping at the stage where it’s a proof of concept or a research paper, but it’s not really going into society and actually causing benefits. So our challenge here was how do we create an ecosystem where not only is the research done, but also for some of these benefits to be brought out into government services, into building the economy, into making food production more efficient, into education and things like that. Now, the challenge that we have, and we are very fortunate that the government took the initiative to set up the Presidential Task Force to look at national AI policy, and our current trajectory is to launch the policy in November, and then a strategy which will come up with the execution plan for the policy, which will come out in April 2024. But the challenge with AI is the fact that AI is a general-purpose technology. AI can affect just about any sector, from education to the health sector to the national economy, government services, and as a country with limited resources, our challenge was how do we pick the battles that we want to address initially with our AI policy. We can’t do everything because our resources are limited, and this is quite a difficult task. And as for the general guidelines for how we want to approach this, we had three main pillars that we were looking at. First, what were those foundation elements that we need to put in place to build up AI readiness and AI capacity? Number two, what are those specific applications and specific areas that we need to focus on that will cause immediate impact and also impact on the medium term? And third is also to set up the regulatory environment on how we are going to protect our citizens from the negative impacts of AI as well.
And for this, once again, the scope is unlimited on what we can do, and we’ve been very fortunate that the UNDP stepped in and has started working on an AI readiness assessment for Sri Lanka, which will be the foundation of setting out those parameters on what we need to look at for what should be our main priorities and focus areas for the AI strategy that we are developing. So the AI readiness assessment at the moment is underway, and this AI readiness assessment will evaluate our strengths, our weaknesses, and the opportunities that lie ahead for Sri Lanka in terms of AI. So as we stride forward, our eyes are set on fostering an open and inclusive digital ecosystem that will not only withstand the shockwaves of the AI revolution, but also harness its potential for the greater good of our people. It’s not going to be an easy task. I mean, developing a policy and a strategy is one thing, but I think the key element for Sri Lanka is how we are going to execute on this and also do this in such a way that it’s sustainable, where this policy is not going to be put aside when governments change or the priorities of the government change. So that’s something that we are also looking at on how we can approach that. But really, our focus at the moment is first identifying our boundaries. What should the AI policy in Sri Lanka initially focus on? And then from there onwards, building on where we are going to go. Thank you.
Robert Opp:
Thank you so much, Dr. Ranawana. Fascinating questions to be asked, and I’m sure shared with a number of other countries. We’re going to go to our next speaker, Denise Wong of IMDA in Singapore. And Singapore has done a lot very quickly, I would say, in the AI space. And we’re aware of some of the work you’ve done in policy and governance and how you’ve really worked to include putting people at the centre, taking a human-centric approach. Could you tell us a little bit more about the approach Singapore has taken and some of the things you’ve done to make this a human-centric endeavour?
Denise Wong:
Thanks for the question, and thank you for having me. So, you’re indeed right. I think our policy has always been quite an inclusive one. As part of the national AI strategy, everything that we’re doing today has really been on the back of or building upon foundations about inclusion and about a high level of digital readiness and adoption within our communities. And that’s really been the bedrock for all the work that we’ve done after that. Focusing specifically on AI governance, which is the area that I work in, of course, in the area of governance and regulation, you’re always thinking about risks and potential of misuse. But I prefer not to see it only in that frame. A lot of it has been about what does AI mean and what does AI mean for the public good, in the public interest, and it’s in that context that we see the opportunities for our public at large but of course with the appropriate guardrails and safety nets and implementable guidance and thus if I sum up our approach it’s really been about being practical and having detailed guidance to help shape norms and usage and in doing so we started off with a model governance framework fully aligned to OECD principles which was very important to us to have the international alignment and we took a multi-stakeholder approach in developing that we also took a fairly international approach in doing that we got feedback from more than 60 companies from different sectors both domestic and international as part of the first iteration of the model governance framework we also worked on what we call ISAGO which is an implementation and self-assessment guide for companies and that was actually done together with the World Economic Forum Center for the Fourth Industrial Revolution and that helps to provide practical alignment for companies with their governance practices with the model governance framework we also put together a compendium of use cases which contains illustrations on how local and international organizations can align and implement these practices so it was always a fairly practical approach that we took an organization centric lens and that sort of took away the sting of maybe politics or risk or existential and really just focused on what companies could do, should do at the very practical level. In the Gen-AI space I would say we’ve also been fairly practical and industry focused. We issued a discussion paper in June focusing on Gen-AI. It was framed as a discussion paper rather than a white paper because we really wanted to generate discussion. It was an acknowledgement that we didn’t know all the answers. No one does and we wrote it together with a company in Singapore so that we had both perspectives. We’ve also launched the AI Verify Foundation in June. It’s an open source foundation. To be honest we’re also learning how to do open source foundations as we go along but that also has an AI open source toolkit, not in the Gen-AI space but in the discriminative AI space, but that was really a toolkit that we wanted to build and let companies sort of take and adopt and adapt for their own use so that we sort of lowered the cost of compliance for companies. The AI Verify Foundation has over 80 companies now who have joined us from all over the globe and we did think that it was important to bring different voices to the table at the industry level but also at the end-user level to understand what were the fears and concerns that people had on the ground.
So it’s been a constant sort of conversation that we’ve had with our public and with our companies, with international organisations, with other governments. All with the aim of, I guess, interoperability and global alignment, but also to encourage a sort of open, industry-focused lens, and that’s generally the way we have approached a lot of these issues in critical emerging technologies, frontier technologies where we may not know what the answer is. The last piece I’ll say is that we’ve also been looking at the question of standards and benchmarking and evaluations, because a lot of what comes beyond the principles will be about what those technical standards are, and we do think that it is quite important to have international alignment on that as well, and we do hope that beyond general principles that’s where a lot of the conversation will go. Thank you.
Robert Opp:
Thank you so much and I want to turn to our third kind of country-focused example and we’re going to go to Alain Ndayishimiye, I’m sorry Alain you’ll have to correct me on the pronunciation of your last name I’m so sorry, who works at the Centre for the Fourth Industrial Revolution based in Rwanda and, you know, as a Centre for the Fourth Industrial Revolution it’s by nature a multi-stakeholder endeavor, and I guess my question for you is what’s the situation you see on the ground in Rwanda and how can multi-stakeholder approaches help with building the capacity of local digital ecosystems to engage in AI?
Alain Ndayishimiye:
Yes, thank you, moderator. Once again, let me take the opportunity to greet everyone, wherever you are in the world. Before I contribute to this esteemed panel, allow me to extend my heartfelt gratitude to the UNDP team for inviting me to be part of this dialogue. As AI continues to shape our world, the need for responsible and transparent practices has never been more pressing. AI has the potential to transform societies on a global scale, but it also brings with it inherent risks if not developed, deployed and managed responsibly. This calls for a multi-stakeholder approach in addressing these issues. So as introduced, my name is Alain Ndayishimiye, I’m the project lead for AI and machine learning at the Centre for the Fourth Industrial Revolution. Our work revolves around identifying governance gaps and designing, testing and refining governance protocols and policy frameworks that can be developed and adopted by government policymakers and regulators, to keep up with the accelerated pace of AI and capture the benefits of adopting it while minimizing its potential risks. For Rwanda, AI is a leap forward technology that, through appropriate design and responsible implementation, can help advance Rwanda’s social and economic aspirations of becoming an upper middle income country by 2035 and a high income country by 2050. Even more, AI as a general purpose technology holds the power to help achieve the UN Sustainable Development Goals. In addition, AI has been identified as a driver of innovation and global competitiveness, and this is as a result of the government’s dedication to harnessing the power of data and algorithms as a catalyst for social and economic change and transformation. So in response to the question posed to me, allow me to reference our journey in developing Rwanda’s AI policy as a case study. Rwanda, often referred to as the land of a thousand hills, is now aspiring to be the land of AI innovation. With our national AI policy now formally approved, we have set forth on a transformative journey. This policy isn’t just a roadmap, it’s a testament to Rwanda’s vision and commitment to position itself as a beacon of responsible and inclusive AI on the global stage. However, its ambitious goals require a strong foundation to build upon, and this is where we bring the concept of stakeholder collaboration to the forefront, and this is why we were established as a centre. Our experience with the multi-stakeholder approach has been both enlightening and transformative. Crafting and implementing a national AI policy wasn’t a solitary endeavour. It was a symphony of collaboration between the Ministry of ICT and Innovation of Rwanda, the Centre for the Fourth Industrial Revolution, the public sector, international partners, academia, the private sector and civil society, collaborating towards a common goal. These stakeholders brought different perspectives, experiences and expertise, enriching the policy development process. The process of developing the AI policy was an inclusive and consultative one. Consultations and workshops were held, enabling stakeholders to share their insights, concerns and ideas. By involving multiple stakeholders, the policy development process ensured transparency, accountability and participation, resulting in a more comprehensive and robust policy framework. One of the key benefits of a multi-stakeholder approach is the diversity of perspectives it brings to the table.
In the case of Rwanda’s AI policy, involving diverse stakeholders meant a holistic understanding of the current challenges and opportunities, resulting in a more nuanced, effective policy solution. The collaboration between stakeholders also helped build consensus and trust, fostering a sense of ownership of the policy among all stakeholders. Furthermore, a multi-stakeholder approach promotes knowledge sharing and capacity building among stakeholders, ultimately strengthening local digital ecosystems. In the development of Rwanda’s AI policy, stakeholders from different sectors and organisations shared experiences and knowledge, fostering learning and collaboration. This has not only resulted in a more comprehensive AI policy, but also heightened the capacity of stakeholders to effectively implement it. The multi-stakeholder approach has greatly aided Rwanda in establishing its AI strategy on a firm data governance foundation. As we all know, data serves as the lifeblood of AI, making robust data governance essential. By collaborating with stakeholders through thoughtful consultation, Rwanda’s AI policy now encompasses stringent data protection and privacy guidelines. And this aligns with the principles of the recently enacted Rwanda Data Protection and Privacy Law, which we helped co-design and which mandates safeguarding and upholding data privacy in any processing of the data of Rwandan residents. In conclusion, the multi-stakeholder approach has undoubtedly played a critical role in strengthening local digital ecosystems in Rwanda and building the foundation of our strategy. It has promoted collaboration, knowledge sharing and capacity building among stakeholders, resulting in a more comprehensive and effective AI policy. This approach has not only fostered inclusive and responsible development of AI, but also built trust and confidence among stakeholders, promoting sustainable and inclusive growth of local digital ecosystems. Furthermore, collaborative risk assessment informed by various stakeholders enables us to identify and mitigate adverse AI-associated risks. Moreover, by collaborating with our international partners, we have aligned our local AI initiatives with global best practices, ensuring that Rwanda is at the forefront of AI, both locally and internationally. Thank you for the opportunity to speak. Over to you, moderator.
Robert Opp:
Thanks so much, Alain. Some really interesting observations there. And actually, the last thing you said was looking at what’s happening globally. And that’s actually where I’d like to turn the conversation now. We have a couple speakers who are going to talk to kind of a zoomed out perspective and sort of looking overall. So with that, I want to turn to Alison Gillwald, who is the Executive Director of Research ICT Africa. And you’ve been working across the African continent on some research to understand where countries are with their AI readiness. We’ve just heard an example from the Rwanda case. But if you zoom out a bit, what are some of the takeaways that you’re seeing from the African experience so far with AI?
Alison Gillwald:
Thank you very much. So I think, you know, when we speak about the digital readiness of AI, we’re actually asking the same question as we did about the digital readiness for the data economy, the same questions we were asking about the digital readiness for broadband or internet. Because in fact, across the continent, many of those foundational requirements are still not met. So many, many countries, you know, Rwanda, Lesotho, many, many countries actually have now 95% plus broadband coverage, mobile coverage, high speed broadband coverage. And yet we have, you know, less than the sort of 20% critical mass that we know you need to see the kind of network effects, the benefits of being online, of broadband, you know, associated with economic growth and those kinds of things. So there are still existing analog problems, and there are also still, you know, enormous digital backlogs. And the research that we do, nationally representative surveys, access and use surveys they used to be, but now they’re very much more comprehensive, looking at financial inclusion and platform work and all sorts of other things, really gives us a better sense of the maturity and, you know, what people are actually doing. Those studies are done across several countries in Africa. And what we see actually is that the real challenges are around the demand side issues. So yes, you know, the biggest barrier to the internet is actually the cost of the device. And, you know, there are all sorts of associated policy issues around that and, you know, things can be done. And then of course, once people are online, you get this very minimal use of data, of broadband, because people can’t afford it. The affordability side is the demand side; the pricing is the supply side. And that goes to our business models, our regulatory models, our lack of institutional capabilities or endowments to do some of the effective regulation you would need of these very imperfect markets. But really, the challenges are on the demand side. And, you know, all the kind of aggregated gender data that you get, that presents, you know, this growing disparity between women and men, which is not true across all parts of the continent at all, is really around education. The thing that is driving access, whether you can afford that device or anything, is education. And that’s from the modelling that we can do, because these are demand side, fully representative of the census frame, demand side studies. And of course associated with education is income, so, you know, people who are employed. And it’s because women are concentrated amongst those who are less educated and employed. In fact, gender, if you control for it on its own, is not necessarily a major factor. And then, of course, multiple other factors. So, you know, the much greater factor than gender actually is rural location, but there are a number of intersectional factors that really impact people’s participation. So a lot, you know, a lot of the demand driven new technology frontier strategies are looking at some of the supply side issues, and of course are looking at the high-level skills issues that you need, the sort of data scientists or data engineers or that sort of thing.
But it’s actually, you know, it’s this fundamental human development challenge, but also this fundamental ecosystem, you know, the economy and society, that really has to be addressed fundamentally if we’re going to be able to address these higher level issues. And so, you know, just questions of absorption. Even if we are thinking about trying to create, you know, public sector data sets that could be used for the public sector, you know, planning purposes, so it’s kind of building some public value out of this. I think that’s an important point that we need to come back to, because I think a lot of the AI kind of models are driven by, you know, commercial value creation, which of course we desperately want on the continent, and the kind of innovation, you know, discourse, which of course we want on the continent. But actually to get there and to make sure that that is, you know, equitable, inclusive, just, requires that some of these other factors actually drive policy. And, you know, basically the kind of absorptive capacity of your firms, the absorptive capacity of your citizenry. You know, we see, for example, many, many countries, you know, now planning AI applications for government services, which, you know, historically are, you know, if you’ve got less than 20% of your population connected, then, you know, digital services become a, you know, vanity project, unless you actually can get people to, you know, use these services more effectively. And I think that’s why, you know, this enabling environment, these foundational requirements that we have are absolutely essential. And, you know, we speak about this a lot in terms of the infrastructure side and the human development side of things. But the enabling legal environment, the enabling, you know, human-centered, as you called it, but, you know, rights-based environment, as we’ll see as it plays out, is actually an absolutely essential foundation for building this kind of environment. And so I just briefly want to touch on, because it might seem tangential to AI, but what we actually think is an absolutely critical step in creating these conditions, is the African Union Data Policy Framework, which has really created this enabling environment that you need. The first half of the framework really deals with these enabling conditions. I mean, we don’t call them preconditions because we don’t have the luxury of getting, you know, 50% of people online or, you know, more than the majority of your country with a digital ID or, you know, a data infrastructure in place. So these things have to happen, but they’re very strongly acknowledged. So there’s a very strong component in the data policy framework that creates this enabling environment that has really leveraged the African continental free trade area in getting member states, I think, to understand that unless they have this digital underpinning for the continental free trade agreement, which is a single digital market for Africa, they’re simply not going to be the beneficiaries of a common market. And I think that’s allowed some leverage on that. But it’s also allowed us to return to some of the challenges we’ve had around, you know, a human rights framework. And I think there’s a, it’s a high-level principle document, but there’s a commitment to progressive realization of very ambitious, and I think, you know, absolutely laudable and good objectives that we now need to get to. There’s an implementation plan, so countries can actually be supported.
I think that’s been our biggest challenge. I think Sri Lanka was actually speaking about the challenge of implementation being such a great one, so there’s an implementation strategy now. But I think, you know, the important part is that we can kind of, we can come back to some of these foundational things that we haven’t got right. You know, there’s lots of talk about a trusted environment. There are a lot of assumptions from so-called best practices from elsewhere in the world that assume, you know, institutional endowments, regulatory autonomy, you know, competitive markets, you know, skills and ability in these markets that simply isn’t there. And I think, you know, the document importantly points out that, you know, of course, cybersecurity is important for, you know, for building trust, data protection. These are necessary conditions, but they’re not sufficient conditions. And so the questions around, you know, the legitimacy of the environment that you’re in, if you’re wanting to build, you know, a kind of digital financial system that’s going to engage with a common market, these kinds of things all become really important. And so it’s got very kind of clear action plan alignment of, you know, various potentially conflicting legacy policies that might be there. And of course, the big acknowledgment, which I will try and make the last point because I may have just run over. But I think, you know, the issues particularly with data governance. Sorry. The issues particularly with data governance but have implications for AI, very strong implications for AI, are that, you know, we’re setting up a lot of national plans. And of course, that’s all we can do at one level. But essentially, these are globalized. And we would argue, you know, digital public goods that we now need to govern through global governance frameworks. A lot of the things we want to do, particularly the, you know, safeguarding of harms are very often, you know, we’ve got our local companies. We try to build local companies. But, you know, 90% of the data that’s extracted from Africa, you know, goes out of Africa. It goes to big tech and big companies. So these national strategies have to be located globally. And the other side of that, also from a global governance point of view because we no longer can do this, which we would usually do with public interest regulation. And again, I think a lot of the focus is on the, you know, the negative things of AI. And so you’ve got to build this, you know, compliance regime, harms, you know, protection compliance regime, is the lack of attention, which we do see in a lot of OECD work in this area, is about the economic regulation that you need of the underlying, you know, data economy, access to data, access to quality data, those kinds of things, you know, open data regimes, which are in the data policy framework, governance component, by the way. But I think there’s, you know, a lot of the discussions that we’ve had this week, a lot of emphasis on safeguards, harms, privacy, but not a lot on what you would really need to require to redress the uneven distribution that we see in opportunities, not just harms, which we do see as well, you know, between countries of the world, but also within countries. Well, speaking of OECD, we just happen to have them here.
Robert Opp:
But Alison, thank you for opening up a huge can of worms there on multiple levels of global governance and things. We won’t be able to get to all of those, but really interesting insight into the Africa experience so far. So I want to turn to our next and last speaker for kind of our initial set of speakers here, Galia Daor, who’s from the OECD. And, you know, OECD, as Alison was saying, has done a fair bit of work in this space, and you’ve produced a set of AI principles, and I know you’re working on toolkits and guidance and things like that, but maybe tell us a little bit more what you see from the global level here about what countries are asking for, what the state of readiness is, just what you’re seeing in general.
Galia Daor:
Yeah, thanks very much. I admit it’s a bit challenging to speak after Alison on that front, but I will try, and I will try to do justice to the OECD’s work, but also recognizing that there really are challenges and also I think not one organization, or obviously not one country, can address all of them. So I think at the OECD we come to this from the perspective of, yes, a set perhaps of assumptions, but I think it doesn’t replace, I think, other work that needs to be done. So maybe just to sort of get a bit into that work, so the OECD started working on artificial intelligence in 2016, and then in 2019 we adopted the first intergovernmental standard on artificial intelligence, the OECD AI principles, and these are sort of a set of five values-based principles that apply to all AI actors, and a set of five policy recommendations for governments, for policymakers. The values-based principles are sort of about what makes AI trustworthy, and also go into some of what other speakers have mentioned on the benefits of AI, but also the risks, and I think both are important. So elements like using AI for sustainable development and for well-being, also sort of having AI as human-centered, as well as addressing risks through transparency, security, and the importance of accountability. Separately, the policy recommendations, so I think perhaps linked to what Alison said without sort of prejudging the situation of any specific country, sort of looking at what a country would need to put into place in order to be able to achieve these things. So R&D for AI, but also the digital infrastructure, including data, including connectivity. The enabling policy environment, the capacity, the human capacity building, and of course, international and multi-stakeholder collaboration, which is a point that others have made already. So the principles are now adopted by 46 countries, including Singapore, as was already mentioned, and other countries like Egypt, and also serve as the basis for the G20 AI principles. And as was mentioned, our work now is focusing on how to support countries in implementing these principles, so how to translate principles into practice. And sort of looking at perhaps three types of actions that we’re taking, so focusing on the evidence base. So one aspect is to look at what countries are actually doing, so looking at national AI strategies that countries around the world are adopting. So we have an online interactive platform, the OECD.AI Policy Observatory, that has already more than 70 countries in it. And what we’ve seen, for example, since we started this work, at least as far as we know, is that 50 countries have adopted national AI strategies, which I think is an interesting data point. The observatory also has other data on AI, including sort of investment in AI in countries around the world, research publications, so to see which countries are more active in this space, and what they’re doing, jobs and skills, and sort of movement around the world of jobs and skills for AI. So a wealth of information there. We also have an expert group, a network of experts, which is multidisciplinary and international, with sort of very broad participation. And we’re also developing sort of practical tools to support countries, and organizations, sorry, I should say, in implementing the AI principles.
Perhaps one last point that I would mention, in terms of what we’re seeing with these principles now: one thing is that we see they are impacting national and international AI frameworks around the world, through the definition of AI that’s in the OECD principles, but also through our classification framework for AI systems. And the other thing that I’ll say is that we are also supporting countries, if they’re interested, in developing or revising their national AI strategies to align with the AI principles. So this is work that, for example, we’re now doing with Egypt. But I’ll stop here, and I really look forward to the discussion. Thank you.
Robert Opp:
Thanks so much, Galia. And the time is racing by. I can’t believe it. We have about 15 minutes left in this session, and I’ll do my best now to open up for some questions. And Jingbo, I wonder if you want to make a couple of remarks as well, just to put you on the spot. But before that, so think of your questions now, just to mention a couple of things from the UNDP side: we are doing digital programming, or supporting digital programming, in about 125 countries, 40 to 50 of which are really looking at national digital transformation processes and some of those foundations that Alison was talking about. Because we really see the importance of building an ecosystem. This doesn’t happen with fragmented solutions. This happens when you build the kind of foundational ecosystem that is comprised of people, the regulatory side, the government side, the business side, and so on and so forth, as well as your underlying connectivity and affordability. And we’ve also started an additional process, which we’re calling the AI readiness process, that can complement that. It really looks at, and this is what Dr. Ranawana was talking about, where we’ve been working to support Sri Lanka, Rwanda, and Colombia currently, how government serves as an enabler and how society is set up to handle artificial intelligence in terms of capacity and some of those foundational issues. And this is something that we have been doing. It’s been piloted together with, or under the auspices of, an interagency UN process that’s led by ITU and UNESCO. And it is something that we hope will be one of the tools available to countries in the toolkit as they seek to address these issues, taking that kind of ecosystem approach. So if any of you are representing national interests here and would be interested in that, please let us know. With that, I think I’d like to turn over. Jingbo, I was pointing to you because Jingbo Huang is the director of the UN University in Macau and has a research initiative focused on AI. And if you want to take the floor, I don’t want to put you on the spot, but if you had any quick observations, and then I’ll turn to some questions. We’ve got a question here and a couple online. Is that okay? I didn’t warn you before. I’m sorry.
Jingbo Huang:
Thank you, Robert. I’m here to learn. My name is Jingbo. I’m the director of the UN University Research Institute in Macau. We are a UN research organization, and our work is mainly related to AI governance. So, for example, we conduct research, training, and education from the angle of biases related to gender and children in algorithms, and we have done research in collaboration with some UN organizations, for example, UNESCO, ITU, UN Women, and soon, hopefully, with UNDP. So I’m really here with an open mind to learn about this topic. We saw a very nice overview and pictures from Africa, from Asia, from the OECD, so it’s really a great learning. So the one keyword that comes into my mind is collective intelligence, and it’s not only the collective intelligence between people and people, where we talk about regulatory frameworks and business, with all these human entities working together to make this infrastructure ready; we’re also talking about machine intelligence, if we call it intelligence, and human intelligence working together. How are we taking that forward? Like what Robert said at the beginning, it’s not only about the dark side. So how do we bring the bright side together? So collective intelligence is the keyword that just emerged in my mind. So I have, like, two questions since I’m learning here. The first question is related to the different tools and frameworks that the OECD developed, that Singapore developed, and maybe Africa also has developed, and also UNDP. How do these tools work together? For example, I just learned the concept of UNDP’s AI readiness assessment tool, and now I heard about your different tools. How do these tools work together? Or maybe they don’t. So this is the first question. The second question is to all the panelists: what keeps you awake at night now? Because this is important for me to learn, what are the challenges you’re facing right now in this implementation process, in this conceptualization process? I have the overview, but I want to know the pain points. Thank you.
Robert Opp:
All right, we’re going to quickly just go to a couple questions here so that we then will have time for response from panelists.
Audience:
Thank you very much. My name is Auke Aukepals, and I work for KPMG in the responsible AI practice. First of all, I was triggered by this session title, so you did a good job with the session proposal. I work for KPMG, the Netherlands, but I also coordinate our efforts globally. And what we see is a large difference in how countries themselves act in a democratic way. Being part of the ethics work stream really gives me a broad view of the entire world, actually, as certain countries have no democratic processes in place, while others do. So with our advisory practice, it’s really difficult to advise on ethics in a country that has no clue what that’s about, to be a little bit proactive about that. So that’s really difficult. Also, to answer your question, are countries ready? No, definitely not yet. Because coming from the Netherlands, we see issues even in our own country, which is relatively quite democratic. So we really need to cooperate together. And also thanks to the OECD guidelines and principles; they really function well, and we use them in our daily work on a daily basis. I’m also happy to contribute to the next iterations, if possible. But yeah, these are my observations from the outside. Thanks. We’re going to take one more question here. We’ll go online quickly. And then I think we’ll just have a chance for panelists to come back once and then we’ll close. Hi. I am Armando Manzuela from the Dominican Republic. First of all, I’d like to thank all the organizations for doing this amazing session. All the people who intervened have made remarkable points regarding AI for development in this case. Well, there’s a thing here with AI, and it’s the way that it’s being promoted by the companies and by the international organizations: the promise that AI will transform the world, that it will change everything, which is actually right. But the thing is that the race to become AI proficient at all levels in most nations, especially in the global south, has been taken in, I must say, not necessarily the right direction. Because we’re focusing on implementing algorithms, implementing AI-infused solutions to do a myriad of things, especially in government. But the main problem is that we don’t have the core elements for making a transition to an AI-based society just yet, starting with data. So we have problems with data quality, with data collection, with how we assure that the data is correct so we can prevent biases. And of course, we don’t have the infrastructure in place, and most of the countries have inadequate data protection and privacy laws and regulations. So given this situation, and knowing how things are moving, how do we propose or create a set of rules, a set of frameworks, that helps guide countries in the right direction regarding data? Because when we talk about AI, we’re really talking about large language models, which is just data. So if the data is not right, how can we properly implement AI solutions that actually help our countries to develop? And this is, moreover, the question we’re asking now in the global south, in the Dominican Republic. Thank you, Armando. Okay, let’s go online very quickly, because we’re really running out of time, and I’m going to turn to my colleague who’s on my team, Yasmin Hamdar, who’s been moderating online.
And Yasmin, I’m sorry to make you do this, but can you just pick one question for the – I know you’ve got more than that, but just pick one and ask, please. First, thanks, Rob. So we have one interesting question: given the rapid advances in AI capabilities, how can governments ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge?
Robert Opp:
Okay, how to make sure the workforce is agile enough, which is related, I think, to many of these. All right, so I’m going to go back to our panelists, and I think this, unfortunately, will have to be our closing round as well. And I think Jingbo’s given a good question that I’d like all of you to answer, which is what keeps you awake at night. But if you’d like to speak as well to the questions about the tools, I will also have a response on that one. Also the issue of this sort of how do we get the fundamentals right? How do we get the right data? And those kinds of things. How do we work toward a collective intelligence? Dr. Ranawana, can I just turn to you first for your brief responses, please?
Dr. Romesh Ranawana:
Of course. I mean, essentially, the problem is, as has been mentioned so many times, the foundational elements. And for us, one of the biggest obstacles to taking our AI ambitions forward, and also to providing the benefit to the people, especially in terms of government efficiency, corruption, and making things more available, is data. Sri Lanka is fortunate that we have good connectivity, and about 90% of the population does have connectivity available to them. But the lack of data, as has been highlighted so many times, is probably our biggest problem. Data is extremely siloed, and in a lot of situations it is still only available in paper format. So how to first digitize it, standardize it, and then make it available to those who need it in a fair and responsible manner is probably our biggest challenge now. And that’s not only a technical challenge, but also an operational challenge. It’s changing mindsets, awareness, trust in these systems. And that’s something that we are really struggling with, how to take that forward. Thanks so much. Is that what keeps you awake at night? Absolutely. That is definitely one of the big ones. Because like I said, we have so many people doing AI projects, but they’re running AI projects on data that they download from the internet, data related to other countries. We don’t have projects running on Sri Lankan problems because we just don’t have those datasets available. So all these efforts are being wasted because we don’t have a consolidated set of datasets to address national problems. Thanks so much.
Robert Opp:
Alain, let’s go to you next. What keeps you awake at night?
Alain Ndayishimiye:
So what often keeps me up at night is the ethical development and deployment of AI; here I’m referring to ethical considerations when developing and actually deploying these technologies. Concerns around the risks associated with the technology, such as biases in AI models, potential privacy breaches, and broader societal impacts such as job displacement and the misuse of AI in areas such as surveillance and autonomous operations, are the things that actually keep me up at night. So ensuring that AI is used responsibly and benefits all of society is a challenge that requires continuous vigilance and adaptation. And please allow me to also speak to the question around how these instruments need to work together. So let me speak on harmonization, especially in the African context. Harmonizing policy and regulatory efforts among African countries is not only pivotal for their participation in the global digital economy, but also provides a unified front when dealing with the large multinationals that are at the center of this global digital data economy transformation. Such harmonization efforts foster economic integration, enabling smoother cross-border trade and investment, and promote standardization, reducing the complexity of differing regulations. Harmonization also facilitates the development of shared digital infrastructure, ensuring connectivity across regions. A unified stance strengthens Africa’s voice in global negotiations and ensures a better representation of its interests. By addressing shared digital challenges collectively, Africa can devise effective solutions, attract global tech giants through a consistent regulatory environment, and inspire innovation. Furthermore, a harmonized approach helps ensure consumer protection and robust data privacy standards, and boosts African competitiveness in the digital realm. In essence, a coordinated policy framework is essential for Africa to leverage the benefits of the digital economy and position itself as a significant player within this space. So thank you once again for this opportunity. Over to you.
Robert Opp:
Thanks so much, Alain. Denise, let’s turn to you. What keeps you awake at night on this issue, and any other comments you want to make?
Denise Wong:
Thank you. I think on a global level, I worry about fragmentation. I think we’ve been in this space for a long time now in different areas where global laws are fragmented, and that just raises compliance costs for everyone. So I think we have an opportunity to do it right and have that conversation early, and we should try and do that. I think at a more domestic level, I worry about leaving vulnerable groups behind, even in a society that’s highly connected and highly literate like Singapore. There’s always that fear that technology will widen divides and create harms that we cannot anticipate to groups of people that we should be protecting the most. And I guess the third thing I worry about is cultural sensitivities and ethnic sensitivities, especially with black box technology. It’s hard to predict whether the technology is going to fragment and divide, or it’s going to unify and cohere. And so part of what we do is to try and unpack what it means from a culturally specific lens. And that is really about AI for the public good.
Robert Opp:
Thanks Denise. I think I’ll turn to Alison.
Alison Gillwald:
Thank you. Sure. What keeps me awake at night is the inevitable deepening of inequality, unless we address some of the underpinning structural inequalities that are leading to this. And I think that’s very likely if we simply take these blueprints from countries with completely different political economies and conditions and just implement them onto these societies. And just in that regard, I have to say that although having democratic frameworks within which to develop AI policy is obviously a challenge for many of us, I think we really need to appreciate that the ethical challenges we are facing are with some of the biggest tech companies, at least some of which come from the biggest democracies in the world. So I think the ethical issues should be addressed globally and can be addressed globally. And just finally, on the point that was being made, we can’t actually unbias these big datasets because, as was mentioned for countries like Sri Lanka, the countries aren’t digitized and people are not online. We simply can’t unbias the invisibility, the underrepresentation, and the discrimination that we’re seeing in algorithms currently.
Robert Opp:
Galia. Yeah.
Galia Daor:
So very quickly, just to say, I think I can really relate to a lot of the things that Denise said about fragmentation, and this is a real concern. I think what keeps me up at night is also that we will miss out on the opportunities that AI offers, which I think ultimately have the potential to make things better for everyone if we do it right. And I think it’s too big to miss, and that means it’s something that we can’t leave to just companies, we can’t leave to a certain set of countries. Which I guess leads me to this: because AI itself is global, because it has no borders, this has to be a collaborative effort, and it needs to be genuinely collaborative. And I think this kind of conversation, it’s not a start because we’ve been in that process for a while, but this kind of conversation is really important. Thanks.
Robert Opp:
Thanks, Galia, and thanks to all our panelists. We’re over time, and yeah, I’m getting the nod, but I would just say a couple of things to try to sum up what I’ve heard, and to add a little bit of my own insomnia or sleeplessness to this. You know, I think we’ve heard that there are certainly challenges here, and the challenges that have been named are things like fragmentation, and the foundations, and it’s so important to get the foundations right, which is hard. This is not a simple process. It involves a lot of moving parts, a lot of complexity, and a lot of issues around financing and everything else, but we have to do it, and we have to help countries get there, and I’m talking to myself, that’s partly our role. And if I add what keeps me awake at night, it’s very similar to what has been mentioned here. If we, as the United Nations system, stand for leaving no one behind as part of the 2030 Agenda, and if we say that artificial intelligence is a major opportunity for humanity, but artificial intelligence is only as good as the data behind it, the training of that data, and the production of the algorithms, then how are we going to ensure representation and diversity in the underlying datasets and the models that are put forward? Because otherwise these will not be culturally relevant to everyone’s worldview. We work with indigenous communities across the world, with thousands of local languages; these represent different worldviews, and human development is not about everyone becoming the same, it’s about every human realizing their own potential. So, that being said, on the opportunities here, I think what we’ve heard over and over again is that the multi-stakeholder approach is really critical, and if we’re going to bring in those worldviews, it’s going to have to be an intentional, consultative process. And I think being human-centered in all of this is a method of risk management. This is a way to ensure that we build the basis and the foundation that we really need. I know I’m missing out some of the nuanced points that were made, but I’m really very grateful to all of you: first, our panelists for having spoken today and giving us some insights, and all of you who’ve joined us in the room as well as online. Please do reach out to us at UNDP, or to the other panelists in their organizations, for any other questions or support that we might be able to give, and we will get through this together. So thank you very much. Please give ourselves a round of applause. Thank you.
Speakers
Alain Ndayishimiye
Speech speed
160 words per minute
Speech length
1287 words
Speech time
482 secs
Arguments
AI has the potential to transform societies but requires responsible, transparent practices
Supporting facts:
- AI carries risks if not managed and developed responsibly
- The importance of multi-stakeholder approach in addressing these issues
Topics: AI, Responsible use of AI, AI implementation
Rwanda is using AI to advance their social and economic goals
Supporting facts:
- AI is a leap forward technology for Rwanda
- Rwanda aims to become an upper middle income country by 2035 and a high income country by 2050 through the use of AI
Topics: AI, Rwanda’s social and economic goals, AI for social and economic transformation
Multi-stakeholder approach promotes knowledge sharing and capacity building, strengthening local digital ecosystems
Supporting facts:
- Multi-stakeholder approach brought diversity in the development of Rwanda’s AI policy
- The collaboration created a comprehensive and robust policy framework
- Stakeholders shared experiences and knowledge, fostering learning and collaboration
Topics: Multi-stakeholder approach, Knowledge sharing, Capacity building, Local digital ecosystems
Ethical considerations of AI development and deployment
Supporting facts:
- Concerns about biases in AI models and potential privacy breaches
- Impact on society such as job displacement and AI misuse in surveillance
Topics: AI ethics, privacy, bias in AI models
A coordinated policy framework is essential for Africa
Supporting facts:
- For leveraging the digital economy benefits
- For positioning Africa as a significant player in the digital space
Topics: Digital economy, policy framework, digital data transformation
Report
AI has the potential to have a profound impact on societies, but it requires responsible and transparent practices to ensure its successful integration and development. Rwanda is actively harnessing the power of AI to advance its social and economic goals.
The country aims to become an upper middle-income nation by 2035 and a high-income country by 2050, relying heavily on AI technologies. Rwanda’s national AI policy is considered a beacon of responsible and inclusive AI. This policy serves as a roadmap for the country’s AI development and deployment and was developed collaboratively with various stakeholders.
Through this multi-stakeholder approach, Rwanda was able to create a comprehensive and robust policy framework that supports responsible AI practices. One key benefit of the multi-stakeholder approach in developing Rwanda’s AI policy is the promotion of knowledge sharing and capacity building.
By bringing together different stakeholders, experiences and insights were shared, fostering learning and collaboration. This approach also contributed to the strengthening of local digital ecosystems, creating a supportive environment for the development and implementation of AI technologies. However, ethical considerations remain important in the development and deployment of AI.
Concerns such as biases in AI models and potential privacy breaches need to be addressed to ensure AI is used ethically and does not harm individuals or society. Additionally, the impact of AI on job displacement and potential misuse in surveillance should be carefully managed and regulated.
To further promote the responsible use of AI and create a harmonised environment, it is crucial for African countries to collaborate and harmonise their AI policies and regulations. This would allow for a unified approach when dealing with large multinational companies and help reduce the complexities of regulation.
Harmonisation would also facilitate the development of shared digital infrastructure, attracting global tech giants by providing a consistent and supportive regulatory environment. In conclusion, the transformative potential of AI for societies is significant, but responsible and transparent practices are essential in its development and deployment.
Rwanda’s national AI policy serves as an example of responsible and inclusive AI, with a multi-stakeholder approach promoting knowledge sharing and capacity building. However, ethical considerations and the harmonisation of AI policies among African countries should be prioritised to ensure the successful integration and benefits of the digital economy, positioning Africa as a significant player in the global digital space.
Alison Gillwald
Speech speed
178 words per minute
Speech length
2133 words
Speech time
718 secs
Arguments
African countries struggle with digital readiness for AI due to fundamental challenges that remain unmet.
Supporting facts:
- Many countries in Africa have 95% broadband coverage yet less than 20% achieve the network effects of being online.
- The biggest barrier to the internet is the cost of the device.
- Rural location is a greater factor of access limitation than gender.
- Less than 20% of the population is connected in many countries.
Topics: AI readiness, Africa, Digital readiness, Challenges
Education is the key to enhancing digital readiness and AI absorption in Africa.
Supporting facts:
- Access to education impacts whether people can afford the device.
- Education is a major factor driving digital readiness and the ability to absorb AI application.
Topics: Education, Digital readiness, AI readiness, Africa
The challenges of data governance and implications for AI require a global perspective and international collaboration.
Supporting facts:
- National plans need to be globally located because 90% of the data extracted from Africa goes to big tech companies abroad.
- Global governance frameworks are needed for managing digital public goods.
Topics: Data governance, AI, Globalization
Structural inequalities lead to inevitable implication of inequality in AI
Supporting facts:
- Inequalities are deepened if AI blueprints from countries with different political economies are implemented in other societies
Topics: Artificial Intelligence, Inequalities, AI policy
Ethical challenges with AI are from major tech companies, stemming from the biggest democracies in the world
Topics: Ethics, AI policy, Major tech companies
AI datasets can’t be ‘unbiased’ because some countries and individuals are not digitised and leave no data trails
Supporting facts:
- Countries like Sri Lanka are not fully digitized; thus, people are not online, leading to invisibility, underrepresentation, and discrimination in algorithms
Topics: Artificial Intelligence, Datasets, Bias, Undigitization
Report
In Africa, achieving digital readiness for artificial intelligence (AI) poses significant challenges due to several fundamental obstacles. Limited access to the internet is a major barrier, with many countries in Africa having 95% broadband coverage, but less than 20% of the population experiencing the network effects of being online.
This indicates that the lack of internet connectivity severely hampers the potential benefits of AI. Additionally, the high cost of devices is a crucial factor preventing a large portion of the population from acquiring the necessary technology to access the internet and engage with AI applications.
Moreover, rural location is a greater hindrance to access than gender, further exacerbating the digital divide in Africa. Education emerges as a key driver of digital readiness and the ability to absorb AI applications in Africa. Access to education directly impacts individuals’ affordability of devices, thereby influencing their ability to engage with AI technology.
Consequently, investing in education is crucial for enhancing digital readiness and facilitating successful AI adoption in Africa. The African Union Data Policy Framework plays a critical role in creating an enabling environment for AI in Africa. The framework recognizes the significance of digital infrastructure in supporting the African continental free trade area and provides countries with a clear action plan, alignment, and implementation support.
This framework aims to overcome the challenges faced in achieving digital readiness for AI in Africa. Addressing data governance challenges and managing the implications of AI require global cooperation. Currently, 90% of the data extracted from Africa goes to big tech companies abroad, necessitating the development of global governance frameworks to effectively manage digital public goods.
Collaboration on an international scale is essential to ensure that data governance supports AI development while protecting the interests and sovereignty of African nations. Structural inequalities pose a significant challenge to equal AI implementation. When AI blueprints from countries with different political economies are implemented in other societies, inequalities are deepened, leading to the perpetuation of inequitable outcomes.
Ethical concerns surrounding AI are also raised, highlighting the role played by major tech companies, particularly those rooted in the world’s most prominent democracies. Ethical challenges arise from these companies’ actions and policies, which have far-reaching implications for AI development.
An additional concern is the presence of bias and discrimination in AI algorithms due to the absence of digitization in some countries. In certain nations, such as Sri Lanka, where there is a lack of full digitization, people remain offline, resulting in their invisibility, underrepresentation, and discrimination in AI algorithms.
This highlights the inherent limitations of AI datasets in being truly unbiased and inclusive, as they rely on digitized data that may exclude significant portions of the global population. In conclusion, African countries face several challenges in achieving digital readiness for AI, including limited internet access, high device costs, and rural location constraints.
Education plays a crucial role in enhancing digital readiness, while the African Union Data Policy Framework provides an important foundation for creating an enabling environment. Addressing data governance challenges and managing the implications of AI require global cooperation and collaboration.
Structural inequalities and ethical concerns pose significant risks to the equitable implementation of AI. Additionally, the absence of digitization in some countries leads to bias and discrimination in AI algorithms.
Audience
Speech speed
143 words per minute
Speech length
730 words
Speech time
306 secs
Arguments
Countries are not yet ready for implementing AI, due to vast differences in democratic processes and understanding of ethical practices.
Supporting facts:
- Countries with non-democratic processes struggling to grasp AI ethics
- Even Netherlands faces issues despite being relatively democratic
Topics: AI implementation, Ethics, Democratic processes
The need for governments to ensure that technical infrastructure and workforce skills can adapt to new AI technologies as they emerge.
Supporting facts:
- Rapid advances in AI capabilities require infrastructure and skills agility
Topics: Government policies, Technical infrastructure, Workforce skills, AI technologies
Report
Countries around the world are facing significant challenges in implementing artificial intelligence (AI) due to variations in democratic processes and understanding of ethical practices. The differences in governance structures and ethical frameworks make it difficult for countries with non-democratic processes to effectively grasp and navigate the complexities of AI ethics.
Even in relatively democratic countries like the Netherlands, issues arise due to these disparities. Furthermore, many countries are hastily rushing to implement AI without giving due consideration to important factors such as data quality, data collection, and data protection and privacy laws.
The focus seems to be on implementing AI algorithms without laying down the necessary core elements required for a successful transition to AI-driven systems. This is a cause for concern, particularly in most countries in the global south where data protection and privacy laws are often inadequate.
The lack of adequate data quality and collection mechanisms, coupled with inadequate data protection and privacy laws, raises serious concerns about the safety and integrity of AI systems. Without proper measures in place, there is a risk of bias, discrimination, and potential misuse of data, which can have far-reaching consequences for individuals and societies.
In order to address these challenges, governments must recognize the need to ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge. The rapid advances in AI capabilities require a proactive approach in developing the necessary infrastructure and upskilling the workforce to keep up with the evolving technology.
In conclusion, the implementation of AI is hindered by variations in democratic processes and understanding of ethical practices among countries. Rushing into AI implementation without addressing critical issues such as data quality and protection can lead to significant problems, particularly in countries with insufficient data protection and privacy laws.
Governments play a crucial role in fostering appropriate technical infrastructure and developing the necessary skills to effectively navigate the challenges posed by AI technologies.
Denise Wong
Speech speed
165 words per minute
Speech length
1029 words
Speech time
375 secs
Arguments
Singapore has taken a human-centric approach to AI
Supporting facts:
- Policy has been inclusive focusing on digital readiness and adoption within communities
- Model governance framework aligned to OECD principles
Topics: AI governance, Singapore AI strategy, Inclusion, Digital technology adoption
Singapore has adopted a multi-stakeholder approach in developing AI governance
Supporting facts:
- Got feedback from more than 60 companies from different sectors both domestic and international for the first iteration of the model governance framework
- Worked with World Economic Forum Center for the Fourth Industrial Revolution for ISAGO
- Issued a discussion paper on Gen-AI written together with a local company
Topics: AI governance, Multi-stakeholder approach, OECD principles
Singapore has prioritized practical, comprehensive and detailed guidance for AI governance
Supporting facts:
- Created a compendium of use cases for local and international organizations
- Created ISAGO, an implementation and self-assessment guide for companies
- Created AI Verify Foundation, an open source foundation that provides an AI toolkit
Topics: AI governance, Practical AI implementation, AI standards and benchmarking
Denise is concerned about fragmentation in global laws about AI.
Supporting facts:
- She mentions that fragmentation in global laws raises compliance costs.
Topics: Artificial Intelligence, Global Laws, Compliance
Denise worries about technology widening divides and negatively affecting vulnerable groups.
Supporting facts:
- She uses the example of Singapore, a highly connected society, where there is a fear of some groups being left behind due to technology.
Topics: Technology, Social Divide, Vulnerable Groups
Denise is apprehensive about cultural and ethnic sensitivities in conjunction with black box technology.
Supporting facts:
- She states that it’s unpredictable whether technology will fragment or unify communities, particularly in terms of ethnic and cultural sensitivities.
Topics: Culture, Ethnicity, Black Box Technology
Report
Singapore has taken a human-centric and inclusive approach to AI governance, prioritising digital readiness and adoption within communities. This policy aims to ensure that the benefits of AI are accessible and beneficial to all members of society. The model governance framework developed by Singapore aligns with OECD principles, demonstrating their commitment to ethical and responsible AI practices.
In adopting a multi-stakeholder approach, Singapore has sought input from a diverse range of companies, both domestic and international. They have collaborated with the World Economic Forum Center for the Fourth Industrial Revolution on ISAGO (the Implementation and Self-Assessment Guide for Organisations) and have worked with a local company to write a discussion paper on Gen-AI.
This inclusive approach allows for a variety of perspectives and fosters collaboration between different stakeholders in the development of AI governance. Practical guidance is a priority for Singapore in AI governance. They have created a compendium of use cases that serves as a reference for both local and international organisations.
Additionally, they have developed ISAGO, an implementation and self-assessment guide for companies to ensure that they adhere to best practices in AI governance. Furthermore, Singapore has established the AI Verify Foundation, an open-source foundation that provides an AI toolkit to assist organisations in implementing AI in a responsible manner.
Singapore recognises the importance of international alignment and interoperability in AI governance. They encourage alignment with international organisations and other governments and advocate for an open industry focus on critical emerging technologies. Singapore believes that future conversations in AI governance will revolve around international technical standards and benchmarking, which will facilitate cooperation and harmonisation of AI practices globally.
However, concerns are raised about the fragmentation of global laws surrounding AI; compliance costs can increase when laws are fragmented, which could hinder the development and adoption of AI technologies. Singapore acknowledges the need for a unified framework and harmonised regulations to mitigate these challenges.
Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening divides and negatively affecting vulnerable groups. Singapore, being a highly connected society, is aware of the possibility of certain groups being left behind. Bridging these divides and ensuring that technology is inclusive and addresses the needs of vulnerable populations is a priority in their AI governance efforts.
Cultural and ethnic sensitivities in conjunction with black box technology are also a concern. It is unpredictable whether technology will fragment or unify communities, particularly in terms of ethnic and cultural sensitivities. Singapore acknowledges the importance of considering a culturally specific perspective to understand the potential impacts of AI better.
In conclusion, Singapore’s approach to AI governance encompasses human-centricity, inclusivity, and practical guidance. Their multi-stakeholder approach ensures a diversity of perspectives, and they prioritise international alignment and interoperability in AI governance. While concerns exist regarding the fragmentation of global laws and the potential negative impacts on vulnerable groups and cultural sensitivities, Singapore actively addresses these issues to create an ethical and responsible AI ecosystem.
Dr. Romesh Ranawana
Speech speed
174 words per minute
Speech length
1246 words
Speech time
429 secs
Arguments
Sri Lanka has a low level of AI readiness and capacity, lagging behind many countries
Supporting facts:
- Sri Lanka has just embarked on the journey of improving AI readiness
Topics: AI readiness, AI capacity, Digital Transformation
Engagement in AI development must be a national initiative, not limited to the private sector or select universities
Supporting facts:
- Developed and middle-income countries have been formulating national AI policies in recent years
Topics: National AI Strategy, Government Involvement
Proposed AI projects do not proceed beyond conceptual stage in Sri Lanka
Supporting facts:
- Over 300 AI projects were conducted by university students in Sri Lanka last year, but none went into production
Topics: AI Development, Project Implementation
Sri Lanka is in the process of developing an AI policy and strategy expected to roll out in November and April 2024 respectively
Supporting facts:
- The government of Sri Lanka took the initiative to set up the Presidential Task Force to look at national AI policy
Topics: AI Policy, National AI Strategy
Establishing a sustainable, inclusive, open digital ecosystem is a primary goal
Supporting facts:
- UNDP is working on an AI readiness assessment for Sri Lanka
Topics: Digital Ecosystem, Inclusion, Sustainability
The lack of standardised, digitised data is a primary obstacle to AI advancement
Supporting facts:
- Data is extremely siloed and still available in paper format in many situations
- the challenge is not only technical, but also operational, encompassing a change in mindsets, awareness, and trust
Topics: AI advancement, data management, government efficiency
Report
Sri Lanka is currently facing challenges in terms of its AI readiness and capacity, which puts it behind many other countries in this field. The country has just begun its journey towards improving AI readiness and it lags behind in terms of both readiness and capacity.
However, the government of Sri Lanka has recognised the importance of AI development and has taken the initiative to develop a national AI policy and strategy. This is expected to be rolled out in November and April 2024 respectively. The government understands that engagement in AI development should not be limited to the private sector or select universities, but it needs to be a national initiative involving various stakeholders.
Currently, AI projects in Sri Lanka face challenges in terms of their implementation. Although over 300 AI projects were conducted by university students in the country last year, none of them went into production. The proposed AI projects in Sri Lanka often do not progress beyond the conceptual stage.
This highlights the need for better infrastructure and support to bring these projects to fruition. One of the primary obstacles to AI advancement in Sri Lanka is the lack of standardized and digitized data. Data is often siloed and still available in paper format, making it difficult to utilize it effectively for AI applications.
This challenge is not just technical but also operational, requiring a change in mindsets, awareness, and trust. Efforts to develop AI projects are being wasted due to the absence of consolidated data sets that address national problems. In order to overcome these challenges, Sri Lanka aims to establish a sustainable, inclusive, and open digital ecosystem.
The United Nations Development Programme (UNDP) is working on an AI readiness assessment for Sri Lanka. This assessment will help identify areas that need improvement and provide recommendations to establish an ecosystem that fosters AI development. In conclusion, Sri Lanka is in the early stages of improving its AI readiness and capacity.
The government is taking an active role in formulating a national AI policy and strategy. However, there are challenges in terms of implementing AI projects, primarily due to the lack of standardized and digitized data. Efforts are being made to address these challenges and establish a sustainable digital ecosystem that supports AI development.
Galia Daor
Speech speed
162 words per minute
Speech length
957 words
Speech time
355 secs
Arguments
OECD has been working on artificial intelligence since 2016 and adopted the first intergovernmental standard on AI, the OECD AI principles, in 2019
Supporting facts:
- OECD started working on artificial intelligence in 2016
- In 2019, OECD adopted the first intergovernmental standard on artificial intelligence, the OECD AI Principles
Topics: Artificial Intelligence, OECD AI Principles
OECD AI principles consist of a set of five values-based principles for all AI actors and five policy recommendations for governments
Supporting facts:
- OECD AI principles consist of a set of five values-based principles that apply to all AI actors, and a set of five policy recommendations for governments, for policymakers
Topics: Artificial Intelligence, Government Policy, OECD AI Principles
OECD provides support to countries in implementing the AI principles and translating them into practice
Supporting facts:
- OECD is focusing on how to support countries to implement the AI principles
- OECD is developing practical tools for governments and organizations to implement the AI principles
Topics: Artificial Intelligence, Government Policy, OECD AI Principles
AI development ought to be a global, collaborative effort
Supporting facts:
- AI itself is global, as it has no border
- AI’s potential impact is too great to be left to certain companies or countries.
Topics: AI, Collaboration, Global Effort
Opportunities from AI might go unrealized if not approached correctly
Supporting facts:
- Potential of AI could make things better for everyone if done right
Topics: AI, Opportunity, Unrealized Potential
Report
The Organisation for Economic Cooperation and Development (OECD) has been actively involved in the field of artificial intelligence (AI) since 2016. They adopted the first intergovernmental standard on AI, called the OECD AI Principles, in 2019. These principles consist of five values-based principles for all AI actors and five policy recommendations for governments and policymakers.
The five values-based principles of the OECD AI Principles focus on fairness, transparency, accountability, and human-centrality. They aim to ensure that AI systems respect human rights, promote fairness, avoid discrimination, and maintain accountability. The OECD aims to establish a global framework for responsible AI development and use.
The OECD AI Principles also provide policy recommendations to assist governments in developing national AI strategies that align with the principles. The OECD supports countries in adapting and revising their AI strategies according to these principles. In addition, the OECD emphasizes the need for global collaboration in AI development.
They believe that AI should not be controlled solely by specific companies or countries. Instead, they advocate for a global approach to maximize the potential benefits of AI and ensure equitable outcomes. While the OECD is optimistic about the positive changes AI can bring, they express concerns about the fragmentation of AI development.
They highlight the importance of cohesive efforts and coordination to avoid hindering progress through differing standards and practices. To conclude, the OECD’s work on AI focuses on establishing a global framework for responsible AI development and use. They promote principles of fairness, transparency, and accountability and provide support to countries in implementing these principles.
The OECD also emphasizes the need for global collaboration and acknowledges the potential challenges posed by fragmentation in AI development.
Jingbo Huang
Speech speed
163 words per minute
Speech length
400 words
Speech time
148 secs
Arguments
Jingbo Huang emphasizes the need for collective intelligence, both among humans and between humans and machine intelligence.
Supporting facts:
- He sees potential for AI and human intelligence working together to address challenges, not just the dark side.
- He sees the need for entities among the human population to work together to prepare for this technological integration.
Topics: Artificial Intelligence, AI Governance, Collective Intelligence
Report
Jingbo Huang places significant emphasis on the importance of collective intelligence in both human-to-human and human-to-machine interactions. He recognizes the potential for artificial intelligence (AI) and human intelligence to work in unison to tackle challenges, highlighting the positive aspects of this partnership rather than focusing solely on the negatives.
Huang emphasizes the need for collaboration and preparation among human entities to ensure the integration of AI into society benefits all parties involved. Huang further expresses curiosity about the collaboration between different AI assessment tools developed by various organizations. Specifically, he mentions the UNDP’s AI readiness assessment tool and raises questions about how it aligns or interacts with tools developed by the OECD, Singapore, Africa, and others.
This indicates Huang’s interest in exploring potential synergies and knowledge-sharing among these assessment tools. Additionally, Huang demonstrates an interest in understanding the challenges faced by panelists during AI conceptualization and implementation. Although specific supporting facts are not provided, this suggests Huang’s desire to explore the obstacles encountered in bringing AI projects to fruition.
By examining these challenges, he aims to acquire knowledge that can help overcome barriers and facilitate the successful integration of AI into various industry sectors. In summary, Jingbo Huang underscores the significance of collective intelligence, both within human-to-human interactions and between human and machine intelligence.
Huang envisions a collaborative approach that leverages the strengths of both AI and human intelligence to address challenges. He also shows a keen interest in exploring how different AI assessment tools can work together, seeking to identify potential synergies and compatibility.
Moreover, he expresses curiosity about the challenges faced during the AI conceptualization and implementation process. These insights reflect Huang’s commitment to fostering mutual understanding, collaboration, and effective utilization of AI technologies.
Robert Opp
Speech speed
172 words per minute
Speech length
3039 words
Speech time
1062 secs
Arguments
There is tremendous potential in embracing AI to make significant progress against the SDGs.
Supporting facts:
- 70% of the SDG targets could actually be positively impacted with the use of technology.
- A report by UNDP and ITU found that digital technology could accelerate progress towards SDGs
Topics: Artificial Intelligence, Sustainable Development Goals
Countries are at different stages of digital transformation and face various challenges in adopting AI
Supporting facts:
- UNDP works in 170 countries across various thematic areas, including governance, climate, energy, resilience and gender
- Many countries are unaware of the models of AI or lack the foundational knowledge
Topics: Digital Transformation, Artificial Intelligence
AI is a general-purpose technology and choosing which sectors to focus on initially is challenging
Supporting facts:
- AI can affect just about any sector, from education to health sector to the national economy, government services.
- Sri Lanka needs to pick battles to address initially with the AI policy due to limited resources.
Topics: AI policy, AI readiness, Sri Lanka
Implementation and sustainability are key considerations in Sri Lanka’s AI strategy
Supporting facts:
- Developing a policy and a strategy is one thing, but the key element for Sri Lanka is how the execution happens and how to make it sustainable.
- Policy should not be put aside when governments change or priorities of the government change.
Topics: AI policy, AI readiness, Sri Lanka
UNDP is supporting digital programming in about 125 countries, focusing on national digital transformation processes
Supporting facts:
- 40 to 50 of these countries are looking at the foundational elements of digital transformation
- UNDP is working on building an ecosystem comprising of people, regulatory side, government side, business side as well as underlying connectivity and affordability
Topics: UNDP initiatives, national digital transformation, digital programming
UNDP has initiated the AI readiness process to complement digital programming
Supporting facts:
- The AI readiness process is being piloted in Sri Lanka, Rwanda, and Colombia currently
- The process is in the context of an inter-agency UN process led by ITU and UNESCO
- It views the government as an enabler of AI and probes the societal setup for handling AI
Topics: UNDP initiatives, AI readiness, digital programming
Challenges in implementing AI include fragmentation, getting the foundations right, financing issues, and ensuring representation and diversity
Supporting facts:
- Fragmentation and foundation issues have been mentioned as concerns by the panelists
- AI is only as good as the data that trains it
Topics: Artificial Intelligence, Fragmentation, Data Representation, Diversity
Promotes the discussion and tackle on Artificial Intelligence as it is a unique opportunity for human progress if done correctly.
Supporting facts:
- Artificial intelligence is a major opportunity for humanity
Topics: Artificial Intelligence, Human Progress
Report
Embracing artificial intelligence (AI) has the potential to make significant progress towards achieving the Sustainable Development Goals (SDGs), according to a report by the UN Development Programme (UNDP) and ITU. The report highlights the positive impact that digital technology, including AI, could have on 70% of the SDG targets.
However, the adoption of AI varies among countries due to their differing stages of digital transformation and the challenges they face. For instance, Sri Lanka requires a national-level initiative to build AI readiness and capacity, as this cannot be achieved solely at the corporate or private sector level.
Other countries have recognized this and have implemented national-level initiatives. UNDP is actively involved in supporting digital programming and has initiated the AI readiness process in Sri Lanka, Rwanda, and Colombia. This process aims to complement national digital transformation processes and views the government as an enabler of AI.
Challenges in implementing AI include fragmentation, financing, ensuring foundation issues are addressed, and representation and diversity. Fragmentation and foundational issues have been identified as concerns, as AI is only as good as the data it is trained on. Additionally, financing issues may hinder the effective implementation of AI, and it is crucial to ensure representation and diversity to avoid bias and promote fairness.
Advocates argue for a multi-stakeholder and human-centered approach to AI development as a method of risk management. This approach emphasizes the importance of including various worldviews and cultural relevancy in the development process. The report also highlights the need for inclusivity and leaving no one behind in the journey towards achieving the SDGs.
It champions working with indigenous communities, who represent different worldviews, to ensure that every individual has the opportunity to realize their potential. In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful consideration must be given to address challenges such as fragmentation, financing, foundation issues, and representation and diversity.
By adopting a multi-stakeholder and human-centered approach, AI can be harnessed effectively and inclusively to drive sustainable development and improve the lives of people worldwide.