A Digital Future for All (afternoon sessions)
21 Sep 2024 14:30h - 17:00h
Session at a Glance
Summary
This discussion focused on the Global Digital Compact (GDC) and the role of artificial intelligence (AI) in shaping a digital future that benefits humanity. The event brought together leaders from government, technology, civil society, and international organizations to explore how to harness digital technologies and AI for sustainable development while addressing potential risks.
Key themes included the importance of inclusivity, bridging the digital divide, and ensuring AI governance is rooted in human rights. Speakers emphasized the need for multi-stakeholder cooperation and global governance frameworks to guide AI development. The United Nations was highlighted as uniquely positioned to facilitate this process due to its global reach and legitimacy.
Participants discussed both the transformative potential of AI to accelerate progress on sustainable development goals and the need to mitigate risks like bias, privacy concerns, and potential misuse. The importance of building capacity, especially in developing countries, was stressed to prevent an “AI divide” from emerging.
Recommendations from the UN’s High-Level Advisory Body on AI were presented, including proposals for a global AI capacity network, an international scientific panel on AI, and mechanisms to foster inclusive AI development. Speakers noted the urgency of action, given AI’s rapid advancement.
The discussion concluded on an optimistic note, with participants expressing hope that early engagement on AI governance could help steer the technology towards benefiting humanity. However, they emphasized that sustained effort and cooperation would be needed to realize this vision of an inclusive, sustainable digital future for all.
Key points
Major discussion points:
– The importance of developing AI and digital technologies in an inclusive, ethical way that benefits all of humanity
– The need for global cooperation and governance frameworks for AI, with the UN playing a key role
– Bridging the digital divide and ensuring developing countries can participate in and benefit from AI advancements
– Balancing the opportunities of AI with potential risks and challenges
– Implementing the Global Digital Compact and moving from principles to concrete actions
Overall purpose/goal:
The discussion aimed to highlight the transformative potential of AI and digital technologies while emphasizing the need for responsible development and governance to ensure these technologies benefit all of humanity. It sought to build momentum for global cooperation on AI governance through initiatives like the Global Digital Compact.
Tone:
The overall tone was optimistic and forward-looking, with speakers emphasizing the positive potential of AI while acknowledging challenges. There was a sense of urgency about the need to act quickly to shape AI’s development. The tone became more action-oriented towards the end, focusing on next steps and implementation.
Speakers
Moderators/Facilitators:
– Redi Thlabi – Journalist and TV Host, Al Jazeera English
– Tumi Makgabo – In Africa World Wide Media
Speakers:
– Ian Bremmer – Political Scientist, President of Eurasia Group and GZERO Media
– Ebba Busch – Minister for Energy, Business and Industry and Deputy Prime Minister of Sweden
– Sundar Pichai – CEO, Google and Alphabet
– Felix Mutati – Minister of Technology and Science, Zambia
– Margrethe Vestager – Executive Vice-President of the European Commission
– Rebeca Grynspan – Secretary-General, United Nations Trade and Development (UNCTAD)
– Omar Al Olama – Minister of State for Artificial Intelligence, Digital Economy and Remote Work in the United Arab Emirates
– Josephine Teo – Minister for Digital Development and Information, Singapore
– Nnenna Nwakanma – Digital Policy, Advocacy and Cooperation Strategist
– Carme Artigas – Former Secretary of State for Digitalisation and AI of Spain and Co-Chair of the Secretary-General’s High-level Advisory Body on Artificial Intelligence
– James Manyika – Senior VP, Google-Alphabet and Co-Chair of the Secretary-General’s High-level Advisory Body on Artificial Intelligence
– Vilas Dhar – President and Trustee, Patrick J. McGovern Foundation
– Jian Wang – CTO and Founder, Alibaba Cloud
– Volker Türk – UN High Commissioner for Human Rights (OHCHR)
– Alondra Nelson – Harold F. Linder Professor, Institute for Advanced Study
– Mokgweetsi Masisi – President of Botswana
– Amandeep Singh Gill – UN Secretary-General’s Envoy on Technology
– Achim Steiner – Administrator of UNDP
– Doreen Bogdan-Martin – Secretary-General of the ITU
The speakers represent a diverse range of expertise including government leadership, technology industry executives, civil society representatives, academics, and leaders of international organizations. Their areas of focus include artificial intelligence, digital development, human rights, sustainable development, and global governance.
Full session report
The Global Digital Compact and AI Governance: Shaping a Digital Future for All
This high-level discussion brought together diverse leaders from government, technology, civil society, and international organizations to explore the role of artificial intelligence (AI) in shaping an inclusive digital future. The conversation centered on the Global Digital Compact (GDC) and the need for responsible AI development and governance to benefit all of humanity.
Key Themes and Agreements
1. The Global Digital Compact as a Foundation for AI Governance
There was broad consensus on the importance of the Global Digital Compact as a starting point for global AI governance. Speakers like Carme Artigas and Omar Al Olama emphasized the unique position of the United Nations to lead this effort. James Manyika stressed the need for a multi-stakeholder approach, which was echoed by other participants. Volker Türk noted that the GDC builds on existing human rights frameworks, stating, “The Global Digital Compact is firmly anchored in human rights.”
2. AI’s Potential for Sustainable Development
Speakers agreed on AI’s transformative potential to accelerate progress on Sustainable Development Goals. Felix Mutati highlighted AI’s ability to transform lives in rural areas, saying, “AI has the potential to leapfrog development.” However, many stressed the need to bridge the digital divide to prevent an AI divide, emphasizing the importance of building AI capacity in developing countries.
3. Balancing Innovation and Risk Mitigation
There was general agreement on the need for a balanced approach to AI governance that promotes innovation while mitigating risks. Margrethe Vestager emphasized the importance of enforceable AI regulation, while Carme Artigas highlighted the need to balance innovation and risk mitigation.
4. Human Rights and Community Engagement
Speakers like Volker Türk and Alondra Nelson emphasized the importance of grounding AI governance and development in existing human rights frameworks. Vilas Dhar highlighted the importance of community engagement in AI development, challenging the typical narrative of top-down control in governance.
5. Scientific Research and Understanding of AI
Multiple speakers, including James Manyika, Dr. Wang Jian, and Alondra Nelson, stressed the importance of scientific research to better understand AI systems and their impacts. Manyika proposed “a real-time scientific panel on AI developments,” while Nelson drew parallels to rapid scientific developments during the COVID-19 pandemic.
6. Role of the Private Sector
James Manyika and others discussed the crucial role of the private sector in AI governance. Manyika emphasized the need for collaboration, stating, “We need everybody at the table – governments, civil society, academia, and the private sector.”
7. Capacity Building and Infrastructure
Many speakers emphasized the importance of capacity building and infrastructure development for AI in developing countries. Nnenna Nwakanma’s statement, “Connect the schools. Connect the young people. Connect my children,” refocused the conversation on practical, human-centered outcomes of digital development.
Key Recommendations and Action Items
1. Recommendations from the UN High-Level Advisory Body on AI, as discussed by Ian Bremmer and panelists, including:
– Establishing a global fund for AI for sustainable development
– Creating an international scientific panel on AI
– Developing a global AI capacity-building program
2. Proposal to make an online platform available for public input on the Global Digital Compact after its adoption
3. Emphasis on building AI capacity and infrastructure in developing countries to prevent an AI divide
4. Focus on sustainable and ethical AI development practices, as highlighted by Alondra Nelson
5. Plan to potentially adopt the Global Digital Compact at the upcoming Summit of the Future
Thought-Provoking Insights
1. Vilas Dhar reframed governance as a collaborative process involving multiple stakeholders, not just governments and tech companies.
2. Mokgweetsi Masisi highlighted the interconnection between digital divides, global inequality, and gender disparities.
3. Alondra Nelson acknowledged the limitations of current knowledge about AI systems, emphasizing the need for ongoing research and understanding.
Unresolved Issues and Future Directions
Despite the productive discussion, several issues remain to be addressed:
1. Specific mechanisms for enforcing AI governance globally
2. Details on implementation of the proposed global fund on AI
3. How to effectively balance AI development with sustainability and climate concerns
4. Concrete steps to ensure AI benefits reach marginalized communities
In conclusion, the discussion demonstrated a high level of consensus on fundamental principles and goals for AI governance, providing a strong foundation for global cooperation. The conversation evolved from high-level policy discussion to consideration of concrete actions and their impacts on diverse communities, particularly in the Global South. The Global Digital Compact emerged as a crucial starting point for global AI governance, with emphasis on multi-stakeholder involvement, scientific research, capacity building, and human rights-centered approaches. As Amandeep Singh Gill noted, “The Global Digital Compact is our chance to shape our digital future.” The stage is set for continued dialogue and action on shaping an inclusive, sustainable digital future for all.
Session Transcript
Redi Thlabi: I think the applause was loudest this side. You’re very generous. Thank you. Good afternoon. Honored delegates, ladies and gentlemen. My name is Redi Thlabi. I’m a broadcast journalist, a moderator, an MC from Johannesburg, South Africa, delighted to be a visitor in the United States. I noticed that when the lunch break was announced, many of you did not leave. That tells me that you were in this room this morning when the answer to why we are here was provided. In the morning, we saw the real impact of digital tools, of artificial intelligence enabling human flourishing. Who can forget Adit, a young lady who grew up in a refugee camp, but she was able to access learning. She was able to connect with other young people from other parts of the world because she had the technology to do so. Who can forget how we witnessed the ability to get mobility after an acute injury. The mobility that you and I take for granted, but when you lose it, you need technology, you need innovation to help you be a part of the global community. You were in this room when we saw how technological tools can be enabled to respond to the planetary crisis that we are all facing today. That’s what happened this morning. So what are we doing this afternoon? We are here to ensure that those case studies that we heard about in the morning are not just the exception, but they become the norm. We are here to renew our commitments, to find solutions to the crises that we face, to ensure that we create a global digital architecture, a compact that is human-centered, that is secure, that is efficient, that is accessible to all. Because if we don’t do this, we create other frontiers of inequality. I come from Africa, I’m a part of the Global South, and we see very much how often we feel as if the world is advancing without us, even though we have the expertise, the agency, the tools, the willingness. But without the investment, without being invited into the table as we find these digital solutions, then this inequality will deepen. And so we convene today at a very hopeful moment. In a few hours, the Global Digital Compact may just become a reality. You will hear a lot about it. It has several themes that resonate. It’s about collaboration, creating policy, bringing all the stakeholders together to ensure that the case studies that we heard about in the morning become a global norm so that we all become citizens of a world where technology and AI are accessible, they are free, they are secure, and they are rooted, they are rooted in human flourishing. That’s what today is all about. But to situate us in the moment, let’s watch this very short video about the Global Digital Compact just to get a sense of the process and how it unfolded.
Official Video: GDC has been a very optimistic and constructive process during the past 18 months with broad participation from multi-stakeholders. And with GDC, we see that every country and every member state of the United Nations will have better possibilities of implementing the SDG agenda. Co-facilitators of the Global Digital Compact are so excited that we’ve come to this moment where we can actually indulge the Global Digital Compact. We as co-facilitators have engaged with yourselves. over many many hours. Over hundreds, thousands of delegates have put in their work and now it’s time to really look at this document and adopt it. And so we’re very excited that we’ve really come to this point and welcome you to this event. Thank you very much. The Global Digital Compact provides an opportunity to close the digital divide. It also provides an opportunity for Africa to engage as well as civil society organizations to engage way better at the United Nations level. The Global Digital Compact should be implemented through a multi-stakeholder process so that everyone, everywhere, can thrive in the age of AI. Governments must protect and support the people who build and govern digital public goods, like Wikipedia, which is run by volunteers who share knowledge in over 300 languages. Thank you very much for this outstanding opportunity to share with all of you how private and public collaboration can help achieve the goals of the Global Digital Compact. We at TIGO, we build broadband networks across all the communities we operate in. We call them digital highways because they provide the highways that bring our communities to the digital economy and it takes the work of everyone involved, public, private sector, everyone, so that those digital highways get built for the betterment of our communities are for the inclusion of everyone in them into the digital economy of the 21st century. Let’s make it happen together. I’m delighted to welcome the Global Digital Compact and to see that children’s rights are at the heart of this declaration. Children’s charities across the world have collaborated closely with co-facilitators and the UN Tech Envoy for two years to shape this important compact. We welcome that it now underscores a unified commitment for children’s rights and safety. I hope all will live by its words and will move from words to action. States have made bold commitments. They must now translate them into concrete actions. Equally, tech companies must not be exempt and be held accountable for the services they deliver to children. The Global Digital Compact has been a crucial platform for diverse stakeholders like me to come together and shape the future of a digital world that benefits everyone. It has fostered a sense of shared responsibility and ownership. I believe that the GDC we contributed will play a vital role in shaping a digital world.
Redi Thlabi: Thank you very much. Thank you. You will have an opportunity to make your inputs to ensure that the Global Digital Compact becomes a reality. Once it’s been adopted by world leaders, the online platform will be available tomorrow and you can share your inputs. Ladies and gentlemen, please help me welcome the Deputy Prime Minister of Sweden, Ebba Busch.
Ebba Busch: Excellencies, distinguished colleagues, ladies and gentlemen. I was suggesting earlier here when we were waiting for things to start, soon someone has to get up on stage and start singing. I’m not gonna sing here today but we’re going to talk about the digital era that we have just entered fully on now. And we’re living in an era where digital and emerging technologies, where they’re really reshaping almost every single aspect of our lives. The digital transformation presents us with unprecedented opportunities to really accelerate our work towards the achievement of the Sustainable Development Goals. To fulfil those opportunities, we need to cooperate across all levels, and certainly, of course, including the UN. Sweden has, together with Zambia, had the honour of facilitating the negotiations on the Global Digital Compact that we are soon going to adopt. The Compact outlines our collective commitment to a digital future that is inclusive, that is open, that is sustainable, fair, safe and secure. And it seeks to close those digital divides and accelerate progress across the Sustainable Development Goals. Sweden is my home country, and Sweden is also home to some of the most innovative companies in the world that are enabling and driving the global digital transition forward. To truly harness this power of digital technology for a better and more sustainable future, we need an approach that involves all stakeholders. It is only by bringing together the excellent researchers, innovative companies, efficient authorities and multilateral organisations that we can create a well-functioning innovation system that works for everyone. Artificial intelligence, AI, plays a central role in this context. It has the potential to revolutionise how we work, learn and connect with one another. Yet, we must also acknowledge the challenges and risks that come with it. Of course, like so many of the new emerging technologies, AI can be used for both good and for harm. This is why it is crucial that we work together to establish common norms and governance structures that guide the use of AI in such a way that it truly, truly benefits humanity. And at the same time, limit its proliferation into areas of use that may threaten our common security, development, and future. We need a global conversation to build a shared understanding of both the opportunities and the challenges of AI. And in this regard, I would really like to emphasize the Compact’s initiative to launch a global dialogue on AI governance, which engages governments and stakeholders in developing standards that prioritize human rights, that prioritize safety and sustainability. Increased investment will be crucial to scale up AI capacity building for sustainable development. Taking into account the recommendations of the High-Level Advisory Body on Artificial Intelligence, the GDC encourages the establishment of a global fund on AI that is complementary to relevant UN funding mechanisms. Additionally, an international scientific panel on AI could offer valuable guidance to the global community on AI development. Sweden has long championed an open, free, and secure internet. And we believe that digital technology should be used to strengthen human rights. We have a responsibility to turn our vision of a digital future into concrete actions that make a real difference.
This means we must collaborate across borders and sectors, and we must all take responsibility to ensure that the digital transformation benefits everyone. Sweden is committed to continuing its leadership in this global process, and we look forward to working with all of you to unlock the potential of digitalization and to ensure that we build a future where digital technology truly serves all of humanity. And with that, I’d like to end with somewhat of a more personal reflection and personal note as a citizen of the world, as a mother of two. My two children back home in Sweden, they’re named Elise and Birger, they’re seven and nine years old. I was this much pregnant when I got elected party leader for my party 10 years ago. And I’m happy and I’m proud to be able to say to them, because they are now, I mean, they are the generation that are growing up not knowing what life was like before internet, you know? Can you imagine? And I’m proud to be able to say to them that we are now truly taking their rights in the digitalized era seriously, because I’ve said so many times that a childhood in freedom requires safety online. And thank you. And it really is so. We’ve said it so many times, but you can’t say it enough times. Children’s rights are human rights. Women’s rights are human rights. And we are now bringing human rights and the Sustainable Development Goals online, finally. Thank you.
Redi Thlabi: Deputy Prime Minister, thank you for your energy and inspiring case studies that you shared. Without much ado, let us hear another keynote this afternoon from the CEO of Google, Sundar Pichai.
Sundar Pichai: Mr. Secretary General, President of the General Assembly, Excellencies, ladies and gentlemen, it’s a privilege to join you today. I am energized by the Summit’s focus on the future. We have a once-in-a-generation opportunity to unlock human potential for everyone, everywhere. I believe that technology is a foundational enabler of progress. Just as the Internet and mobile devices expanded opportunities for people around the world, now AI is poised to accelerate progress at unprecedented scale. I’m here today to make the case for three things. Why I believe AI is so transformative. How it can be applied to benefit humanity and make progress on the UN Sustainable Development Goals. And where we can drive deeper partnerships to ensure that the technology benefits everyone. But first, let me share why this is so important to me personally and to Google as a company. Growing up in Chennai, India with my family, the arrival of each new technology improved our lives in meaningful ways. Our first rotary phone saved us hours of travel to the hospital to get test results. Our first refrigerator gave us more time to spend as a family rather than rushing to cook ingredients before they spoil. The technology that changed my life the most was the computer. I didn’t have much access to one growing up. When I came to graduate school in the U.S., there were labs full of machines I could use anytime I wanted. It was mind-blowing. Access to computing inspired me to pursue a career where I could bring technology to more people. And that path led me to Google 20 years ago. I was excited by its mission to organize the world’s information and make it universally accessible and useful. That mission has had incredible impact. Google Search democratized information access, opened up opportunities in education and entrepreneurship. Platforms like Chrome and Android helped bring 1 billion people online. Today, 15 of our products serve more than half a billion people and businesses each, and 6 of them each serve more than 2 billion. There is no cost to use them, and most of our users are in the developing world. Today we are working on the most transformative technology yet, AI. We’ve been investing in AI research, tools, and infrastructure for two decades because it’s the most profound way we can deliver on our mission and improve people’s lives. I want to talk today about four of the biggest opportunities we see, many of which align with the SDGs. One is helping people access the world’s knowledge in their own language. Using AI, in just the last year we have added 110 new languages to Google Translate, spoken by half a billion people around the world. That brings our total to 246 languages, and we are working towards 1,000 of the world’s most spoken languages. A second area is accelerating scientific discovery to benefit humanity. Our AlphaFold breakthrough is solving big challenges in predicting some of the building blocks of life, including proteins. and DNA. We have opened up AlphaFold to the scientific community free of charge and it has been accessed by more than 2 million researchers from over 190 countries. 30% are in the developing world. For example, over 25,000 researchers just in Brazil. Globally, AlphaFold is being used in research that could help make crops more resistant to disease, discover new drugs in areas like malaria vaccines and cancer treatments and much more. 
A third opportunity is helping people in the path of climate-related disaster, building on the UN’s initiative, Early Warnings for All. Our Flood Hub system provides early warnings up to seven days in advance, helping protect over 460 million people in over 80 countries. And for millions in the path of wildfires, our boundary tracking systems are already in 22 countries on Google Maps. We also just announced FireSat technology, which will use satellites to detect and track early-stage wildfires, with imagery updated every 20 minutes globally so firefighters can respond. AI gives a boost in accuracy, speed and scale. Fourth, we see the opportunity for AI to meaningfully contribute to economic progress. It’s already enabling entrepreneurs and small businesses, empowering governments to provide public services, and boosting productivity across sectors. Some studies show that AI could boost global labor productivity by 1.4 percentage points and increase global GDP by 7% within the next decade. For example, AI is helping improve operations and logistics in emerging markets, where connectivity, infrastructure and traffic congestion are big challenges. Freight startup Gary Logistics in Ethiopia is using AI to help move goods to market faster and bring more work opportunities to freelance drivers. These are just very early examples, and there are so many others across education, health, and sustainability. As technology improves, so will the benefits. As with any emerging technology, AI will have limitations, be it issues with accuracy, factuality, and bias, as well as the risks of misapplication and misuse, like the creation of deep fakes. It also presents new complexities. For example, the impact on the future of work. For all these reasons, we believe that AI must be developed, deployed, and used responsibly from the start. We are guided by our AI principles, which we published back in 2018. And we work with others across the industry, academia, the UN, and governments in efforts like the Frontier Model Forum, the OECD, and the G7 Hiroshima process. But I want to talk about another risk that I worry about. I think about where I grew up and how fortunate I was to have access to technology, even if it came slowly. Not everyone had that experience. And while good progress has been made by UN institutions like the ITU, gaps persist today in the form of a well-known digital divide. With AI, we have the chance to be inclusive from the start and to ensure that the digital divide doesn’t become an AI divide. This is a challenge that needs to be met by the private sector and public sector working together. We can focus on three key areas. First is digital infrastructure. Google has made big investments globally in subsea and terrestrial fiber optic cables. One connects Africa with Europe. And two others will be the first intercontinental fiber optic routes. that connect Asia-Pacific and South America, and Australia and Africa. These fiber optic routes stitch together our network of 40 cloud regions around the world that provide digital services to governments, entrepreneurs, SMBs, and companies across all sectors. In addition to compute access, we also open up our technology to others. We did this with Android, and now our Gemma AI models are open to developers and researchers, and we’ll continue to invest here. A second area is about investing in people. That starts with making sure people have the skills they need to seize new opportunities. 
Our Grow with Google program has already trained 100 million people around the world in digital skills. And today, I’m proud to announce our Global AI Opportunity Fund. This will invest $120 million to make AI education and training available in communities around the world. We are providing this in local languages, in partnerships with nonprofits and NGOs. We are also helping to support entrepreneurs for the AI revolution. In Brazil, we worked with thousands of women entrepreneurs to use Google AI to grow their businesses. In Asia, where fewer than 6% of startups are founded by women, we are providing many with mentorship, capital, and training. The third area is one where we especially need the help of member countries and leaders in this room, creating an enabling policy environment, one that addresses both the risks and worries around new technologies, and also encourages the kind of applications that improve lives at scale. This requires a few things. Government policymaking that supports investments in infrastructure, people. and innovation that benefits humanity. Country development strategies and frameworks like the Global Digital Compact that prioritize the adoption of AI solutions. And smart product regulation that mitigates harms and resists national protectionist impulses that could widen an AI divide and limit AI’s benefits. We are excited to be your partner and to work with you to make sure bold innovations are deployed responsibly so that AI is truly helpful for everyone. The opportunities are too great, the challenge is too urgent, and this technology too transformational to do anything less. Thank you.
Redi Thlabi: Thank you very much to the CEO of Google, Sundar Pichai, for that very holistic picture of the potential, the risks, and the opportunities. Thank you. Now let’s get to the conversation. Let’s put some meat to it, as we say in my language at home. Let’s just give some meaning to the Global Digital Compact. How do we position ourselves to move from aspiration to action and to take us through that very important conversation? Here is a sister, a moderator, and an international broadcaster, my homegirl, Tumi Makgabo.
Tumi Makgabo: Thank you. All right, we got there in the end. Good afternoon, everybody. Reedy, thank you so very much for that introduction. I feel like we flew a long way to get together in New York, but it’s always a pleasure to be in. in this incredible, exciting, stimulating city. But more importantly, I think it’s really incredible to have the opportunity to be in a room where people are thinking about what tomorrow’s going to look like. How do we create a tomorrow that works for everybody who’s involved in tomorrow? Well, you’ve heard a little bit about the GDC, and in this following conversation, we’re going to try to unpack how do we take the idea, how do we take the thought, how do we take the intent of what the GDC is trying to create and make it real, give it life, breathe it into existence. It isn’t easy, it certainly will be a challenge, but I think it’s a challenge not only that we’re up for, but it’s a challenge that is important to ensure that the society and the world looks exactly the way we hope and intend. Now, ordinarily, I could safely stand up here all by myself, but I don’t think that’s going to be the most exciting thing for you to watch. So please assist me in giving a very, very, very warm welcome to the following. Felix Mutati, who is the Minister of Technology and Science in Zambia. Margrethe Vestager, who is Executive Vice President of the European Union. Rebeca Grynspan, who is the Secretary General of UNCTAD. Omar Al Olama, the Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications in the UAE. Josephine Teo, who is the Minister for Digital Development and Information in Singapore. And last, but most certainly not least, Nnenna Nwakanama, Civil Society Representative. To all of you, thank you so very much for joining us today. And it really is genuinely and truly an honor to have each of you joining me today. And I’m looking forward to having this conversation. I’m going to take a seat next to you. But not too close. I get a little bit nervous because I don’t know what they might do to me if I ask them a question they don’t like. The reason we really gathered here, and let’s talk for a moment about the digital compact. It’s about principles, it’s about commitments, it’s about inclusivity, not just in terms of who negotiated it, but in terms of who it’s supposed to apply to. The intention is to, and I’m going to read this so I don’t get it wrong, to support the achievement of an inclusive, open, sustainable, fair, safe, and secure digital future for all. Ambitious. In addition, there’s something that’s really important that the GDC does, and that is it recognizes the pervasive and existing digital divides, and we know, we can see what the impact of those divides are and have been in the past. And really, it responds to the need for more inclusive digital governance. So we all have an understanding of what it should do, what it shouldn’t do, and how do we deal. So the ambition is there. It’s in paper, in various iterations. How do we make that happen? Perhaps Mr. Minister, if I can begin with you. Developing countries in particular, Reedy mentioned it earlier, and I think the lived reality of most people who exist in the developing world will be able to tell you about some of the challenges that we face whenever we experience digital divides. I mean, the CEO of Google just gave us a perfect example in his remarks. 
How do you think the GDC will help in particular developing countries, but perhaps you can use your country as an example, to bridge that divide? It’s on. Let’s try again.
Felix Mutati: Thank you. Many thanks for having me. I’ll just tell you a short story in terms of bridging the digital divide, in terms of inclusivity, from a Zambian perspective. A young man called James in the rural part of Zambia, a farmer, farming using traditional methods because he was not included or connected, had a chance to secure a mobile phone, had a chance to get connected to internet. Using those tools, he transformed his farming methods because he had access to weather forecast, he had access to market prices, he had access to information. And our interpretation is that the Global Digital Compact is about a shared vision. Transforming life for that little boy in the rural part of Zambia. That is our simple understanding and that is why we’re here, changing lives.
Tumi Makgabo: Now there’s a particular balance that is always required because we see that sometimes when we change and transform lives, sometimes it can happen really rapidly, sometimes it takes a little bit longer. If I can come to the UAE as an example, what is the thinking about bridging and bringing together that process of rapid adoption of AI, along with making sure that it is a safe environment for all who are going to be involved in digital technology and how it changes their lives?
Omar Al Olama: Thank you very much. I’m very happy to be here and to be very honest, I think the UAE is a good example of what happens when you create a trajectory for digital development that is on steroids, as they say. We’ve experienced it. So we went from not having paved roads, not having university graduates, being a country that was maybe part of the underdeveloped world 50 years ago to being today one of the most advanced countries in the world. That advancement created a lot of opportunities, it made the UAE be able to explore frontiers like artificial intelligence, and I think it also shows that there is no excuse for us not to be able to do that for more countries. We need to move from, and I don’t mean to plagiarize President Obama here, but from yes we can to yes we will. We need to really definitely try to actually implement that vision that we have on digital development and take forward the recommendations that the panel is making towards the Global South.
Tumi Makgabo: We need to also have the conversation about inclusivity. The reason we can have a conversation around developed versus developing countries is because growth has not been equitable. There are some parts of the world that have grown and done well economically, et cetera, and those that have clearly been left behind. If we can then talk for a moment, Secretary General, about how do we make sure that this compact is not just a document that is full of ambition, but it actually means that we see a manifestation of that inclusivity of growth when it comes to the digital era.
Rebeca Grynspan: Thank you. Thank you very much, and thank you for that question. First of all, let me say that we all know that we are lagging in the SDGs, yes? That only 70% of the SDGs are on track to be accomplished by 2030. So we have to start by thinking that we cannot have linear solutions because we need non-linear ones, pathways, to really get to the 2030 objectives. And I think that the digital revolution in AI can provide those non-linear paths towards the SDGs. So it’s a great opportunity because obviously, you know, the digital technologies are transforming life in an exponential way. So that can be really a very important tool. But my second point, going to you, is that when you are in a society where things are changing so rapidly, we have to remember always that not everything changes at the same speed. So it creates tensions. It creates asymmetries. It creates imbalances that we need to deal with. So it’s not enough access. You need really a deliberate digital development strategy because you have to connect. You have to bring the stakeholders. But you have to do a lot of things. You have to create an ecosystem that is, you know, really will bring everybody to the speed, to the level that is necessary. But you start from a very uneven playing field, yes? Not everybody is today in the same line to start this career. So you have to make an extra effort. And part of this extra effort is, first of all, for people, it’s not only access, but it’s affordability and quality of their access to the digital technologies. But it’s also not to relegate the developing countries to be users. We want to be producers. We want to bring the digital revolution, not only for our consumption, but we want to really use it for our diversification, for going up the ladder in terms of the value chains in the world, to add more value, to create better employment, and to bring digital into the productive structure will really require an extra effort from the international community and also from governments to make it, as I said, a deliberate development strategy.
Tumi Makgabo: One thing that also is going to require deliberate efforts is the question of human rights. Margrethe, if I can come to you on that. How do you make sure that there is a respect and a consideration for human rights while at the same time one wants to promote fair competition and keep in mind that we’re coming from such different points of departure, there’s a lot of balancing. How is the EU thinking about that?
Margrethe Vestager: First and foremost, I think the Global Digital Compact is an amazing achievement. It is as if we have a new chance. We have it. There are so many things where we have not succeeded, and I think the Digital Compact shows that we can agree that we’re really going to engage in correcting the mistakes and show much increased effort because if we live up to what is in the Compact, well, then a lot of the things that are haunting us will be a thing of the past, and for us, we want to partner with as many countries as possible, and the fact that human rights are completely core of the Global Digital Compact makes our conversation shorter, focused because we know that we agree on the fundamentals when we digitalize. So, partnerships will be so much easier, and these are really important for us. And I think it also illustrates that there is a commitment to create trust in technology. Because that doesn’t come automatic. Technology can be terribly misused, both for crime and fraud, but also for surveillance and undermining democracy. And here we can focus on the use of technology. I think the example, the story was excellent. It’s such a good illustration of the agency that people get. Because I think that is the underlining ambition here. That all the things that we were not successful with, with trust, with focusing on the use cases and giving people agency, enabling them, then this digital compact will be, you know, a road to a future that is very different from all the bad scenarios that we actually do have ahead of us.
Tumi Makgabo: There is no question, I think, for anybody that this presents a particular opportunity. One through the GDC, but generally through technology and how we can better harness that to achieve all of these things that we wanted to do. The world of work, however, we all recognize is going to look quite different in five years’ time, let alone a decade or two down the road. In Singapore’s case, how are you ensuring that there is better preparedness for a more digitized work in the context of work? And how can we learn from what Singapore has done so that we’re not always having to go back to the beginning in order to ensure we’re better prepared for a world of work that looks so different?
Josephine Teo: Well, thank you very much for this opportunity to participate in this great conversation. My comments will build on what Margaret and the Secretary-General have said. And that is to recognize the fact that unevenness exists even for the workforce. And what it means… is that there will be some parts of the workforce that are closer to the technology frontier because their employers are already using technologies in innovative ways in their companies. And so that creates an environment for them to pick up the right skills to become even more proficient in the jobs and the requirements of the future. But there will be many other members of the workforce who, for example, may be employed by small and medium enterprises who tend to lag in terms of the technology adoption. Then there are also people who are marginalised. Sometimes it is because they have special needs. It could be because they have a disability. We have to be very creative in thinking about how all of the past barriers that put impediments in the path of these individuals to succeed. The way in which we are doing this is to enable every single one of the workers to acquire the skills to be relevant for the future. Part of it involves working with employers because they create the momentum and they create the strongest incentives. But we also need active labour market policies in the form of support for individual learning, putting resources in the hands of individual workers so that they don’t only depend on their employers to provide the training opportunities. Then in order to support this ecosystem, you need also to build up the training infrastructure so that there is a good ecosystem of training providers who not only can deliver training competently, but whose content meet the needs of the market. All of these have to come together and the more we can share with each other how these can be achieved in each of our contexts, I think the better we are going to be. So we are very grateful to the UN for putting together the GDC to create the opportunities for us to do exactly that.
Tumi Makgabo: Thank you very much, Minister Teo. Minister Al-Olama, I believe that we have to bid you farewell, so thank you very much for joining us. Do you want to, is there one more comment and thought that you want to leave us with before you go?
Omar Al Olama: I think the Global Digital Compact is a great starting point for the action to follow. The UAE, we believe that there’s a lot that needs to be done but we all need to work together on it. This technology is very pervasive, it crosses borders, and there needs to be cooperation. So we’re definitely part of this roadmap that the UN is putting forward and we’re definitely going to be a big supporter for it.
Tumi Makgabo: That’s terrific to hear. Thank you for joining us and we look forward to seeing you do that. If you can please just give him a thank you. Thank you. And no, I wasn’t waiting for him to leave, I just have to get closer to the panellists, so don’t think I’m being, I promise I’m not being weird. Nnenna, if I can come to you, from a civil society perspective. You know, the reality is that there sometimes can be a disconnect between what happens on the ground and what happens higher up between policy makers and those of us who have really good intentions. It doesn’t always manifest in the way that we hope. What does the implementation question and what does the monitoring question of the GDC look like in a civil society context from your point of view?
Nnenna Nwakanama: Sankofa, I’ll come back to that word. Fabrizio Hochschild is from Chile. Ninten Desai is from India. Lynn Sentamu is Canadian. Marcus Comer. is from Switzerland, Yanis Karklins from Lithuania, Dee Williams in St. Lucia, Adama Samaseko in Mali, and the journalist Brenda Zulu from Zambia. I’ve met these people over my 25 years of engagement in digital cooperation within the UN. These are people from all walks of life. And my first statement here today is sankofa, looking back from where we’re coming from so we know where we’re going to. The GDC is nothing revolutionary. The success is in the process, and that process is multi-stakeholder. I do believe that as we keep shaking hands between multilateralism and multi-stakeholderism, we can do much. Not just here in New York. I don’t need a visa to be able to implement GDC. I want to be at home and have the same principle of multi-stakeholderism play out in everything at national level.
Tumi Makgabo: I think we understand why you’ve been in this process for so long. We kind of get it. Thank you for that. Minister Mutati, if I can then come back to you. We can look at the broader picture, and I think the GDC is no doubt inspiring. Believe it or not, I did actually read it, and I think it is really inspiring, and I think it really is ambitious, and I think it genuinely is asking us to address some of the most fundamental and pressing issues that help us address the human rights challenges we face on the planet. How, though, do we begin to implement that? From a Zambia perspective, what does the translation of that, from paper to reality, actually look like and involve?
Felix Mutati: Thank you very much. One of the pillars of the Global Digital Compact is strategic partnerships. And strategic partnership from a Zambia perspective, I’ll give you two examples. This year, Zambia has got challenges around climate change. Our economy, in terms of GDP, is going down. And we have difficulties and other problems. But earlier on, we had a strategic partnership to look at how we can collaborate among ourselves as Africans. And from one of the countries in Africa, we went and lifted a tax collection innovation, which we started using this year. Now, the consequence of that partnership has been that, whereas the economy is going down, the tax revenue is going up. And for us, we think that is what is called strategic partnership, which is part of the Global Compact. It gives actual results. And this is actually happening. Second example, because of limited resources, to try to extend connectivity of our people, government on one side, working with the private sector and other partners, providing the necessary incentives, they were able to plant significant infrastructure, digital infrastructure, which has enabled Internet penetration to move from the 50s to almost 70 percent. That is what we call strategic partnership. So Zambia, in a sense, was already implementing the Global Digital Compact and the key pillar of partnership, and the results are there for us to see. Thank you.
Tumi Makgabo: That’s a really interesting example that you use, because it sounds to me like a lot of this has to do with ensuring that the solutions are specific to what your needs are, no doubt. But when we look more broadly, the challenge for a lot of developing countries is that they have to prioritize where they allocate those resources. So it’s easier for us to sit and say, well, you know, we have to think about ESG, or we have to think about greening, or we have to think about this safety and that health. But the resources that are required to do all of those things are quite limited. What do you think needs to happen to allow developing countries to better strike that balance, and how potentially can the GDC be supportive of that process? We know that within the document itself, it’s quite specific about a need for that to happen. But again, the reality versus what’s on paper.
Rebeca Grynspan: Yeah, it’s such a good question, because, you know, precisely today we were talking about the necessary changes in international financial architecture, really to support development. We were talking about restructuring the debt, because debt doesn’t allow many of these countries to really have the strategies and the investments that need to be done. I gave today the number that 3.3 billion people live in countries that are paying more in debt service than on health or education. So if you have that problem, how are you going really to have the investments that you need for making this happen? And the other part of this, I’m sorry to say, obviously, is the responsibility to think about the long-term. I always say we usually forget that the short and the long-term start at the same time. There is no long-term that is a succession of short-termism, yes? You don’t get there by short-term thinking. You need long-term thinking. But many of the systems don’t allow, don’t have the structures, don’t have the institutions like, for example, Singapore has, to really have this long-term view for a policy to stay and to persevere for the objectives. So let me just end saying, you need national responsibility, and the minister has talked about that. You need a government that really thinks about this, that does the right thing, that invests in education, that invests in the people that Nnenna was talking about, that brings society in an inclusive way with a voice to really harness development, but you need the international community. And that’s why the Global Digital Compact is so important, as we have said. Because you need a framework. And the other thing, and I’m sorry to say this because we are talking about optimism, but this is a very concentrated market, yes? We need to spread the opportunities because really concentration is very high. So you need international standards and international norms to really make these technologies to stay within the good and not to go to the bad, like Margrethe was saying.
Tumi Makgabo: So it’s interesting that you’re promoting the global view, which is crucial. We’ve heard from the minister the national view, but there’s that space in between, which is the regional question. Now we’ve seen what the EU has been doing. We understand the EU’s ambition generally to be a leader in many spaces, and this is not unique in that question. What can the world, or what should we be learning about broader cooperation and implementation of such policies when we look at what the EU is trying to do within its space of influence from a policy perspective? Because one size doesn’t fit all, so there needs to be some maneuverability in that regard, but there also needs to be an overview that allows everybody to understand what the rules of engagement are.
Margrethe Vestager: I think that is very well put. And the thing is that there is an asymmetry here, because the individual human being can make the most of the possibilities, but the individual cannot do away with the harm that technology can bring. That is not possible. So there is a societal, regional, global answer here to address things that are systemic in a systemic manner. And this is what we are trying to do. So we have passed legislation, the Digital Markets Act, to keep the market open so that people have choice, and so that the businesses who provide choice, that they are interesting to investors. Because, if you depend on a gatekeeper to get to the market, why invest in you? We have the Digital Services Act making sure that digital services are safe to use. That they would not cause you mental health problems or undermine democracy or the integrity of our elections. And that what is legitimately decided in our democracy is also treated as such when online. We have privacy legislation and our AI Act is coming into force. All of that to create a systemic response to the things that people cannot influence themselves individually. And when you have a systemic response, and we enforce in full, because otherwise it’s worth nothing. Enforcement is everything. When we do that, then each and every one of us, alone and together, can grasp the opportunities. And that’s the important thing here, because otherwise nothing will happen. So I think one should be really careful to try to decentralize, to say, you go, you go figure out. No, no. We need that systemic response. We think that legislation is needed, because we see the harm that can be done. And I think the Global Digital Compact is essential, especially when it comes to AI. Because AI is not just any new digital algorithm. It is so much more powerful when it comes to human agency. And that is why the use cases, the trust that we as societies will be responsible, is absolutely key for all these wonderful things that we’re talking about.
Tumi Makgabo: That brings me nicely. Okay, you want to… They keep wanting to clap for you and I keep interrupting them. So I think every now and again, I must remember to give you a chance to clap properly. That brings me nicely to the question of public-private partnerships. So, when we are looking at this process, everybody has to play their part. We need to make sure that the rules of engagement not only exist, but that they are followed and that they are implemented, and that there is consequence for transgression, right? Because it doesn’t help, and we know about, broadly speaking, the challenges of international law when it comes to the implementation and enforcement of consequence. What role, however, do you see, maybe you can give us an example in Singapore, where this public-private partnership can better foster the implementation and the oversight of what this GDC process may look like?
Josephine Teo: Well, since Margaret was talking about AI, that could be where the example arises. I think being a general-purpose technology, we all want to benefit from its transformative potential. And yet, at the level of public services, very often the expertise does not yet exist. And that’s where I think the private sector can be brought into the picture and encouraged to enable policymakers, as well as individuals, teams, organisations that make the rules to understand how this technology is implemented. And that’s exactly how we have done it in Singapore. We encouraged and we invited the private sector to contribute to the development of use cases, as well as our understanding of the guardrails that need to be put in place. But I would go one step further. I would say that the private sector can do a lot more in terms of helping to build capacity. And the capacity is so important because, particularly from the point of view of small states, on the one hand we see the opportunities, on the other hand we are told of the risks. The question is, will we… we’d be left behind as small states. Now, in this process of figuring out what to do, I think we were really appreciative that at the UN level, there was an advisory board at the high level that was constituted in a very inclusive way. And this has given us the motivation to contribute to this process by asking our own chief AI officer to be involved, and then subsequently inviting the whole high-level advisory board to meet in Singapore so that they can also engage with the forum of small states that was meeting there. Now, the result of a process like this is that we now have the ability to say, adopting the principles articulated in the GDC, how to help ourselves as nations, but equally importantly, how we can help each other. And in that regard, I’m very pleased to note that this process created an opportunity for another country that we admire greatly, which is Rwanda, to say, how about the both of us come together to create an AI playbook for small states? So that is something that we have done. And I hope that this will help all of us.
Tumi Makgabo: I just love my panel because everything they say, everybody wants to clap for them.
Margrethe Vestager: Can I add something? Because I would encourage everybody to look at the AI apprentice model that is implemented in Singapore, because that allows businesses to get to use AI while people in all walks of life can learn about how to do that. And you get experts who are embedded in the local community. So this idea of AI apprenticeships, I think the Singaporean model is really, really inspiring.
Tumi Makgabo: Thank you very much.
Josephine Teo: We’re happy to share more.
Tumi Makgabo: They’re happy to share. So everybody come, let’s share. Okay, Nnenna, if I can come to you because believe it or not, we’ve got like four minutes left. What measures do you think specifically we need to be mindful of? And I’m going to limit you in the sense that I’m going to ask you for two of the most important measures we need to make sure are in place to protect human rights as we embark on this journey.
Nnenna Nwakanama: Two measures: capacity to implement. It is okay to come to New York, it is okay to read European papers and all of that, but America and Europe do not make the world. I’m African. I’m Nigerian. I live in Cote d’Ivoire. I’m part of this world and I want that to be heard here. So, capacity to implement across the whole world, whether it be government, because I have spoken about multi-stakeholder, but multi-stakeholder capacity is needed: financial, human and technological. That is one. We need to balance that. The other one is connecting people. I see people talking about AI. I have lived in the days of great technology, emerging technology, and all of the big grammar technology, but please, can we get people connected too? And please, can we not disconnect the people who are already connected? Because some of you are here, and then you go home and you disrupt internet connectivity. We have to talk about shutdowns. In the GDC itself, that part has been gnawed at; I don’t know what it’s going to be like tomorrow morning. Anyway, let me now, excellencies, ladies and gentlemen, friends here and friends who are watching me online, boys and girls, cats and dogs, emojis and avatars, I myself, on behalf of my own self, I would like to endorse the GDC.
Tumi Makgabo: because I want my time back from all this clapping. Like really, I’ve lost like loads of time from the applause. Okay, we’ve literally got two and a half minutes, so I’m gonna do a rapid fire round. I’m going to ask you for two specific things that when we leave this stage and we leave this room, as individuals, we need to consider implementing. We’re not talking broad policy strokes here, we’re talking about things that you think we can do when we leave. Nnenna, you’ve given us a clue, but can you give us two different ones, and I’m gonna start with you and work my way across. We’ve got two minutes.
Nnenna Nwakanama: Connect the schools. Connect the young people. Connect my children. Thank you.
Tumi Makgabo: Okay, okay, okay, thank you. Thank you. Minister Teo.
Josephine Teo: We want to move beyond learning about digital to thriving with digital. And to do that, we can move alone, and we can go very fast that way, or we can go together, and I believe we will go even further that way.
Tumi Makgabo: Thank you. Even further. Secretary General, you.
Rebeca Grynspan: Embrace not only the Global Digital Compact, talk to your governments about implementing it, about supporting it, but embrace the Pact for the Future, because there are many things that we have to do for this to be possible. And in the Pact for the Future, there is a lot that can help people to get connected.
Tumi Makgabo: Vice-President Vestager.
Margrethe Vestager: Obviously, first things first, connectivity is everything. If you’re not connected, well, what then? But as we connect, please make sure that we do not sacrifice our children. Their independence, their agency, that they do not get dependent, that they do not get sucked into social media that will not serve them well. We have a huge challenge in making sure that our children are not only safe, but developed, and that they can use digital for their own good and for the good of their community.
Tumi Makgabo: Minister?
Felix Mutati: Thank you. One of the biggest challenges is skills and literacy, particularly in the rural parts of our country, around things we take for granted. Let us handhold our people, and let us show them how to press the numbers on the mobile phone. Thank you.
Tumi Makgabo: I don’t know if you can tell, but I thoroughly enjoyed that conversation. And it is because we had such a wonderful panel of speakers with us this afternoon. Can you please give them the appropriate round of applause? I can’t hear it. Thank you so very much. Thank you. And thank you. Thank you very much.
Redi Thlabi: OK, I see your panel doesn’t want to leave the stage to me. OK. Thank you. Thank you so very much to Tumi Makgabo for expertly leading that important conversation. We’re going to watch a very short video speaking to the themes of today, about the futures that are possible for us and the kind of decisions we need to make. Let’s just watch this short video, and then I’ll introduce you to the next panel.
Official Video: One humanity, two futures. In one, we embrace AI’s potential for a world of inclusion and equity. In another, AI tools become the catalyst for division and exclusion. The choice between these paths does not lie in circuits, but in human hands. In October 2023, amid heated debates on artificial intelligence and its potential, there was excitement about the future, but also anxiety over its risks and uncertainties. The UN Secretary-General gathered 39 top AI experts to confront this challenge. This uniquely diverse group consulted intensively around the world, engaged with thousands of experts, and aligned on guiding principles to propose concrete actions for governing AI for humanity: by building common scientific understanding of AI, its opportunities and its risks; by fostering common ground for effective AI policies and standards anchored in human rights; by sharing common benefits through building capacity, mobilizing resources and tackling data dilemmas to close AI divides; and, to support this global action, an AI office at the United Nations, for an equitable and inclusive future with AI. Let’s build this future together.
Redi Thlabi: Thank you very much. And I think the theme of that video links so well with the comments that came from the first panel. We all acknowledge we come from different worlds, but we are one humanity. So how do we create these digital tools, AI for humanity, make it serve humanity, make it accessible for all of humanity? I’m really looking forward to this next panel discussion, which speaks exactly to that, AI for Humanity. And to moderate this panel discussion is Ian Bremmer, president of Eurasia Group. Ian?
Ian Bremmer: Thank you so much, and also thanks to Tumi, who just crushed it for the last 45 minutes, absolutely, right? So now you’re stuck with me, and obviously I’m honored to be here at the Summit of the Future. We’re going to talk about artificial intelligence. I’m honored to be one of the 39 members of the High-Level Advisory Body on AI, and you’re going to meet a number of my peers on the panel today. It was back in 2017 that the Secretary-General, António Guterres, I remember, first told me that he thought that his two most important legacies in global governance would be on combating climate change and responding to the positive implications of disruptive technologies. You have seen the UN engage and lead the work on climate over the past many years, but today is a day we get to talk about, and even celebrate a little, some efforts in global governance on artificial intelligence. This past Thursday, I think you’ve seen it, we released our final report, Governing AI for Humanity. It’s right here. It’s the first truly global approach to governance of artificial intelligence, and we’re going to talk today about some of the recommendations, why governance including nations from the Global South is so important, and some practical reasons why this roadmap is needed to ensure progress and greater equity, given the challenges that we face in our digital and physical future. So, with that, let me please introduce our distinguished panelists: experts and leaders from the many sectors required for a multi-stakeholder approach, five of us together on the UN High-Level Advisory Body, and two interlopers who are here anyway. As I mentioned, first of all, our co-chairs. We have Carme Artigas, who is co-chair of the body, along with James Manyika, senior vice president at Google and Alphabet. We’ve got Vilas Dhar, also an HLAB member; he’s president of the Patrick J. McGovern Foundation. Dr. Wang Jian is chief technology officer at Alibaba. Volker Turk, the UN High Commissioner for Human Rights. And Alondra Nelson, also an HLAB member, is a professor at the Institute for Advanced Study. I welcome all of you. Please. So, let’s get right to it. Carme, the first question I want to ask, and I’m going to start with our two co-chairs, shockingly, bracketing this whole thing, is why the United Nations, right? There have been a lot of efforts at governance of AI. There’s been a lot of money going into AI. The UN doesn’t have a lot of money, doesn’t have a lot of power, right? But here we are. So, why? I mean, obviously, part of it is because it makes us sit uncomfortably close, and that facilitates cooperation. But leaving that aside, why was it critical for the United Nations to take this on?
Carme Artigas: Yes, so this was the first question we had to answer ourselves in the body. We were independent people, and we came to the conclusion that the UN is uniquely positioned for this effort, because it’s the only global organization that has the mandate, the reach, and the legitimacy to seat all nations and all stakeholders at the table. And it has the historical, I would say, success of having done it in the past, I mean, governing international topics such as climate change or arms control. And because AI is such a pervasive and horizontal technology, and it absolutely crosses borders, there is no single nation or region that can solve by itself the potential harms: biases, discrimination, and lack of inclusiveness. And of course there are other frameworks that are very, very valuable, but they are limited. They usually leave behind many nations, especially in the Global South. So we do not pretend that the UN is the right place to regulate AI at a global level. We think it’s the right place to encourage collaboration, to foster inclusiveness, and to ensure that AI is developed keeping human rights in mind.
Ian Bremmer: Now, you’re a European, and the Europeans are known for having governance, even multi-stakeholder governance, as a superpower. I mean, Lord knows it’s not building AI companies, right? So given that, as a former minister in this field, how did you think about what the UN can do and what the EU should really be doing?
Carme Artigas: I would say that people sometimes mix up ethics, regulation, and governance. These are three different things. Ethics is how we should all behave well: companies, governments. Governance is how we put in place mechanisms, instruments, that ensure that everybody is behaving ethically. And regulation is one of these mechanisms, and we have done it in Europe, the first international regulation, and nobody can argue against my view that regulation is not against innovation. That’s another topic, but I am open to discussing it with anyone. I think regulation builds trust, because it orders a market and gives trust and confidence to the market, the consumers, and the citizens. But regulation is not the only way to govern. We can govern through transparency, through oversight, through involving everybody. So governance is beyond regulation itself; regulation is one mechanism. We should also find the market incentives so that companies and governments behave ethically.
Ian Bremmer: Just a quick one, because I’m responding to that. Did you say, I mean, when the group first came together, you know, 39 members from all these different countries, different walks of life, that actually coming to agreement on common principles seemed to be one of the easiest things for our group to do? That was quick. Am I right about that?
Carme Artigas: Yeah, of course.
Ian Bremmer: Anyone else want to take that on? James?
James Manyika: No, you’re fundamentally right. I mean, one of the things that was extraordinary when we began our work was how quickly we got to agree on things like, this must be based on fundamental human rights. We all agreed. This must be based on international law. We all agreed. This must benefit everybody. We all agreed. I think the hard work was, how do we all come together to think through how we actually do and achieve those things? But I think getting to the principles was relatively quite straightforward. I’m looking at Alondra here, who was a big, you know, force in getting us to many of the right places we got to, especially on issues around fundamental human rights based on the extraordinary work that she had been doing for many, many years.
Ian Bremmer: Alondra, do you want to jump in?
Alondra Nelson: Yeah, I would just say, you know, to your question of why: the UN provides us with a quite incredible foundation. I mean, the UN Charter, our international accords around human rights, are quite powerful cornerstones for thinking about this. And so we had a place to go. And I think, you know, the challenge that we face with technologies, particularly powerful and fast-moving ones like AI, is that things are moving around, and where do we anchor ourselves? And I think the why of the UN is in part that the world’s countries have already agreed upon these fundamental kind of true-north values. The challenge becomes: what does that mean in a digital world? What does that mean in an AI world in which, you know, society is being kind of transformed and reconfigured? But I think those fundamental things are true, and that’s been a real core of our work on the committee.
Ian Bremmer: And I want our audience to appreciate this. I mean, getting the Singaporeans to champion rule of law is not exactly shocking, but I mean, we’re talking about the Americans, the Russians, the Chinese, the Europeans, the global South. I mean, all participants here, this was not the hard challenge in this group. Vilas?
Vilas Dhar: I think that’s right. I mean, Ian, I want to start from a fundamental observation. We too often equate governance with control. And it’s part of a conversation that’s much bigger. I think we have followed a narrative that technology companies innovate and governments regulate and somehow in that the rest of us go along. But that’s not the point of governance, right? Governance is to set a shared vision for humanity, is to think about all of the resources we can bring to bear to make shared decisions that put agency with communities, that allow voices to participate and to come forward. When we think about the work of the body, I think this underpins the idea. What we got from the Secretary General was a mandate to think beyond, beyond the forms and functions of the moment, to think about a world where a digital future actually works for all of us. It starts from the fundamental pieces that James and Alondra spoke to. But it requires us to also envision new functions and new forms for a future that’s grounded in the idea of governance for, by, and of the people. And I think AI gives us such an amazing aperture to go back to really fundamental questions about what participatory mechanics should look like.
Ian Bremmer: I’m glad you brought that up, because, you know, so much of the conversation on AI out there is about risks, existential risks, disinformation, all of that. This group was not in any way unconcerned with those risks, but it was fundamentally thinking about how to use AI for humanity. I mean, climate change in a sense is a much more difficult conversation because there’s so much zero-sumness. There’s so much, you know, like reparations need to be paid because you’ve done this to us. This has been an overwhelmingly positive-sum, non-zero-sum conversation. James?
James Manyika: Yeah, it has been, but it has also highlighted something else, including beyond the UN itself, which is how important it is for this to be a multi-stakeholder endeavor. That was fundamentally important. Let me tell you why I think that was fundamentally important. If you think about what’s at the heart of this technology, this conversation, and what we hope for it, you point to three things, I think. One is the extraordinary opportunities, the possibility to address so many of our challenges with the SDGs, climate change; there’s so much that we could potentially do that’s transformational, number one. Two, there are complexities and risks. There are so many of them. We have to think about all the kinds of issues that we know could happen and go wrong with this technology. And then third, the idea that this has to benefit and include everybody. If you think about each of those three things, there’s no other way to get that done other than through a multi-stakeholder effort. The opportunities: companies are pursuing those, researchers are pursuing those, NGOs are pursuing those, governments are pursuing those. The risks and complexities: same thing. Governments are thinking about those, agencies are thinking about those, researchers are, civil society is. Then get to the inclusion and the opportunities: how do you go after opportunities, especially in countries and places and communities where those are not commercial opportunities? You have to include everybody. So as you think about each of the three things that are at the heart of this, it has to be a multi-stakeholder effort. And that’s why I’ll say one final thing: it’s why I was so thrilled that our body actually represented that multi-stakeholder effort. We had researchers, we had academics, we had activists, we had civil society, we had everybody involved. We debated a lot, argued a lot, and we worked pretty well together, I think.
Ian Bremmer: And I would say that it wasn’t obvious during the conversations who necessarily was wearing each of those hats, because the body was collective, pretty global. But I’m going to ask you, because you do wear one of those hats in real life: when we talk about governance, and Vilas just talked about the way we should think about governance, what are the responsibilities that the core private sector corporations, and even some of the state-owned enterprises they are linked to, should have when we think about governance of AI?
James Manyika: Well, we have several. First of all, keep in mind that much of the research, the fundamental research that has advanced this field, is led in the private sector; a lot of the research labs are in the private sector. So that places an incredible responsibility on us. One is to make sure we’re developing this technology responsibly, and that we’re thinking about all the beneficial uses of it, not just the commercial uses. We have to think about all of that. We also have a responsibility to engage with governments and others who are not only going to govern these technologies, but also think about how they are deployed and used. Because keep in mind that three things happen to this technology: it’s developed, it’s deployed, and it’s used. That whole chain involves lots of other actors, so we have a responsibility as a private sector to work with each and every one of them, hear their concerns, and work together to think about how we deploy and use this technology responsibly. We have an enormous responsibility. I’ll say one last thing: we have a responsibility to be transparent, and to help build trust. If this technology is going to have the impact that we think it’s going to have, the public has to trust it; the public has to feel that we, and everybody else who is developing, deploying, and using it, are held accountable. So we have a profound responsibility.
Ian Bremmer: And an interesting point there: here is a technology that, frankly, a lot of people in the Global South are more excited about and trust more than a lot of people in the advanced world do. That is also an opportunity, right, a fundamental opportunity. But Alondra, you wanted to come in, and then I’m turning to Dr. Wang.
Alondra Nelson: I just think one of the things that we were grappling with is that it’s a fundamentally different moment for multilateralism, right, because of exactly what James said. If we think about multilateral action around nuclear, those technologies are often owned by states or utilities, so you have a whole different ecosystem. These are technologies that are coming almost exclusively out of the private sector, or at least a lot of the R&D is, and then, as James suggested, you have this sort of series of stakeholders along their lifecycle. So part of what we were grappling with was not just, you know, how do you govern a dynamic, iterative technology, but how do you do it in a way that, at the same time, is trying to reimagine what multilateralism looks like when you have to have a multi-stakeholder system, in a way that you did not when we were trying to think about nuclear non-proliferation. It’s a completely different set of actors, with different sets of power and different kinds of asymmetry than we’ve had to deal with before.
Ian Bremmer: I mean, there are US-China arms control agreements on AI that will be required, but that’s not what we’re talking about right here. Now, Dr. Wang, you are a scientist, and indeed when you started out there weren’t that many people with PhDs in your field in your company. You’re also in the private sector. I’m wondering how you are navigating, how you think about those tensions, and how those tensions are changing as AI is moving so much faster and becoming so much more transformative, as we’re talking about what governance, multi-stakeholder governance, should look like.
Jian Wang: Yeah, I think there’s a different way to look at it. The first thing: at the UN level, actually, I feel pretty good, because of the good structure. We have the United Nations, we have UNESCO, we have the ITU; these are parts of the global organization. The ITU could play a very critical role in terms of technology development, and UNESCO deals with science, with education and culture. For any new challenge, particularly from new technology, you have to work with different parties and solve the problem from different perspectives. You really cannot solve the problem just by involving the government; you have to involve different levels. That’s one thing. But the scientists, I think, are very important. Get scientists, get individuals involved in solving this problem. So for me, governing is not just the responsibility of organizations and governments; it’s actually the responsibility of every person. Just like in the last couple of years, I have been working with scientists in the UK and scientists in the United States on a geoscience problem. And more interestingly, later this year we are bringing this new technology to Africa. So an individual can do a great deal to help solve this problem. So for me, just like the conversation today, technology is not just creating problems; technology is bringing people together, even though today it is a different way of bringing people together. Eventually, different people who love this technology will work together and solve the challenge. So I’m pretty confident that any problem created by humans can be solved by human beings.
Ian Bremmer: So this is the most inclusive, proactive conversation I’ve seen on big governance issues, frankly, in the UN in a long time. I’m gonna now shift to implementation, and to someone who’s been tasked with some of the most challenging problems in the world on that front. Volker, none of us envy your position. As you think about AI and how AI can be used, can be implemented by governments, by non-state actors, to allow impunity or to facilitate transformation and effective governance, where do you think it’s going right now, and what do you think needs to be implemented as a result of these recommendations?
Volker Turk: Well, first of all, congratulations that you got the report out. I think it’s a minor miracle that you have been able to do it, and really, congratulations to you. When you mentioned mandate, no, you mentioned legitimacy, reach and mandate, I would add normative framework, and you have mentioned it: it’s about human rights. We do not have to reinvent the wheel. We have an existing framework that is dynamic, that evolves, that also deals with future issues, and human rights is at the core of it. Because if you are not aware of the impact that anything that happens in this world has on freedoms, on fundamental freedoms or on individual rights, if that is not analysed, it’s going to be a problem. And the advantage is that it’s a universal framework. So it’s not about Global South, Global North, West versus someone else. It is universal, and that is still agreed at this point in time by everyone. We had a big event on the Universal Declaration of Human Rights last year; there was no detractor from that, no spoiler. So we have that framework. It’s intergenerational. It’s not just about now, it’s also about the past, because in some instances you have to deal with the grievances of the past, but it is primarily about the future, so it has this intergenerational dimension, and it brings us back to human agency and to human dignity, which, whenever anything happens in this world, including on the digital and AI front, you will have to take into account. And it is multi-stakeholder. A human rights framework is by nature multi-stakeholder. We could not do anything on the human rights front if it wasn’t nourished by social movements, by civil society, by the private sector, and by member states. So we actually have a model when we look at implementation: how we can bring this to bear on the norms that states themselves have accepted, that the private sector, through the business and human rights guiding principles, has accepted, and how we can actually go into the granular detail that is needed in order to analyze how we are going to work.
Ian Bremmer: James wants to come in, but a quick follow-up for you first, which is: people in this room know this, but people outside this room don’t necessarily appreciate that 194 countries around the world agree on a lot of things. They agree on fundamental human rights, even if they don’t implement them; they know what they are. They agree on the Sustainable Development Goals and where one would want humanity to go, even if right now most of them are not on track to being fulfilled. And hopefully they agree on a Global Digital Compact and on how one deploys artificial intelligence to help ensure that we can actually get some of this done. So when you think about that, if you had a crystal ball right now, do you believe that over the next two, three years AI is potentially on track to help actually implement, execute more of the things that we agree on but aren’t doing?
Volker Turk: Look, we are obviously at a very difficult geopolitical moment, no doubt about that. But we hopefully will have the Global Digital Compact and the Pact for the Future. It’s a good beginning; it’s not enough. It will require a lot of dedicated attention, it will require continued multi-stakeholder conversations, it will require a governance framework that becomes more and more effective. Of course we are divided, polarised; we are not at the best place when it comes to bringing coherence to things at the societal level. But this is precisely where whatever we can hang on to that works, including the report that you brought out, actually shows that it is possible, and we need to grab on to that and run with it.
Ian Bremmer: James?
James Manyika: Well, you know, as you know well, Ian, a couple of things were on our minds when we were doing the work. One is the need to move and act very quickly, for at least two reasons that were centred in our work. The SDGs: the world is behind, we’re all behind, if you recall, and we centred the need to contribute to accelerating the SDGs. The ITU has just done some phenomenal work that highlights that of the 169 targets in the SDGs, something like 134 could benefit and be accelerated using AI. We have to move. The second thing that was on our minds was the issue of capacity, and this is where especially the Global South comes to mind, because I grew up in the Global South. Unless we’re able to give people access to this technology, both to participate in it and to benefit from it, the risk of the digital divide becoming the AI divide is too huge. So we have to act, we have to act. That’s why one of our recommendations is around either the capacity fund or the capacity network: we have to bring together a multi-stakeholder group that moves quickly to bring capacity and access, especially to the Global South.
Ian Bremmer: I mean, with climate change we didn’t really have decades, but the reality was you kind of could kick the can for a while and just let other people pay for it, the kids. You don’t have that time on this issue, which is why I’m not surprised that everything happened in a year, because, I mean, you need light speed to make that work. Carme, you want to come in, and then Vilas.
Carme Artigas: Yes, exactly. I think these recommendations are only as good as our capacity to implement them as soon as possible. So, as you have mentioned, none of these recommendations is built in a vacuum. We’re building on existing frameworks that already work, like human rights, but also on the excellent work that UN agencies are already doing in their own domains. And they will keep on doing that, and probably they will have a much greater burden of work around all these topics on AI. But we need additional instruments, because there is still a global governance deficit, and because this is so horizontal, it requires so much coordination. This is why we did not recommend, as the first thing, an international agency. Because that takes a long time, it’s a big institution, and we will see if that comes.
Ian Bremmer: And the governments, they were not ready to approve that. If you’d announced it, it wouldn’t have happened.
Carme Artigas: I don’t know, but we are proposing things that are actionable, and that we believe can be ready to work in less than 18 months’ time. Because that’s what we need. And I think that governance, far from being an innovator itself, is a catalyst and an enabler. And I think that’s what we should be focused on.
Ian Bremmer: An agenda setter?
Carme Artigas: Of course. But I think this conversation, these conversations, were not part of the public conversation one year ago. And I think we are starting a conversation now that I hope is followed beyond the Global Digital Compact, and that the companies and the governments and all the institutions will support our recommendations.
Ian Bremmer: I mean, this is the sneaky thing about the UN, right? Which is that, you know, you actually put it together, you imagine it, you start actually having conversations that other people aren’t having, and they will, by default, become what people are talking about.
Vilas Dhar: Here’s the power in it, Ian. I think you’re exactly right. There is a way to talk about this that is the law of big numbers: that AI is the story of billions of dollars of investment, millions of lines of code, the foundation models that have the most parameters. And you can almost turn it into a math problem. There were a number of experts on the body with me who were computer scientists. I think we would probably all say: I hated doing math homework as a kid, I certainly don’t want to do it now. That’s not the solution. Instead, what I think about is that all of these things we are talking about aren’t really about putting all the ingredients together, putting them in a stew pot and getting an answer. Almost all of this comes down to the experience of people on the ground: my brothers and sisters, my cousins, my uncles, my aunts in countries across the planet. And what we put forward in the report is a mechanism to think about real intervention that intersects with people where they are. We don’t think about capacity building as finding a few critical enablers and saying let’s invest in compute, or let’s just make sure there are data sources. Instead, we think about a holistic network that says: let’s actually look with communities at what their needs are, and think about a mechanism by which we say there are massive resources across the system, there are those contextual pieces of a normative framework, there is that mandate and that integrity. But it doesn’t happen because any entity, the UN or otherwise, says we are now going to come in and build AI for the public good. It happens because we work with communities to ask: what do you need to build and want to build? The second recommendation in the report that’s relevant is this idea of a global fund: the idea that we actually need capital resources that sit apart from and outside of our political mechanisms, and that hold instead a moral responsibility to say we need to secure the resources necessary for communities to define their digital agency, and make sure they have the economic resources that let them use that money in the way they need to build what they want to build. Now, we haven’t defined the specific form of that fund, for a very specific reason: this is something that needs to happen through a participatory mechanism. Through the Global Digital Compact and the implementation that comes, we need to take rights, we need to take frameworks, we need to take capital, and turn it into something that actually advances progress.
Ian Bremmer: Alondra, as someone who does public policy for a living, what do you take out of this? If you were in charge of global implementation, how would you think, not about prioritizing, but how would you think about your agenda? What would you want to make sure that people are taking away from the next steps?
Alondra Nelson: Well, first I would go to process, because that’s what wonks do. And it would be, just to double-click on what Vilas said: part of this process was a lot of consultation with lots of people from civil society, with the impacted communities. So if we really want to steer and shape these good outcomes, we need to figure out how to do that in part by engaging communities. Any implementation, exactly to Vilas’s point, has to include communities that are impacted or are going to be impacted; they need to have a seat at this table, in this conversation, whether or not they have PhDs in computer science or can do math. That’s critically important. I think the other piece is that we don’t know enough. So I would also associate myself with Dr. Wang in that we don’t know the science. I mean, if we think back to the high watermark of the COVID-19 pandemic, there were lots of preprints and lots of papers, and I think in that context perhaps it was okay to say, you know, we’re going to figure out the science as we go, we’re going to build the plane while we’re flying it. We actually don’t know enough about these systems and tools and models. A lot of what we do know, a few people know; a lot of people don’t. So I think one of the outcomes of the report is really a commitment to building a kind of common understanding. And we’re seeing, across the international ecosystem, different ways of doing that. We proposed in the report creating an international panel for understanding AI, for the science of AI, that would complement work on AI safety and some of the other multilateral and regional things that are happening. But even these have to be done in a way that communicates that information not only to nation states, but sees the public as an audience for how these tools work, what they can do, what their limitations are, and how we can use that information to steer them to the good outcomes that I think many of us hope for and want, but which are not inevitable and are not inherent characteristics of the technology.
Ian Bremmer: And I’d like to believe that this panel right now is actually leading by example specifically on that. That’s what we’re trying to do on this stage right now, right? Volker, you wanted to go and then James.
Volker Turk: Just to add, because I think it’s a very important discussion: if you look at the future and what startups want to do these days, they will want to do something for the good, the common good, the public good, whatever you call it. But you need to fill that with content. That’s where the human rights side comes in, because you want to do something that is of benefit to humanity, and we often hear that from those who are involved in this. That’s important. But there is also the risk side, and we cannot avoid talking about the risks. And risks, we can also look at them a bit like traffic regulations: I mean, you’re going to hit another car if you don’t respect the traffic regulations. And it’s a little bit the same when it comes to innovation, to all kinds of creative work.
Ian Bremmer: I want to give James and Dr. Wang a chance to come in and then we’re going to turn to risks. And I’m going to go to you first, by the way, but go ahead James.
James Manyika: I want to just underscore something that Dr. Nelson just described, which is that there’s so much more research still to do in this field. I mean, in my day job I oversee the research teams that are researching and building these systems. The field is moving so quickly, the advances are coming so fast, there’s still a lot more that we need to learn. Some of it is surprising in being incredibly beneficial; we have all these landmark breakthroughs in science and other places. But some of it concerns risks that we’re still researching. So the research frontier is why one of the key pieces in our recommendations was this idea of a scientific panel that tries to keep up. But it’s got to be one that works very, very differently than, say, the IPCC does. It has to be real-time. The IPCC does what, a report every seven years? We can’t do that here. So that’s why the ongoing research, both to understand the benefits and the potential as well as the risks, is so fundamentally important. That’s why many of us are involved in a lot of these AI safety institutes and in research to really work on the frontier of the risks.
Ian Bremmer: Dr. Wang, you want to come in?
Jian Wang: Yeah, coming back to this research challenge, I think it’s something to bring up at this time. Just think about it: every year we have more than 5 million papers published, probably a number even bigger than 5 million; that’s a lot of papers. And just like climate change, it’s a very, very complex system, and it takes time for people to really understand it. And when it comes to AI, it’s even more complex than climate change. So I would say we really need something new, a framework to bring the whole science community together, and I want to emphasize that it should be within a UN framework; otherwise, no single science committee can solve this problem.
Ian Bremmer: And is it fair to say in this field that right now, especially when we look at the two countries that are leading the way in AI, U.S. and China, that the scientific community is actually getting further apart?
Jian Wang: Most of the time, I don’t look at this field based on countries. If you look at the people who really pioneered this area, they are from Europe, from Canada. So it is not just country by country; you have to look at how the science community actually works. For me, the reason people are thinking about the U.S. and China is just because they have good AI infrastructure that helps people do the research. So I think, for the UN, we have to make sure there is globally shared AI infrastructure so that everybody can contribute to solving the problem. This is actually what big tech companies should do as well: it’s not just for your company, it’s really about shared infrastructure, particularly technology infrastructure, I would say.
Ian Bremmer: For the rest of the people, yeah. Oh, okay. Who was first? No, Volker first. Only because I want to shift: we can have a very upbeat conversation about where we want to get, but as you said very eloquently, the geopolitical environment right now, the trajectory, is not towards more integration, more global cooperation. It’s actually towards more conflict, and the political and economic models that we thought we could kind of take for granted are themselves under siege. So, when you look at the AI initiatives that are now being put together against that geopolitical conflict, that context, where do you see the biggest challenges?
Volker Turk: Well, obviously, once the genie is out of the bottle, how do you control the genie, once all sorts of actors have that technology? And this is a phenomenon that is not limited to one part of the world. I mean, we actually get a lot of requests for advisory services from member states and startup companies all around the world who want to do the right thing. They are asking us: what type of risk models do we use? How do we regulate? How do we get a multi-stakeholder system in place? And it’s incredibly important that we are very fast in making sure that these advisory services can be provided. We have done it with the big tech companies. I mean, I brought you one of the documents that came out of this, which looks at a taxonomy of risks from a human rights perspective, and which wants to complement the existing risk frameworks and really say: you need to look at obligations when something impinges on individual freedoms and rights. And that work is incredibly important. It’s not about ethics anymore; it is about obligations that we have towards people.
Ian Bremmer: All right, please.
Carme Artigas: I just wanted to comment on all the discussions about risk. I don’t know if we all remember that when we were talking about machine learning and deep learning, the conversations were about fairness. All of a sudden, when generative AI came on the scene, we forgot about the conversation on fairness and shifted the focus to risks, most of them existential risks or risks from frontier AI models, and sometimes that prevents us from looking at the risks that already exist in the present, more on the side of fundamental rights. And it’s very interesting, and I recommend everybody look at the document: in an annex we have included a risk analysis, a risk survey, involving many countries in the world and different stakeholders, and it is fascinating to see the difference in the perception of risks between Global North and Global South, between men and women. And when we talk about risk without being informed, and that is why we need this scientific panel on the real facts, sometimes we tend to be dramatic or probably overreact, and we forget to talk about opportunities. And if we look at how risk is perceived in the Global South, it is perceived less; people are more concerned about the opportunities they might miss.
Ian Bremmer: But they’re being left out.
Carme Artigas: Absolutely. So let’s also talk about opportunities; let’s have the scientific panel inform us not only on the risks, with more transparency from the private companies, of course, but also on the great opportunities. And I can mention the huge acceleration we can expect in achieving the Sustainable Development Goals, and also how we can enable education, public health and universality. And I think that is the discussion we still need to have.
Ian Bremmer: So the principal global risk here is that the lack of resources, the lack of urgency, means the digital divide becomes an AI divide, and we end up splitting much farther apart, right? And humanity doesn’t look like humanity very much in that environment, right?
James Manyika: No, it doesn’t. I mean, I was going to interject very, very quickly. If you remember, in our work one of the fascinating observations for me was that when we were talking about the risks, we often talked about misapplication and misuse. Several members of our body said: please add missed uses. If you remember that word, it’s actually in there. Missed opportunities. And that was mostly some of the members in the Global South thinking about the missed opportunities, when this technology could actually transform their lives and circumstances. But all of that hinged on this ability, having the capacity to be able to participate. And we spent a lot of time thinking about the enabling infrastructure, the enablers of participation, ranging from very basic things that are in the digital compact, like broadband connectivity, even electricity, right, to access to models and compute. So I think this question of access and capacity is so fundamental to the inclusivity part of this conversation.
Ian Bremmer: So addressing the missed opportunity isn’t like, oh, we’re paying you because we’re doing something wrong. It’s because you’re actually creating market opportunities. I mean, it should be additive.
Alondra Nelson: Can I jump in here and just push back a little bit? I mean, we did hear quite a lot from people in the global majority that they didn’t want to be left out. But there were also concerns about climate and sustainability, about the mining of critical minerals, about the extraction of labor that goes into training data. So I want us to be very clear about what we’re hearing on the landscape of inequality when you think about the entire AI stack, and not just the deployed tool or system.
Ian Bremmer: It feels like a race, right? I mean, on the one hand you need these tools to address the challenges, but making the tools is also going to strain those same challenges. Yeah? Please.
Vilas Dhar: I mean, we assume that inertia is the problem, right? We assume that inertia is inevitability, that the ways we develop are the only ways we can do it. Today, in this building, we are showing an AI model in a collaboration with Refik Anadol, who I know is a friend of many of us: a model that’s trained on 100 million pieces of data, sourced ethically with community consent from across the planet, trained using only renewable power, that goes slow rather than fast, that generates incredible pieces of aesthetic beauty, and that can also be used to build a predictive climate model that lets us test interventions. AI doesn’t have to be an attack on our climate sustainability. What we have to change instead is the why behind our reasons for moving so fast, the commercial purposes that are often putting us in conflict with things like political rights, economic rights, climate issues, and more. There are other ways. Risks are not deterministic. We talk about risks so we can come up with better paths to better futures.
Ian Bremmer: Do you buy that? I mean, I’m asking… Thank you.
Alondra Nelson: I do. I do. I mean, we talk quite a lot about a few organizations, but there are other organizations that are creating different models or trying to think about the sustainability issue. And if we’re really serious about advancing the SDGs, we need to be really serious about the sustainability issues, and about a growing conversation that says we just need more energy, full stop, and whatever happens, so be it. So I think, particularly in a conversation at the UN, we’ve got to figure out a way to hold all those things together and put them in balance, even understanding that it’s going to be very hard to do. And that’s innovation, right? I mean, we have had other moments where we said, and you’ve mentioned seatbelts, seatbelts in cars, we put guardrails on the road that allow you to go where you want to go, and go a little bit faster. There are other historical moments in which we have had to make choices about how we want to advance things. And I think one of the challenges that we want to offer to the world, particularly to the scientific community, is: how do you build these models more sustainably? How do you build data centers that are cooler, that use less water? These are the scientific and engineering challenges of our time. And I think for many scientists they’re incredibly exciting to think about as puzzles, and how do we incentivize that?
Ian Bremmer: So we have only three minutes left, and I wanna use that for our two co-chairs, if you don’t mind. And I wanna ask both of you to take a step back. Is this a historic moment? In 10 years’ time, when we look back, is there a COP process for artificial intelligence? Are we thinking differently about global AI? Are we applying our models in ways that are more inclusive, more integrative, because of what is being done right now? Do you believe that? I wanna ask both of you what it means for you. James?
James Manyika: I think this is a very important moment. One of the things that gives me enormous confidence is the fact that we’re still so early in the development of this technology. The fact that we’re having these debates, these discussions, this early in the development of a technology that is still in its early stages gives me a lot of hope. The fact that we’re able to at least agree on fundamental principles that should guide the development of this technology gives me enormous hope. The fact that we can actually have a multi-stakeholder conversation about this and come together to think about how we do this. It goes back to what you said, Ian: the fact that we very quickly got to agree on basic principles, and that much of the debate and hard work had to do with how we do it, gives me hope. So I’m actually quite optimistic about all of this. But it is incumbent on everybody here, and all of us in the room, to make sure we progress this with humanity’s best interests at the center of what we do with this technology.
Ian Bremmer: Carme, you get a minute.
Carme Artigas: I’m absolutely confident that, in these changing times, we will have managed to develop AI for the good of humanity, with more inclusiveness, with more opportunity for all, relying not only on the goodwill of organizations and governments, but on governance instruments we have really created to make it happen, and that we will look back on today and say: we were proposing the right things, and, most important, the nations were brave enough to adopt them.
Ian Bremmer: So before we close, I want to say thank you to the panel, but I know everybody here would be a little remiss if we didn’t ask our friend Amandeep to stand up, our special envoy who made this process work. Tireless, tireless efforts, incredible balance, decency, moral guidance and integrity; he reflects everything that we are hoping for, and this panel would not be happening if he wasn’t there. And I just want to thank him, and thank everybody here. Thanks so much for joining us. We’re out of time and we’ll see you soon.
Redi Thlabi: Thank you so much, Ian, for that marvelous moderation of the panel, and to your panelists as well. So much love, respect and affection, I see, but we’ve got to move along to the next segment of the program. Thank you all so very much. Thank you. I’ll introduce our next guest once we’ve all settled down, to prepare for the next speaker as we wind down to the final segment of our convening this afternoon. I’d like us to settle down so we can give the president his moment and an opportunity to address us as we take the final steps of our event today. Thank you. Ladies and gentlemen, again, please help me in starting this joint closing. Help me welcome, a warm welcome, someone who has travelled a long way to be here; Southern Africa is a long way from here. His Excellency, I’m not going to call him up until we’ve all settled. I think it is appropriate, I think it is appropriate, to demonstrate our own commitment, our own respect, and a word that Ian used earlier, decency, in describing Amandeep Singh Gill, the UN Secretary-General’s Envoy on Technology. So I’d like us to afford the same warmth and decency to our next speaker. It is a pleasure to welcome on stage His Excellency, the President of Botswana, Mokgweetsi Masisi, for his closing comments.
Mokgweetsi Masisi: Mr. Secretary-General, Excellencies, I wish to express my profound gratitude to the Secretary-General of the United Nations, His Excellency António Guterres, for the invitation to participate in the Action Days session ahead of the Summit of the Future, scheduled for 22 to 23 September 2024, particularly the segment on the digital track. Commendations go to all the speakers and presenters on the digital future for all, for highlighting the significance of digital justice. Digital technology is pivotal in global transformation. Its impact can be either positive or negative, depending on how we harness the opportunities and mitigate the challenges. However, the scope for positive impact remains high if we can collectively work towards this end. It is critical to make a link between digital inclusion and digital cooperation, to bridge the divide between nation-states and within nation-states. We need to recognize that the digital divide emanates from disparities between developed and developing countries. Technology has the potential to advance and accelerate the closing of the gap in opportunities between genders and, consequently, can lead to the attainment of gender parity goals. More importantly, the digital space has the potential to advance the promotion of human rights, if unimpeded. Furthermore, issues of international peace and security leverage the use of digital technologies to inform the world of the threats and challenges that need to be addressed. Botswana, therefore, commits to be part of the brigade that flags the criticality of the potential of digitalization and cautions against its threats. Thus, my Administration has made digitalization one of its priorities within its flagship strategy, the Reset and Reclaim Agenda. I assure you of the Republic of Botswana Government’s commitment to continue to be open and to amplify our voice on issues of digitalization. It is also my fervent hope that the global aspirations outlined in the Global Digital Compact will close gaps, create inclusivity, and promote access. Let me conclude by once again extending my sincere appreciation to the Secretary-General and all other key stakeholders for a productive session, as we all look towards the Summit of the Future tomorrow. Thank you.
Redi Thlabi: Thank you very much. And now, for the final segment of our closing, it is a pleasure to welcome Amandeep Singh Gill, the UN Secretary-General’s Envoy on Technology. If you could join us on stage, please. We heard from you earlier this morning, Achim Steiner, Administrator of UNDP; if you could also come up at the same time, thank you. Thank you. Doreen Bogdan-Martin, Secretary-General of the ITU, if you could also kindly come on stage, please. I’ll pick on you first, Amandeep, to speak, okay? Thank you.
Amandeep Singh Gill: And thank you to all of you for being here with us at this moment, this very important moment. And I want to thank my partners in this endeavour, Doreen and Achim, and their teams, for the incredible work that we’ve been able to do together. I have only three points to share with you as reflections from the day. First, the importance of connection. As we heard in the video, it’s not about connecting the circuits, it’s about connecting the people. So it’s the connections across people, people from different geographies, different backgrounds, different sectors, different lived experiences. We can only get the digital future right if we connect people. The second point that I take away from the day is the importance of not retreating into silos. Everything is connected. We can’t deal with AI without dealing with data. We can’t deal with either without dealing with digital public infrastructure and connectivity and so on. So we need to take a holistic view. And the last point I want to share is the importance of humility. I think we need to listen more than we speak. All of us who are in the policy space need to be very, very humble about what our understanding of technology is and what its implications are. We need to work together. We need to constantly update ourselves and hang out with the right people, so that we can bring their valuable insights into our policy work and improve the quality of our policy responses. So thank you very much. It’s a very exciting moment. It’s a very sobering moment at the same time. There’s a lot of work ahead. But with you, we can get there. Thank you.
Redi Thlabi: Thank you so much. I think you can speak at the podium or on your microphone. It’s up to you.
Achim Steiner: I’ll just use the microphone. And thank you, I will not use the teleprompter, because it’s really just two things that I want to say. One is a really big thank you. You and we and all of us in the UN today had a treat. We listened to presidents, to CEOs, to young entrepreneurs, to artists, to people who, with the help of science, engineering and technology, are able to walk again. We’ve had an extraordinary day. And I hope that what you take away from this SDG Digital Day, and from this prospect of AI that is still somewhat unknown to all of us, even though we know it is going to be central to our lives as we think into the future, is this age of possibility. There is so much in the world right now that makes everyone feel like they live under a cloud, and sometimes you lose perspective. Today I hope you all got a sense of what an extraordinary age we live in and, if we make the right choices, what an extraordinary age it can be for the next generation and for everyone. In that spirit I want to thank Amandeep, I want to thank Doreen, our staff who have been working for weeks on all of this, and everybody else who supported this day by turning it into something that I hope the United Nations will always be known for: even in the darkest days there is hope, and it will be realized, and it will be led by people. Thank you so much.
Doreen Bogdan-Martin: Thank you, thank you Achim, and thank you Amandeep. Indeed, it has been an extraordinary, extraordinary day. Sustainable, inclusive, responsible: three concepts at the heart of our digital track during the Summit of the Future Action Days. And I would like to add to that hope, because nothing gives me more hope for our shared digital future than all of you. Our brilliant innovators, our Partner2Connect pledgers, our digital game changers: you showed us technology can be co-created with the people it’s built for, involving them directly as decision makers in design. You showed us how to make digital work with the lived realities of people in developing countries and underserved and vulnerable populations. You showed us how emerging tech, from augmented reality to AI, can help boost our planet’s resilience while supporting climate action. You showed us how digital skill building can lead to decent work and economic prosperity in the unlikeliest of places, against all odds. You showed us what peace tech can do to rescue the SDGs. You even showed us how much it will take, literally, what investment is needed to connect everyone everywhere by 2030 through the Connecting Humanity Action Blueprint mentioned by Saudi Arabia. And you showed us your commitment to do what it takes through new Partner2Connect pledges, and I thank you for those new pledges. Ladies and gentlemen, we are the SDG generation. A digital future full of hope, possibility, and ambition is in our hands. And I want to thank each and every one of you, because today you gave us a glimpse of what is possible. We may have come to the end of our first Digital Action Day, our second SDG Digital, but the action certainly does not stop here. It can’t, because too much is at stake. Fired up by hope, let’s take everything that we’ve learned today, let’s go out there, and let us build a more sustainable, inclusive, and responsible digital future for all. And let’s build it together. Thank you. Ladies and gentlemen, as we wrap up, and as Achim already mentioned, it’s important to understand this really was a team effort. I also want to acknowledge all of the staff, and if I may, can I ask the staff to just stand up? Because this wouldn’t have happened without our amazing teams. I know it’s dark in the room. Thank you.
Redi Thlabi: Thank you very much. Now that’s leadership, because we often say we leave no one behind, but we forget the people doing the groundwork, who perhaps don’t have the opportunity to shine on the global stage. So I find that very inspirational indeed. Thank you. Ladies and gentlemen, let me thank all of you for being here today. It’s been a long day, and I’ve got nothing to add to all the challenging, inspiring messages that we’ve heard as we journey together towards a digital future for all. For all. The last thing I’m going to tell you is that the online platform where you can make your inputs will be up tomorrow, after world leaders have adopted the Global Digital Compact. Please speak honestly, share what you know, what you think, what you’ve experienced, and take the learnings from today as you make your input. We look forward to them. Thank you so very much for today. Goodbye.
Carme Artigas
Speech speed: 172 words per minute | Speech length: 963 words | Speech time: 335 seconds
Unique UN position to lead global AI governance
Explanation
The UN is uniquely positioned to lead global AI governance due to its mandate, reach, and legitimacy. It can bring all nations and stakeholders to the table, building on its historical success in governing international issues.
Evidence
Examples of UN’s past success in governing climate change and arms control
Major Discussion Point
The importance and role of the Global Digital Compact (GDC)
Agreed with
Omar Al Olama
James Manyika
Tumi Makgabo
Volker Turk
Agreed on
Importance of the Global Digital Compact (GDC)
Balancing innovation and risk mitigation in AI governance
Explanation
AI governance should focus on both opportunities and risks, not just existential risks. There is a need to balance innovation with risk mitigation, considering the different perceptions of risks across the Global North and South.
Evidence
Risk analysis survey showing differences in risk perception between the Global North and South
Major Discussion Point
Governance and regulation of AI
Disagreed with
James Manyika
Disagreed on
Focus on risks vs opportunities in AI governance
Omar Al Olama
Speech speed: 191 words per minute | Speech length: 254 words | Speech time: 79 seconds
GDC as starting point for future action on AI
Explanation
The Global Digital Compact is seen as a great starting point for future action on AI. It provides a framework for cooperation and action on AI governance.
Evidence
UAE’s commitment to be part of the roadmap put forward by the UN
Major Discussion Point
The importance and role of the Global Digital Compact (GDC)
Agreed with
Carme Artigas
James Manyika
Tumi Makgabo
Volker Turk
Agreed on
Importance of the Global Digital Compact (GDC)
James Manyika
Speech speed: 181 words per minute | Speech length: 1479 words | Speech time: 489 seconds
Need for multi-stakeholder approach in AI governance
Explanation
AI governance requires a multi-stakeholder approach due to the diverse nature of opportunities, risks, and inclusivity challenges. This approach involves companies, researchers, NGOs, governments, and civil society.
Evidence
Composition of the UN advisory body representing diverse stakeholders
Major Discussion Point
The importance and role of the Global Digital Compact (GDC)
Agreed with
Carme Artigas
Omar Al Olama
Tumi Makgabo
Volker Turk
Agreed on
Importance of the Global Digital Compact (GDC)
Need to bridge digital divide to prevent AI divide
Explanation
There is an urgent need to bridge the digital divide to prevent it from becoming an AI divide. This requires providing access to AI technology and building capacity, especially in the Global South.
Evidence
Recommendation for a capacity fund or network to bring AI access to the Global South
Major Discussion Point
Opportunities and challenges of AI for development
Agreed with
Tumi Makgabo
Sundar Pichai
Agreed on
Addressing the digital divide to prevent an AI divide
Role of private sector in responsible AI development
Explanation
The private sector has a significant responsibility in AI development, including conducting fundamental research, developing technology responsibly, and engaging with governments and other stakeholders. They also have a duty to be transparent and build public trust.
Evidence
Examples of private sector research labs leading AI development
Major Discussion Point
Governance and regulation of AI
Need for real-time scientific panel on AI developments
Explanation
There is a need for a scientific panel that can provide real-time insights on AI developments, both in terms of benefits and risks. This panel should work differently from existing models like the IPCC, given the rapid pace of AI advancements.
Evidence
Comparison with IPCC’s seven-year reporting cycle, which is too slow for AI
Major Discussion Point
Governance and regulation of AI
Addressing both risks and missed opportunities of AI
Explanation
AI governance should address not only the risks but also the missed opportunities, especially for the Global South. There is a need to focus on enabling infrastructure and capacity building to ensure inclusive participation in AI development and benefits.
Evidence
Inclusion of ‘missed uses’ in the advisory body’s risk discussions
Major Discussion Point
Ensuring AI benefits humanity
Agreed with
Sundar Pichai
Felix Mutati
Agreed on
AI’s potential to accelerate progress on Sustainable Development Goals
Disagreed with
Carme Artigas
Disagreed on
Focus on risks vs opportunities in AI governance
Tumi Makgabo
Speech speed: 166 words per minute | Speech length: 2102 words | Speech time: 757 seconds
GDC addresses digital divides and inclusive governance
Explanation
The Global Digital Compact aims to address existing digital divides and promote more inclusive digital governance. It recognizes the need for a more equitable digital future.
Major Discussion Point
The importance and role of the Global Digital Compact (GDC)
Agreed with
James Manyika
Sundar Pichai
Agreed on
Addressing the digital divide to prevent an AI divide
Volker Turk
Speech speed: 162 words per minute | Speech length: 854 words | Speech time: 315 seconds
GDC builds on existing human rights frameworks
Explanation
The Global Digital Compact builds on existing human rights frameworks, which provide a universal and dynamic foundation for addressing AI governance. This approach ensures that human rights considerations are central to AI development and deployment.
Evidence
Reference to the Universal Declaration of Human Rights and its continued relevance
Major Discussion Point
The importance and role of the Global Digital Compact (GDC)
Agreed with
Carme Artigas
Omar Al Olama
James Manyika
Tumi Makgabo
Agreed on
Importance of the Global Digital Compact (GDC)
Focusing on AI use cases that benefit humanity
Explanation
There is a need to focus on AI use cases that benefit humanity and contribute to the common good. This involves filling the concept of ‘public good’ with content that aligns with human rights principles.
Evidence
Mention of startups focusing on projects for the common good
Major Discussion Point
Ensuring AI benefits humanity
Sundar Pichai
Speech speed: 136 words per minute | Speech length: 1405 words | Speech time: 618 seconds
AI can accelerate progress on Sustainable Development Goals
Explanation
AI has the potential to accelerate progress on the UN Sustainable Development Goals. It can be applied to benefit humanity in various areas such as health, education, and climate action.
Evidence
Examples of AI applications in language translation, scientific discovery, and disaster prediction
Major Discussion Point
Opportunities and challenges of AI for development
Agreed with
James Manyika
Felix Mutati
Agreed on
AI’s potential to accelerate progress on Sustainable Development Goals
AI enables economic progress and entrepreneurship
Explanation
AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across sectors and create new opportunities for businesses.
Evidence
Example of Garri Logistics in Ethiopia using AI to improve operations and create job opportunities
Major Discussion Point
Opportunities and challenges of AI for development
Agreed with
James Manyika
Tumi Makgabo
Agreed on
Addressing the digital divide to prevent an AI divide
Josephine Teo
Speech speed: 141 words per minute | Speech length: 795 words | Speech time: 338 seconds
Importance of building AI capacity in developing countries
Explanation
There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advancements. This involves working with employers, providing individual learning support, and building training infrastructure.
Evidence
Singapore’s approach to enabling workers to acquire relevant skills for the future
Major Discussion Point
Opportunities and challenges of AI for development
Felix Mutati
Speech speed: 98 words per minute | Speech length: 411 words | Speech time: 251 seconds
Potential of AI to transform lives in rural areas
Explanation
AI and digital technologies have the potential to transform lives in rural areas by providing access to information and services. This can lead to improved farming methods and economic opportunities.
Evidence
Example of a young farmer in rural Zambia using a mobile phone and internet to access weather forecasts and market prices
Major Discussion Point
Opportunities and challenges of AI for development
Agreed with
Sundar Pichai
James Manyika
Agreed on
AI’s potential to accelerate progress on Sustainable Development Goals
Margrethe Vestager
Speech speed: 137 words per minute | Speech length: 792 words | Speech time: 345 seconds
Need for global cooperation on AI governance
Explanation
There is a need for global cooperation on AI governance to address challenges that individual countries cannot solve alone. The Global Digital Compact provides a framework for such cooperation.
Major Discussion Point
Governance and regulation of AI
Importance of enforceable AI regulation
Explanation
Enforceable AI regulation is crucial to create a systemic response to the challenges posed by AI. This includes legislation to keep markets open, ensure digital services are safe, and protect privacy.
Evidence
Examples of EU legislation like the Digital Markets Act and Digital Services Act
Major Discussion Point
Governance and regulation of AI
Alondra Nelson
Speech speed: 207 words per minute | Speech length: 1177 words | Speech time: 340 seconds
Centering human rights in AI development
Explanation
Human rights should be at the center of AI development and governance. This involves anchoring AI governance in fundamental human rights principles and international law.
Major Discussion Point
Ensuring AI benefits humanity
Need for sustainable and ethical AI development practices
Explanation
There is a need for more sustainable and ethical AI development practices. This includes addressing issues of climate sustainability, labor practices in data training, and the extraction of critical minerals.
Evidence
Mention of concerns about climate impact, labor exploitation, and resource extraction in AI development
Major Discussion Point
Ensuring AI benefits humanity
Vilas Dhar
Speech speed: 217 words per minute | Speech length: 859 words | Speech time: 236 seconds
Importance of community engagement in AI development
Explanation
Community engagement is crucial in AI development to ensure that AI solutions meet the needs of the people they are intended to serve. This involves working with communities to understand their needs and involving them in decision-making processes.
Evidence
Proposal for a global fund to support community-defined digital agency
Major Discussion Point
Ensuring AI benefits humanity
Agreements
Agreement Points
Importance of the Global Digital Compact (GDC)
Carme Artigas
Omar Al Olama
James Manyika
Tumi Makgabo
Volker Turk
Unique UN position to lead global AI governance
GDC as starting point for future action on AI
Need for multi-stakeholder approach in AI governance
GDC addresses digital divides and inclusive governance
GDC builds on existing human rights frameworks
Speakers agreed on the critical role of the Global Digital Compact in addressing AI governance, digital divides, and promoting inclusive development while building on existing frameworks.
Addressing the digital divide to prevent an AI divide
James Manyika
Tumi Makgabo
Sundar Pichai
Need to bridge digital divide to prevent AI divide
GDC addresses digital divides and inclusive governance
AI enables economic progress and entrepreneurship
Speakers emphasized the importance of bridging the digital divide to ensure equitable access to AI technologies and prevent further inequalities.
AI’s potential to accelerate progress on Sustainable Development Goals
Sundar Pichai
James Manyika
Felix Mutati
AI can accelerate progress on Sustainable Development Goals
Addressing both risks and missed opportunities of AI
Potential of AI to transform lives in rural areas
Speakers highlighted AI’s potential to contribute to sustainable development and improve lives, particularly in developing regions.
Similar Viewpoints
Both speakers emphasized the need for a balanced approach to AI governance that promotes innovation while mitigating risks through enforceable regulations.
Carme Artigas
Margrethe Vestager
Balancing innovation and risk mitigation in AI governance
Importance of enforceable AI regulation
Both speakers stressed the importance of grounding AI governance and development in existing human rights frameworks.
Volker Turk
Alondra Nelson
GDC builds on existing human rights frameworks
Centering human rights in AI development
Unexpected Consensus
Multi-stakeholder approach to AI governance
Carme Artigas
James Manyika
Vilas Dhar
Unique UN position to lead global AI governance
Need for multi-stakeholder approach in AI governance
Importance of community engagement in AI development
Despite representing different sectors (government, private sector, and civil society), these speakers unexpectedly agreed on the necessity of a multi-stakeholder approach to AI governance, emphasizing the importance of inclusive participation from various sectors and communities.
Overall Assessment
Summary
The main areas of agreement included the importance of the Global Digital Compact, the need to address digital divides, AI’s potential for sustainable development, the necessity of human rights-based approaches, and the importance of multi-stakeholder governance.
Consensus level
There was a high level of consensus among speakers on fundamental principles and goals for AI governance. This consensus suggests a strong foundation for global cooperation on AI development and regulation, which could facilitate more rapid progress in implementing the Global Digital Compact and related initiatives. However, the specific mechanisms for implementation and balancing various interests may still require further negotiation and refinement.
Disagreements
Disagreement Points
Focus on risks vs opportunities in AI governance
Carme Artigas
James Manyika
Balancing innovation and risk mitigation in AI governance
Addressing both risks and missed opportunities of AI
Both speakers acknowledge the need to address risks. Carme Artigas emphasizes the importance of not overlooking opportunities, especially for the Global South, while James Manyika stresses addressing both risks and missed opportunities in equal measure.
Overall Assessment
Summary
The main areas of disagreement revolve around the balance between focusing on risks versus opportunities in AI governance, and the specific approaches to ensuring sustainable and ethical AI development.
Disagreement level
The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental principles and goals of AI governance, with differences mainly in emphasis and specific implementation strategies. This suggests a generally unified vision for the Global Digital Compact, which bodes well for its potential implementation and effectiveness.
Partial Agreements
Both speakers agree on the need for ongoing research and monitoring of AI developments, but James Manyika focuses on the speed and real-time nature of the panel, while Alondra Nelson emphasizes the importance of sustainability and ethical considerations in AI development.
James Manyika
Alondra Nelson
Need for real-time scientific panel on AI developments
Need for sustainable and ethical AI development practices
Both speakers recognize the potential of AI for development, but while Sundar Pichai focuses on the positive impacts, Alondra Nelson emphasizes the need to address sustainability and ethical concerns in AI development.
Sundar Pichai
Alondra Nelson
AI can accelerate progress on Sustainable Development Goals
Need for sustainable and ethical AI development practices
Takeaways
Key Takeaways
The Global Digital Compact (GDC) is seen as a crucial starting point for global AI governance and cooperation
AI has significant potential to accelerate progress on Sustainable Development Goals and enable economic development
There is a need for inclusive, multi-stakeholder governance of AI that involves developing countries
Balancing innovation with risk mitigation is key in AI governance and regulation
Centering human rights and community engagement in AI development is essential
Building AI capacity and infrastructure in developing countries is critical to prevent an AI divide
Resolutions and Action Items
Launch of a Global AI Opportunity Fund by Google to invest $120 million in AI education and training globally
Proposal to establish a global fund on AI for sustainable development
Recommendation to create an international scientific panel on AI
Plan to make an online platform available for public input on the Global Digital Compact after its adoption
Unresolved Issues
Specific mechanisms for enforcing AI governance globally
Details on implementation of the proposed global fund on AI
How to effectively balance AI development with sustainability and climate concerns
Concrete steps to ensure AI benefits reach marginalized communities
Suggested Compromises
Using existing UN frameworks and agencies to implement AI governance rather than creating new institutions immediately
Focusing on both risks and opportunities of AI to address concerns of developed and developing nations
Balancing regulation with market incentives to encourage ethical AI development by companies
Thought Provoking Comments
We too often equate governance with control. And it’s part of a conversation that’s much bigger. I think we have followed a narrative that technology companies innovate and governments regulate and somehow in that the rest of us go along. But that’s not the point of governance, right? Governance is to set a shared vision for humanity, is to think about all of the resources we can bring to bear to make shared decisions that put agency with communities, that allow voices to participate and to come forward.
Speaker
Vilas Dhar
Reason
This comment reframes the concept of governance in a more inclusive and participatory way, challenging the typical narrative of top-down control.
Impact
It shifted the conversation towards considering governance as a collaborative process involving multiple stakeholders, not just governments and tech companies. This perspective was echoed by other panelists throughout the discussion.
We don’t think about capacity building as finding a few critical enablers and saying let’s invest in compute. Or let’s just make sure there are data sources. Instead, we think about a holistic network that says let’s actually look with communities at what their needs are and think about a mechanism by which we say there is massive resources across the system.
Speaker
Vilas Dhar
Reason
This comment provides a nuanced view of capacity building, emphasizing the importance of community needs and holistic approaches.
Impact
It deepened the discussion on implementation strategies, moving beyond technical solutions to consider social and community contexts.
We need to recognize that the digital divide emanates from disparities between the developed and developing countries. Technology has the potential to advance the promotion and acceleration of closing the gap in opportunities between genders and, consequently, can lead to the attainment of gender parity goals.
Speaker
Mokgweetsi Masisi
Reason
This comment highlights the interconnection between digital divides, global inequality, and gender disparities.
Impact
It broadened the scope of the discussion to include considerations of global equity and gender equality in digital development.
We don’t know enough. So I would also associate myself with Dr. Jian, and that we don’t know the science. I mean, if we think back about the high watermark of the COVID-19 pandemic, and there were lots of preprints and lots of papers, and I think in that context, perhaps it was okay to say, you know, we’re going to figure out the science as we’re, you know, we’re going to build a plane while we’re flying it. We actually don’t know enough about these systems and tools and models.
Speaker
Alondra Nelson
Reason
This comment acknowledges the limitations of current knowledge about AI systems and draws a parallel to the rapid scientific developments during the COVID-19 pandemic.
Impact
It introduced a note of caution and humility into the discussion, emphasizing the need for ongoing research and scientific understanding alongside policy development.
Connect the schools. Connect the young people. Connect my children.
Speaker
Nnenna Nwakanma
Reason
This simple yet powerful statement cuts through complex policy discussions to highlight a fundamental priority.
Impact
It refocused the conversation on the practical, human-centered outcomes of digital development, particularly for young people and education.
Overall Assessment
These key comments shaped the discussion by broadening its scope beyond technical and policy considerations to include community needs, global equity, scientific understanding, and practical human outcomes. They challenged conventional narratives about governance and implementation, emphasizing the importance of inclusive, participatory approaches and acknowledging the complexities and unknowns in the field of AI. The discussion evolved from high-level policy talk to considering concrete actions and their impacts on diverse communities, particularly in the Global South.
Follow-up Questions
How can we ensure AI benefits are distributed equitably and the digital divide does not become an AI divide?
Speaker
James Manyika
Explanation
This is critical to ensure AI does not exacerbate existing inequalities between developed and developing countries.
How can we build AI models and data centers more sustainably to address climate and environmental concerns?
Speaker
Alondra Nelson
Explanation
This is important to ensure AI development does not conflict with climate goals and sustainability efforts.
How can we create a real-time scientific panel to study and report on AI developments and impacts?
Speaker
James Manyika
Explanation
A rapid, ongoing research effort is needed to keep up with the fast pace of AI advancement and inform governance efforts.
How can we implement capacity building and create a global fund to support AI development in the Global South?
Speaker
James Manyika and Vilas Dhar
Explanation
This is crucial to enable developing countries to participate in and benefit from AI advancements.
How can we better involve impacted communities in shaping AI governance and development?
Speaker
Alondra Nelson
Explanation
Ensuring diverse voices are included is essential for creating AI systems that work for all of humanity.
How can we create a shared global AI infrastructure to enable more inclusive research and development?
Speaker
Jian Wang
Explanation
This could help democratize AI development and reduce concentration of power in a few countries or companies.
How can we balance discussions of AI risks with equal focus on opportunities, especially for the Global South?
Speaker
Carme Artigas
Explanation
A balanced approach is needed to fully realize AI’s potential while mitigating risks.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.