Welcome address

29 May 2024 14:00h - 14:10h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

ITU Secretary-General Advocates for Inclusive AI Governance at AI for Good Global Summit

At the AI for Good Global Summit’s AI Governance Day, Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU), delivered the welcome address, introduced by Robert Trager of the University of Oxford’s AI Governance Initiative. Underlining the summit’s commitment to inclusivity, Trager noted that interpretation was available to participants in all UN languages.

Bogdan-Martin’s address focused on the crucial role of AI governance in harnessing the technology’s potential to benefit society and advance the Sustainable Development Goals (SDGs). She highlighted the morning sessions in which AI experts and government leaders discussed the evolving AI landscape, the implementation of governance frameworks, and the importance of inclusion and trust within those frameworks. The discussions, which drew on active contributions from developing countries, also challenged the notion that governments lack initiative when it comes to tech regulation.

The Secretary-General reflected on ITU’s seven-year effort to harness AI for good and its role in convening the UN system around AI, including co-leading an interagency coordination mechanism with UNESCO since 2021 and building the AI for Good platform, a multi-stakeholder community of 28,000 people from over 180 countries. Bogdan-Martin stressed that the focus has now sharpened on governance, driven by the risks associated with AI, which are a global concern.

Drawing a parallel with the early days of the internet, Bogdan-Martin noted that while the internet’s full potential and its governance are still being worked out, AI introduces new governance challenges and opportunities. She cited the Internet Governance Forum and the WSIS Forum, both born out of the World Summit on the Information Society, as evidence that governance can progress even while emerging technologies are still being understood.

Bogdan-Martin identified three essential components for AI governance efforts:

1. Technical standards development: Bogdan-Martin highlighted ITU’s more than 200 AI-related standards, developed or in development, and its collaboration with IEC and ISO through the World Standards Cooperation to make AI systems more transparent, explainable, reliable, and secure. These standards provide market certainty and ease innovation for large and small players alike, including in developing countries.

2. Human rights and inclusion: AI governance must place human rights, inclusion, and core UN values at its heart. Bogdan-Martin expressed concern over the concentration of AI power in too few hands and its ethical implications, and advocated for policies that reflect diverse perspectives, including on gender, and account for the needs of all countries.

3. Inclusive development through capacity building: Bogdan-Martin emphasised the importance of upskilling workforces worldwide to address AI’s challenges and risks, with ITU initiatives, delivered with UN partners such as UNDP, supporting countries with low technological capabilities wherever they are on their AI journey.

Bogdan-Martin also shared that an ITU AI readiness survey of its 193 member states found that 85% have no AI regulations or policies in place, underscoring the urgency of governance discussions. She called for an iterative, multi-stakeholder governance process and the transformation of governance principles into practical implementation.

In conclusion, the Secretary-General urged active participation in AI governance activities at the ITU, framing governance as an enabler of AI for good. She called on the community to responsibly and equitably harness the power of AI, ensuring that governance efforts are collective and inclusive.

The address was a call to action for a concerted global effort on AI governance, with a clear message that good governance starts with listening, exchanging ideas, and building on areas of convergence. It closed with a rallying cry for all stakeholders to engage in shaping the future of AI governance and to use AI as a force for good in the world.

Session transcript

Robert Trager:
Hello everyone and welcome to the AI Governance Day, AI for Good Global Summit. I’m Robert Trager of the AI Governance Initiative at the University of Oxford. Something to note is that we have interpretation in all of the languages of the UN. You can see the channels right up there, so feel free to avail yourselves of those with your headsets. Without further ado, I can now have the pleasure of handing over to Mrs. Doreen Bogdan-Martin, Secretary-General of the ITU for her welcome address. Ms. Bogdan-Martin.

Doreen Bogdan-Martin:
Good afternoon everyone. Welcome to day zero of the AI for Good Global Summit. Our eagerly anticipated governance day is off to a running start. We’ve already put our AI experts, our government leaders to work this morning early. We’ve spent the entire morning exchanging ideas on three critical topics. We’ve been surveying the AI landscape, understanding how it might evolve. We’ve been looking at how to implement AI governance frameworks, and perhaps most importantly, we’ve been discussing how we can ensure inclusion and trust as we implement those frameworks. So this morning, we heard about various governance efforts, the areas that they have in common, as well as some of their differences. I think crucially, we learned from developing countries because we want to ensure that they are not left out of the process. All this challenges the argument that governments lack initiative when it comes to tech regulation. In just a few moments, you’re actually going to be hearing from some of our amazing roundtable participants who will be sharing the outcomes of their work. But first, let me tell you why we’re doing this.

So why are we here today? What is AI Governance Day all about, and why are we at the ITU going to keep doing it? So ITU, as many of you know, is the UN agency for digital technologies, and we have been working to harness AI for good for the past seven years. We’ve been convening the UN system around AI, and we’ve been co-leading an interagency coordination mechanism, as we call it, with UNESCO since 2021. Through our AI for good platform, which is a multi-stakeholder community of 28,000 people from over 180 countries, our focus has been putting artificial intelligence at the service of the sustainable development goals. That’s been our compass. What’s new is this much sharper, stronger focus on governance. Because it’s not the benefits, it’s the risks of artificial intelligence that keep us all awake at night.

So much has been said about AI governance in the media, in academic circles, from startups to tech giants, from local governments all the way to the United Nations, which recently adopted a historic resolution that recognized AI’s potential to advance the SDGs. But ladies and gentlemen, at the heart of all of this is a conundrum. How do we govern a technology? How do we govern technologies if we don’t yet know their full potential? There is no one answer to that question.

But what we do know is that we have been there before. It was 20 years ago. The Internet was met with a sort of similar mix of shock, awe, skepticism. It raised the same questions about how our economies, our societies, our environment would transform for better and for worse. And we’re still grappling with those questions two decades later. In fact, we still don’t know the full potential of the Internet because a third of humanity has actually never, ever connected. But before we could even realize the potential, generative AI came along. And yet, even with the convergence of these world-changing, interdependent technologies, governance efforts have emerged. They may not be perfect, but we’re not starting from scratch. The Internet Governance Forum, the WSIS Forum, were born out of the World Summit on the Information Society. And some of you, like me, were there when this all happened 20 years ago. And I remember how then, as now, we didn’t even have the vocabulary to describe what we were dealing with. But that didn’t stop us. It didn’t stop us from moving forward.

And what we’ve learned from the WSIS process is that we actually can take steps towards governance even if we’re building the plane as we fly it. We can come together as a community. We can share experiences, practices, lessons learned, barriers, challenges, knowing that, once more, there is no one size that fits all when it comes to balancing the benefits and reconciling regulatory risks. Knowing, yet again, that we must look at governance from many different angles and knowing that the only way forward is through a multi-stakeholder approach. And that’s why I’m so glad that today, gathered in this room, we actually have our WSIS community with us. So welcome to the WSIS community. We hope that you will help us, guide us, through these many complex questions and challenges.

And again, after listening very closely to this morning’s discussions, I think there are sort of three key pieces that I believe must be part of any AI governance effort. And I would say the first piece, and obviously this is very relevant to the ITU, is the technical standards development. As we heard this morning, those working on AI governance, they already recognize how technical standards can help implement effective guardrails and help to support interoperability. And as I said, this is where ITU has such a key role to play. As an international standards development organization, we actually already have over 200 AI-related standards that we’ve either developed or we’re in the process of developing. As part of the World Standards Cooperation, which is a high-level collaboration between IEC, who’s in the room, ISO, who’s also in the room, and the ITU, we’re helping to advance the development of global standards that can make AI systems more transparent, make them more explainable, more reliable, and of course, more secure. And this provides certainty in the market and eases innovation for both large and small industry players everywhere, including in developing countries.

The second element is putting human rights, inclusion, and other core UN values at the heart of AI governance. All stakeholders deserve a voice in shaping AI’s present and future. But who can afford the compute resources that go into producing AI applications? Who is on the teams that design the foundational models? Right now, the power of AI is concentrated in the hands of too few. It is risky and it is ethically precarious to be in this kind of position for humanity. So ladies and gentlemen, we must work towards an inclusive environment where diverse perspectives, including those on gender, that was a key element raised this morning, including those on gender, are reflected in the policies that ring true to UN values. International AI governance efforts must account for the needs of all countries. And that’s why the United Nations, together with governments, with companies, with academics, with civil society, with the technical community, must play a key role in ensuring that power is distributed equitably. And this is not going to happen automatically.

And that brings me to my third element, inclusive development through capacity building. ITU has a long history of bringing the voices of the global south to the emerging technology table. And part of this means making sure that every, every workforce in the world can deal with the challenges and the risks being brought about by artificial intelligence. And that’s why we’ve been integrating AI capacity support in our digital transformation offerings. We’ll continue to roll out those initiatives with many of our UN partners in the room, including with UNDP, where we’re focused on countries that have low technological capabilities. And we want to make sure that we help upskill them, no matter where they are in their AI journey.

Ladies and gentlemen, governance is not, it’s not a given. An AI readiness survey that the ITU recently conducted amongst the 193 member states demonstrated that a majority, actually 85% of our member states, don’t actually have any AI regulations or policies in place. But today, some might at least start thinking about the policy elements, about what to do next. And I think that makes the work that we’re going to do today and beyond absolutely fundamental and essential. All good governance starts with listening, listening to experts, exchanging ideas, experiences with peers, identifying gaps, and building on potential areas of convergence. And governance is never a sort of one and done. It’s actually an iterative, sometimes frustratingly slow, but ultimately necessary multi-stakeholder process. Taking stock of the landscape and facilitating deep discussions, as we did this morning, is actually the first step in transforming principles into practical implementation.

And implementation, ladies and gentlemen, is what today is all about. I know everyone in this room actually has a stake in seeing AI used as a force of good, as a force for good in this world. And as we heard from the UN Secretary-General’s high-level advisory body, they joined us remotely this morning, we need to take bold decisions. And we need to view governance not as an inhibitor, but as an enabler, an enabler for AI for good. And that’s why today I’m calling on all of you to get involved, take an action, participate actively in the AI governance activities that are happening here, now, here at the ITU. Let’s harness the power of this AI community to govern AI with and for the world. Let’s show them what it looks like. Let’s show them how it’s done. And let’s show them together. Thank you very much.

Doreen Bogdan-Martin

Speech speed: 133 words per minute
Speech length: 1703 words
Speech time: 768 secs

Robert Trager

Speech speed: 139 words per minute
Speech length: 101 words
Speech time: 44 secs