UN Human Rights Council: High level discussion on AI and human rights

3 Sep 2024, 10:00h - 12:00h

The President of the UN Human Rights Council, Ambassador Omar Zniber, convened a high-level Informal Presidential Discussion on new technologies, artificial intelligence, and the digital divide.

Below, you can consult the DiploAI assistant on AI and human rights, as well as the transcript and report from the session at the UN Human Rights Council.


The AI assistant was developed by DiploAI using the full session report and the session transcript below.

Full session report

UN Human Rights Council debates AI’s impact on human rights and global order

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

The high-level informal presidential discussion at the UN Human Rights Council focused on the intersection of artificial intelligence (AI), emerging technologies, and human rights. The session underscored the transformative potential of AI and its implications for various sectors, including the global order, warfare, and the workplace. The discourse highlighted the need for a balanced approach to harnessing AI’s benefits while safeguarding human rights and addressing ethical concerns.

Key points and arguments from the discussion included:

1. AI’s transformative potential can revolutionize sectors such as health and education, contributing to the achievement of sustainable development goals. However, without proper frameworks, AI could exacerbate discrimination, privacy violations, and inequalities, deepening the digital divide.

2. There is an urgent need for clearer guidelines on applying human rights standards in the digital age, involving the Human Rights Council, treaty bodies, civil society, academia, and businesses.

3. The development of AI must be rooted in human rights to prevent the technology from undermining these rights and worsening global inequalities.

4. The disparity in the benefits of AI between developed and developing countries must be addressed to ensure that technological advancements do not exacerbate existing inequalities.

Evidence and observations presented included:

– The rapid deployment of AI systems, even in comparison to other recent technologies, necessitates the establishment of regulatory guardrails that do not stifle innovation.

– The affordability and simplicity of AI technology mean that it can be embraced and developed by organizations and countries with limited resources.

– The digital space is emerging as a new dimension in geopolitics, with digital power reshaping the global order and facilitating a worldwide process with far-reaching institutional consequences.

The conclusion drawn from the discussion emphasized the need for proactive and constructive engagement with the human rights framework for AI. This includes revisiting core human rights in the context of AI, such as freedom of opinion and expression, and considering the Sustainable Development Goals (SDGs) as guardrails for AI development.

Noteworthy insights included the proposition to develop bottom-up AI, which is technically feasible, ethically desirable, and financially viable. This approach would ensure that knowledge remains a central asset defining humanity, shared with but not centralized in a few hands. Additionally, it was suggested that even organizations with limited resources can effectively utilise AI, and the UN is the appropriate platform to foster a multi-stakeholder approach to address the challenges posed by AI and emerging technologies.

The session called for a multi-disciplinary approach combining international law, strategic studies, intelligence, technology, computer science, and economics to address the transformative nature of AI and its impact on warfare and international relations. The need for new modes of thinking was highlighted, with the UN in Geneva identified as the right place to bring together stakeholders for this purpose.

Session transcript

Unofficial transcript of the discussion provided by DiploFoundation

President of the UN Human Rights Council

Your Excellencies, Distinguished Participants, today’s event marks a significant moment in our efforts to confront one of the most pressing challenges of our time: how to harness the transformative potential of artificial intelligence and emerging technologies while safeguarding human rights. Here at the Human Rights Council, the UN Secretary-General’s urgent call for a global approach to AI rooted in human rights is clear. You may remember the declaration he presented before the Council while he was here in February.

Indeed, in his Roadmap for Digital Cooperation, Secretary-General Guterres stressed the urgency of developing clearer guidelines on how to apply human rights standards in the digital age. This task involves the Human Rights Council, special procedures, treaty bodies, the Office of the High Commissioner, and a wide range of stakeholders, including civil society, international organizations, academia, and business.

Furthermore, Mr. Guterres’s Call to Action for Human Rights highlights the need to ensure that artificial intelligence advancements do not undermine human rights, worsen inequalities, or reinforce existing divides.

Allow me here to extend my deepest gratitude to our esteemed co-facilitators, whom I appointed at the beginning of my mandate last January, for their excellent work achieved so far and their thoughtful recommendations. It is important that we assess and consider these recommendations, which have, of course, also been shared with you in the report of 16 January.

It is important that we assess these recommendations and their effectiveness in a constructive manner to advance the shared goals of our partnership. I would also like to underline in these introductory remarks the valuable work of the Office of the High Commissioner in this regard. The recent mapping exercise confirms that key human rights principles, such as equality and non-discrimination, participation, accountability, legality, legitimacy, necessity and proportionality, and inclusion, all remain highly relevant in the digital realm, in particular with regard to economic, social, and cultural rights.

Artificial intelligence and other digital innovations have the power to transform society. They can revolutionize health and education and contribute to achieving the Sustainable Development Goals. But they also come with risks. Without the right frameworks, AI can be misused, exacerbating discrimination, violating privacy, displacing jobs, and creating new forms of inequality, including deepening the digital divide.

This is why I am convinced that this Human Rights Council process is not only timely, but fundamental and essential. Its rationale is to examine how we, as the Human Rights Council, can play a wider role in shaping a future where technological progress is aligned with our core human rights values, thus joining ongoing initiatives, such as the work of the ITU and the UNESCO Recommendation on the Ethics of AI, which represent important and continuing work on this major subject, particularly for innovation and, of course, the question of intellectual property.

And, of course, I need also to refer to the Secretary-General’s High-Level Advisory Body on AI and the perspectives we have for the coming weeks on the discussions ongoing in New York, to which I have already contributed as your president, as president of the Human Rights Council, by sharing, as I said, the outcome of the co-facilitators’ work and some of my own ideas with the President of the General Assembly and the co-facilitators on this issue in New York.

We must ensure that these frameworks address the risks and opportunities presented by artificial intelligence from a human rights perspective, laying the groundwork for the global goals that we need to advance. So this is why we are here this morning, Excellencies, distinguished participants. I am more than honored and privileged to have such an important high-level panel with us this morning, to whom I will now give the floor so that they can share with you their thoughts, their accomplishments, their perspectives, and their hopes.

What is important for the Human Rights Council is to have such partnership and such cooperation, more than ever, and to be open, because we do have our own mandate, but we need partners to achieve it and to answer to all the peoples of the world. So, I now have the honour to invite Doreen Bogdan-Martin of the ITU.

Doreen Bogdan-Martin, Secretary-General of ITU

Excellencies, ladies and gentlemen, good morning. Let me start by thanking the President. Thank you so much, Mr. President, for the invitation, and also thank you for your outstanding leadership at this Council. You have prioritized the pressing issue of artificial intelligence and the digital divide to ensure that technological advancement doesn’t come at the cost of human rights and human dignity. The national and regional initiatives and legal frameworks to establish protections around the development, the deployment, and the use of AI that we have witnessed over the past several months are, I think, a positive step in that direction.

Progress, as we all know, is also happening at the United Nations, from the General Assembly resolution on AI adopted last March and the subsequent one adopted in July, to the upcoming UN Summit of the Future and, in particular, its Global Digital Compact. As we sit here this morning, member states in New York are discussing Rev. 5 of the Global Digital Compact. ITU and other UN agencies are closely coordinating to support the follow-up of that compact.

A considerable share of the substantive work to implement the GDC will, of course, happen right here in Geneva. Ladies and gentlemen, we’re just 19 days away from the summit. We’re also six short years, I would say, from the deadline to achieve the Sustainable Development Goals. This is a turning point. It’s a turning point for the SDGs, and it’s something that the President of the General Assembly has called a fragile and special moment.

As head of the UN agency for digital technologies, I also think it’s a moment of reckoning. Cyber insecurity is amongst the top 10 most severe global risks. Photos of children are being scraped off the web to create powerful AI tools without the knowledge or consent of those children or their families. Deepfakes and misinformation are blurring reality and eroding trust in our elections and in our institutions. AI systems are showing gender bias and increasingly impacting our environment. Meanwhile, not one, not one, of the top 100 high-performance computing centers in the world that are capable of training large AI models is hosted in a developing country.

These are just some of the most complex and pressing challenges that we are facing today. ITU is tackling many of them head-on. Front and center are the 2.6 billion people, the 2.6 billion people that are still offline around the world, even as the pace of AI development continues unabated. Even among those connected, far too many people lack the means, the high-speed connectivity, the digital skills, the trust to truly benefit from new and emerging technologies. 

Closing the digital divide is central to the non-paper on AI that was presented by the co-facilitators and permanent representatives of the Gambia, of Luxembourg, and the Republic of Korea to, of course, the President. I do want to recognize the Council’s work in this critical area, and I want to reaffirm ITU’s support. 

Ladies and gentlemen, the actions that we take now will have a lasting impact, will have a lasting impact for generations to come. We have a historic, but also a narrow, opportunity and window before us. To succeed, I believe that we need to focus our energy on three fronts. Harnessing digital and emerging technologies to rescue the SDGs, balancing innovation with safeguards that respect and protect human rights, and ensuring international cooperation by bringing all stakeholders, including those from the developing world, to the table. Let me break those three fronts down quickly. 

So first, rescuing the SDGs. Only 17% of the SDG targets are on track to be achieved. Do you accept 17%? Well, I refuse to accept 17%, especially when we have shown that game-changing digital solutions like AI can actually accelerate progress on 70% of the SDG targets. Our latest UN Activities Report on AI highlights more than 400 projects covering all 17 SDGs: using AI to connect schools in our work with UNICEF, to improve early warning systems in our work with WMO, UNDRR, and IFRC, and to advance healthcare with my friend Daren at WIPO and with WHO, and, of course, much, much more.

At our AI for Good Global Summit, and I see some of you in this room that were with us some weeks ago, we did see very concretely dozens of applications that are changing people’s lives every day. 

If I had to choose one from what we saw in July, I think I would choose the device that was connected to Luis’s brain, for those of you who might remember that moment. Luis is 37 years old. He has ALS, a disease that makes communicating impossible. When he told us from his home in Lisbon, fighting back tears, that he was optimistic, that he believed he could start to have a much more normal life and be able to communicate using tools like AI, his intervention drew both tears and applause at the CICG here in Geneva.

So my second front is balancing innovation with safeguards that respect and protect human rights. ITU is a strong proponent of a human-centric and rights-based approach to emerging technologies, one that reflects core UN principles of peace, justice, respect, human rights, tolerance, and solidarity. And we will continue to build capacity to have meaningful multi-stakeholder engagement around all of these topics. OHCHR’s report on human rights and technical standards also sends a very powerful message: if we’re going to harness the full potential of new and emerging technologies, we need to root them in a rights-based approach and engage all stakeholders. It builds on the resolution of the Human Rights Council that calls for closer cooperation and collaboration between the UN Human Rights Office and standards development organizations like ITU. The need for collaboration comes at a time when we’re all witnessing a strong call for more harmonized AI standards.

When the UNSG visited Geneva some weeks ago, he also visited us at the ITU, and he said very clearly, harmonizing AI standards will be crucial for both the regulators and the industry. He warned that fragmentation would be especially harmful to the developing world. The ministers, regulators, and leaders from the UN, from industry and academia, joined us for our first governance day, our AI Governance Day that took place during our summit, and they all expressed the need for greater interoperability. 

And I think that’s why standards feature so prominently in the recent AI Governance White Paper, a UN-wide white paper that was led by ITU and UNESCO. It’s also why we’re working very closely with the High Commissioner, with Volker Türk’s office, and with Peggy in particular, as well as our long-time partners IEC and ISO, so that we can develop standards that are based on a rights-based approach and that address safety, security, and, of course, ethical practices.

And then the third front, and my final front here that I wanted to raise, is the importance of bringing all stakeholders to the table. In December of last year in this building, during the 75th anniversary celebrations of the Universal Declaration of Human Rights, I pledged, on behalf of ITU, to advance universal meaningful connectivity through a multi-stakeholder approach, an approach that’s grounded in respect for human rights. 

Because we need to bridge the gap between policymakers, the technical community, and the human rights community. And because in today’s increasingly polarized world, open, inclusive, and secure access to means of communication is absolutely essential. And it’s essential to ensuring that all voices are heard, and that those voices are respected, and that they are empowered. Today, I renew this pledge, and I sincerely hope that you will join us in making dignity and equality for all the cornerstone of a truly inclusive and empowering global digital space. 

Ladies and gentlemen, when I was a young graduate just a few years ago in Washington, D.C., I would often go and think at the Thomas Jefferson Memorial. If you’ve ever been there, it’s quite a beautiful place. It’s quiet, it’s near the Potomac, and it’s just a few blocks from the hustle and bustle of Congress. There’s a quote on one of the rotunda walls that I think is quite relevant to today’s discussion: as new discoveries are made, new truths discovered, and manners and opinions change, laws and institutions must advance also to keep pace with the times. And that rings more true today than, I think, ever before. So as we look to the Summit of the Future and its Global Digital Compact, we must keep these things in mind.

But also as we look to the ITU WTSA, the World Telecommunication Standardization Assembly that will convene in October, and as we look forward to next year’s 20-year review of the World Summit on the Information Society, we must keep pace with the times. This is an opportunity not just to follow where technology leads us, but to actually lay out our vision and actively shape the path towards a digital future that’s safe, inclusive, equitable, and sustainable for all.

So ladies and gentlemen, let’s rescue the SDGs because I firmly believe that we can do it. Let’s balance innovation and regulation. Let’s give everyone a seat and a voice at the table. And above all, let’s ensure that human rights are the bedrock of our collective digital future. Thank you very much.

President of the UN Human Rights Council

Thank you so very much. I will not try to synthesize all that you have presented, but just say one word: we should not accept the disappearance of the SDGs from our screens. It’s a fundamental cause.

We have had the chance to have you here in Geneva, Mrs. Secretary-General, at the head of this important organization these past few years, and we are very, very happy. Thank you so much.

We have the chance to have with us Daren Tang, Director General of WIPO. I know you lead an amazing organization. You are making your house an open house for all those who are here in Geneva, and for other stakeholders, which is vitally important. This is why we are so pleased with your presence here today. Please join me.

Daren Tang, Director General of WIPO

Good morning, Excellencies, Ambassadors, dear friends, dear colleagues. Thank you so much, Ambassador Zniber, for prioritizing technology during your presidency of the Human Rights Council. Some of you may know that he was my ex-boss: he was the chair of the WIPO General Assembly for two years. So, I’m very grateful for your support during the opening years of my term as Director General.

And let me start by saying that it’s a pleasure to address all of you here. It’s not often that I come into this very, very special chamber. But as an ex-human rights lawyer myself, I’ve seen this ceiling many times in pictures and on screens. It’s nice to be here in person to share with you my thoughts on AI and human rights.

Let me start by talking about the Universal Declaration of Human Rights, which was adopted almost 80 years ago, in 1948. And let’s maybe put things in perspective. The technology that we are dealing with is just the latest in a series of waves of technological change since the 1940s: television, VCRs, CDs, the internet, smartphones. So in a way, I think we can find comfort in the sense that what we’re dealing with is not entirely new, not entirely novel, though its manifestations can be new.

But like previous technology waves, today’s Gen AI and digital technologies bring both risks and opportunities. And I’m very happy to hear that both the President and Doreen Bogdan-Martin have mentioned that we have to engage in this balancing act between risk and opportunity. And I believe very strongly that the UDHR and the human rights framework, which centers around human dignity, addresses not just civil and political rights but also economic, social, and cultural rights, and incorporates a sense of rights as well as responsibilities.

This is very, very important for taking a balanced approach to any technology, including Gen AI. But let me share with you some data, since I’m coming from WIPO, and you’ll see the global trends of innovation and what’s going on in this area. First, I want to share with you that the explosion of Gen AI in the last two years is part of a larger trend, the explosion of digital technologies, which I believe has accelerated since the pandemic.

In 2022, the world filed 3.5 million patents, and one-third were connected to digital technologies: not just AI, but also computing, cybersecurity, 5G, 6G, and so on and so forth. So really, the largest share of patents filed in the world is connected to digital technologies.

The second big trend we’re seeing at WIPO is that these technologies are merging with industrial technologies. In other words, digital innovation is combining with industrial innovation. I can think of no better example than a car, which is no longer a machine or an engine with an axle and four wheels, but increasingly a data center, an entertainment center, or a sophisticated laptop on four wheels. And this merger of digital and industrial technologies is becoming pervasive across all forms of technology.

Look at the phone, or look at the machines around you when you go to the hospital: they’re as much driven by the software and algorithms within as by the hardware and the actual machine itself. What’s interesting, though, is that WIPO recently did a patent landscape report on generative AI.

And we found that, unsurprisingly, many of the Gen AI patents are held by big companies. Doreen mentioned that the top 100 computing centers are all in developed countries. In fact, even within these developed countries, the patents are concentrated within a handful of the very biggest companies. It takes a lot of money, a lot of effort, a lot of energy, and a lot of computing power to run these systems.

But what’s interesting, and what gives us cause for hope, is that, while Gen AI itself is held by a very small group of big companies, the application of Gen AI to different areas of life, to healthcare, for example, is much more diverse. And we see many small and medium enterprises, many entrepreneurs, from both developed and developing countries, who are using Gen AI to change lives and solve problems in their local and digital markets.

So when Doreen, who has spoken so eloquently, talks about the digital divide, we need to be aware that it’s not just a divide of access to infrastructure, but also a divide of opportunities. So a lot of the work that we need to do here in the UN system, amongst us as member states, is about how we close the divide, not just in terms of infrastructure gaps, but also in terms of giving people opportunities. And what is interesting is that the data shows there is a lot of hunger in developing countries to use this technology.

So let’s avoid looking at the global south as a community of hapless individuals who are waiting to receive help, and instead look at the global south as a community of very energised individuals who want to use AI and digital technologies to build a better life for themselves. And here I’ve got two very interesting sets of data. One is from Stanford.

Stanford did a survey of attitudes towards Gen AI, released, I think, last year, and it showed that attitudes were most positive in many emerging economies: Malaysia, Mexico, and Turkey had more positive attitudes towards Gen AI and its ability to build a better life than developed countries did.

And last year, WIPO surveyed 25,000 people in 50 countries. We asked them what they thought about IP and whether IP would be good for their societies and economies. And what was surprising was that attitudes towards IP were more positive in Africa and Asia than in Europe and North America.

And I believe this echoes my own experiences when I travel around the world and meet many entrepreneurs on the ground: in many developing countries, they are looking for ways to use AI and technology to improve their lives, and the role of the UN family is to give them the training, the tools, and the confidence that the system provides.

On the IP system itself, let me share with you my views on Gen AI. I believe that the IP system feeds from the same waters as the human rights system, because both are centered around human dignity, human creativity, and human inventiveness. Let me, for example, cite Article 27 of the UDHR, which states that everyone has the right to the protection of the moral and material interests resulting from any scientific, literary, or artistic production of which they are the author.

So in the human rights system, we have the recognition that IP and the protection of the creations of the human mind are important. And of course, that is why the IP system distinguishes human creators from their tools.

For example, when you take a photograph using a camera, we recognise that you, as the human photographer, own the copyright. When AI is used to create new protein structures, the lab technician or the inventor owns the patent. But there has not been a single case in which the IP system has recognised a machine, or a machine’s creation, as having IP rights. And I think that has to remain so, because we need to make sure that the human being remains at the centre of innovation and creativity.

And here, it is my personal belief, so it may not be an official view, but it is my personal belief based on my understanding of Gen AI, that it is a highly efficient and effective replicator, but it is not genuinely creative or innovative. It can learn from Monet to produce millions of paintings in the style of Impressionism, but it can never make the jump from Monet to Picasso, from Impressionism to Cubism.

So I think what we need to do is to make sure that AI, like other tools that come before it, is used as a tool to empower, enable, and enhance human creativity and innovation, but not to undermine that. And I think we do this in three ways. 

First, much as Doreen is doing in a much larger way for digital technologies in general, where IP is concerned we want to be a forum where people from all over the world can come and talk about these issues, not just from developed countries, but from all over the world. We’re very pleased that it’s been five years since WIPO established the Conversation on IP and Frontier Technologies, and our nine conversations, many of which have centered around AI, have reached nearly 9,000 people from over 170 countries in a very multi-stakeholder approach. Many of you are part of that conversation. Ambassador Kah is the chair of the IP and Frontier Technologies Conversation, so many of you, and your affiliates or stakeholders in your countries, have the chance to come together every few months and talk about how AI affects the innovation system, and we’ll continue doing this.

The next thing that we do at WIPO is provide information and tools, because it is important that, as we navigate this new technology, decision makers, policymakers, and others have the right insights to react, to regulate, or to do what they need to do to harness the power of AI. So, for example, WIPO has released an AI policy toolkit targeted at IP offices and regulators, as well as a toolkit for small and medium enterprises to navigate the world of AI. These are the things which are important.

Of course, WIPO maintains a global repository of innovation information, and we release a lot of reports on what is happening in the world of Gen AI. But the part that I want to emphasize is that we need to translate all of this into on-the-ground impact.

I spoke earlier in this conversation about how there is a lot of hunger in developing countries to use and harness the power of this technology to make a better living. So one of the things that we are doing, and I am very pleased that Doreen mentioned this, is partnering with other UN agencies, in particular the ITU and the World Health Organization, to mentor and train AI entrepreneurs from developing countries to apply their entrepreneurial energy to solving healthcare problems in their countries. This is a very practical example of how different UN agencies are coming together to help people on the ground in a very, very specific way. We are also starting to provide IP management clinics for small and medium enterprises in the Middle East region. Why?

We find that many of these entrepreneurs need help harnessing the power of the IP system to bring their ideas to the market and to use their entrepreneurial energy to change lives. But they need training, and they need help. So what WIPO is doing is providing mentoring support for them, sometimes over a 9-12 month period, to help them understand the business journey and bring their ideas to the market. And of course, we are also working on very interesting projects. I see that the Ambassador of Mexico is here. One of the projects we are doing is the IP for Disabilities project in Mexico: we are working with the Mexican Polytechnic as well as the Mexican authorities to harness the power of innovation, including digital technologies and AI, to support those with disabilities in Mexico. And lastly, of course, green innovation.

Here, we believe it is important to use the power of AI to address the common global challenges that we face. So WIPO has a platform called WIPO GREEN, the UN’s largest technology matching platform for climate change technologies, where we offer 130,000 technologies from over 140 countries and match these technologies to those that need them.

So in these different ways, from being a place where people come and talk about AI issues, to being a place that provides tools, information, and reports, and, most importantly, through on-the-ground country projects, we can work with you and with others in the UN system to really harness the power of digital technologies to help build a better world.

So let me just conclude by saying that in just two weeks’ time, we will have the UN General Assembly, the Summit of the Future, as well as the Global Digital Compact. And I think there is no better frame than the UDHR as the central frame for us to balance the risks and the opportunities that AI presents.

So WIPO pledges its support to work with all of you to find very concrete ways in which we can translate the Summit of the Future and the aspirations expressed in these documents into actual projects and programs on the ground that can really mitigate the risks and provide the opportunities that people around the world want, so that they can use AI and other digital technologies to make the world a better place. Thank you very much. 

President of the UN Human Rights Council

Thank you very much. We spoke about Picasso, who has been known for opening up many opportunities in Europe. Thank you very much. As you both said, there is a need for multistakeholder partnerships and associations. And this is why my colleagues have directed me to invite you, Mr. Werner Vogels, as a representative of a major company dealing with these issues at the world level. And I am pretty sure, considering the level of your responsibilities, that you will share with us this perspective.

Werner Vogels, Chief Technology Officer and Vice-President of Amazon (AWS).

Thank you, Mr. President, for inviting me. I’m making the assumption that most of you do not have a computer science degree, and that, for most of you, the word AI was probably something that you read in a book by Asimov or saw in a Schwarzenegger movie. 

Something changed about two years ago. But before we go into that, AI has been around for much longer than that. You can go back almost 3,000 years, to when Plato and Aristotle were debating whether we could actually make machines think. 

And Plato, in the Republic, even described a household with robots that would take care of the house. Nothing happened, of course, for many thousands of years until we got computers, which were also symbolic reasoning machines. 

And then the first steps were taken, for example, by Turing in his famous paper asking, can machines think? Well, it turns out machines don’t think. But when artificial intelligence came to life, at the famous workshop in Dartmouth in 1956, the idea was still: we need to emulate the brain. And it turned out that that went nowhere. What did go somewhere were the approaches that came out of the world of robotics. 

They started thinking about: can we build human capabilities from the bottom up? Language, vision, sound, touch. And most of those have actually resulted in extremely valuable, well-working technologies: natural language processing, translation. If you drive a modern car today, your car is full of AI. 

And if you’ve been an Amazon customer for the past 20 years, you’ve been using AI for the past 20 years: systems like recommendations and fraud protection. 

A typical Amazon warehouse has 30,000 robots running around. They’re all autonomous. It’s all AI. Now, one of those founders in 1956 at Dartmouth was McCarthy. And he made a famous statement later on: as soon as it works, we don’t call it AI anymore. Much of this technology, which has become really mature over the past decades, is being used by younger businesses and technology companies around the world to do good. 

Next to, you know, having a good e-commerce business or building new self-driving cars. But take, for example, an organization like Thorn. Thorn is an organization that wants to battle child sex trafficking. They have a massive database of hundreds of thousands of children and women who have disappeared, and they match those images against about 100,000 advertisements a day in the U.S. for prostitution. 18,000 women have been rescued. 6,000 children have been taken out of sex trafficking. All of that using AI. But it’s much broader than that. And that’s a really important topic, I think. 

But also think about all the satellite imagery that we’ve had of Africa over the years. NASA, ESA, and JAXA have brought their satellite imagery together in what is called Digital Earth Africa. And using that imagery and AI technologies, we’re looking at illegal mining and mangrove deterioration. All of those are technologies that work really well today. 

And think about healthcare. Sweden has a national program where women over 50 can get a mammogram every two years to battle breast cancer and detect its early forms. A radiologist looks at thousands of these images a day, and at the end of the day, their eyes aren’t that good anymore. 

AI, this old-fashioned AI, detects 30% more breast cancers than a single radiologist, and is as good as two radiologists. You can imagine the impact of these kinds of technologies on healthcare around the world. These are technologies that are available for everyone to use. 

And especially younger businesses these days are looking at solving really hard human problems in their societies, whether that is in Africa, Southeast Asia, India, or South America. And they’re using these technologies to build sustainable businesses that solve hard problems, whether around food or, for example, a company called Hara that provides identity to smallholder farmers who didn’t have that before. 

And with this identity, by measuring that plot of land and providing that data, processed with some AI, to both governments and banks, these smallholder farmers no longer need to go to a loan shark and get charged 50%. In fact, banks are eager to give these smallholder farmers a loan, because they get close to a 100% repayment rate. 

So these are younger businesses using AI technologies to do good. However, because it works, we don’t call it AI anymore. So what changed three years ago? As we see quite often with AI, steps in the underlying technology have improved it significantly. 

One of the things that happened a few years ago was a technology called Transformers, which allowed us to build very large models based on text, and quite successfully. Now, instead of having things like APIs, programming interfaces, you have text as the interface to the system. 

And it has made quite a significant impact. There isn’t a company today that doesn’t have some form of chatbot based on these large language models. But also, many of my customers actually tell me: wait, we’ve now built these chatbots. What do we do now? Where are these efficiencies that we were promised? 

Now, why are we talking here about this new form of AI? It’s because, if you look at traditional technology adoption cycles, there is a level of education that happens before new technologies get introduced. In this particular case, however, the technology was brought into the marketplace, into consumers’ hands, without that education, and without a real understanding of what the capabilities of the technology are, what the risks are, and where we can best apply it. 

That education didn’t happen. Immediately, organizations like the Human Rights Council started asking: what is this technology? Where do we have to make sure that there is equal access to this technology for everyone? And we see many countries having knee-jerk reactions and protectionism: no, we are the only ones who can have access to this technology. 

From Amazon’s point of view, the emphasis is, on one hand, education. We need to make sure that as many people as possible are educated about the capabilities of these technologies, so that they can make informed decisions. What are the capabilities of this technology? What are the things it can’t do? Where can we see major improvements happening in our lives? 

Now, fairly important in all of this, of course, is that this new technology is centered around language. These are large language models. The dominant models at this particular moment are trained on Western data. They’re trained in English. And even though you can access these models through a different language, in essence it gets translated under the covers into English and then executed like that. You get an answer in a different language, but it’s still English underneath. 

And it’s not just about the language; it also incorporates culture. If you ask a Western LLM, let’s say, to give you a review of a book by Isabel Allende, you get a very different answer than from a South American LLM. Now, for us at AWS and Amazon, it’s crucial to democratize access to any and all of these LLMs. 

For example, one of the largest open-source models, Falcon, created in the UAE, is available through AWS. Or SEA-LION, created in Singapore, also available through AWS. We’ve been working, for example, in Japan and Korea on building natural language systems that incorporate not just the language, but also the culture and the history that goes with it. 

And we need to make sure that we democratize access to these technologies across the board. Now, it’s very important that we enable continuous innovation. When we think about regulatory requirements, we have to think about which areas carry the most risk. What have we regulated in the past? Think about financial services; think about healthcare. 

But we need to make sure that those young businesses that want to use this technology to do good, to really improve lives and address the big issues that they see in their communities, can continue to thrive. 

And so for us at AWS and Amazon, it’s crucial to provide democratized access to this technology. As such, we don’t just offer one LLM; we invite everyone, every other organization, every other company, every other country, to make their models available through our platform, so that customers, companies, and organizations can experiment with them, find out where to use them, how to use them best, and where they have the most impact. 

Because what we’re talking about here with AI is the potential, not necessarily exactly what we’re doing today; old-fashioned AI already takes care of that. One very important part in all of this is sustainability. We need to make sure that these technologies are available in the most sustainable way possible, because if every country around the world starts building its own data centers, we’ll start depleting the natural resources of this world much more quickly than is necessary. Let’s make sure that everyone has access to these technologies in the most sustainable way possible. 

Amazon is the largest corporate purchaser of renewable energy in the world. There are 450 projects going on around the world to make sure that we can ingest as much green energy as possible without depleting natural resources. 

And so we cannot see sustainability and these technologies as separate from each other. We need to make sure that customers and companies can experiment. Is a 700-billion-parameter model really that much more applicable than a 7-billion-parameter model? Is an open-source model like Falcon not more appropriate than one of these commercial models out there? It’s still very early days in the use of large language models beyond the chatbots and beyond the immediate efficiencies that we’re seeing. Let’s make sure that we have platform availability that is as broad as possible, in terms of sustainability but also in terms of cultural awareness, and make sure that everyone is actually served by this technology and we don’t turn the world into a machine with English at its center. Thank you. 

President of the UN Human Rights Council

Thank you very much for your contribution, a very optimistic one, though you also underlined some of the problems and difficulties we are all facing. While preparing this event, I discovered many elements. I want to refer to just one, because you spoke about healthcare: in the world in which we live today, 3 billion people have never experienced medical intervention. This is one of the subjects that we have chosen to consider very seriously, and this is why, by the way, we are organizing this meeting.

Thank you very much also for drawing our attention to the necessity of democratizing access to the technology. Now, Excellencies, dear participants, we will turn to the ethical dimension, and for that purpose I have invited Her Excellency the Director-General of UNESCO. She will be with us through a video message. So may I please request that the video message be played. 

Audrey Azoulay, Director-General of UNESCO

Dear Omar, Excellencies, ladies and gentlemen. Our world is changing thanks to a major technological revolution driven by artificial intelligence. It is a turning point that is also anthropological: an opportunity to develop our societies, but there are also existential risks that we must face, very tangible risks. Today, half of the world’s population is connected to the internet. This revolutionizes access to information, but in an unregulated universe. And so our societies are more and more confronted with disinformation and with the dissemination of conspiracy theories. It is a threat that weighs ever heavier on our social contracts. 

Thus, in a UNESCO-Ipsos survey carried out in 16 countries holding elections in 2024, 87% of the people interviewed estimated that disinformation would have a major impact on the upcoming elections in their countries. 

Generative AI can also reproduce and amplify prejudices, multiplying their effects in all fields of social life. A report published in March last year highlighted the sexist and racist biases exhibited by platforms employing artificial intelligence. It can also pose a new threat to the transmission of facts, and of history; this is the subject of a new report we have published on AI and the Holocaust. 

Excellencies, ladies and gentlemen, it is for all these reasons that, for several years already, as part of our mandate on the future of science, UNESCO has engaged in a pioneering dialogue on the ethical risks of the development of artificial intelligence. 

And I remember when we launched this reflection in Rabat in 2018. And in 2021, after long negotiations based on the work of experts from all over the world, our Member States developed the first universal normative instrument on the ethics of artificial intelligence, through a recommendation adopted at our General Conference in November 2021. A text that anticipates crucial questions such as transparency of information, responsibility, energy use, inequalities, and, in particular, the issue of gender. 

A text that lays the foundations of an information and media education based on critical thinking, as at the time of COVID-19, so that the verification of facts becomes an integral part of the basic skills of civil society. As we strive to guide this new technology, this recommendation is not only an ethical compass; it is also a tangible tool for the elaboration of public policies in states. 

We are currently accompanying fifty countries, from Chile to Brazil, from Morocco to Senegal, in incorporating the great principles of this recommendation into their national strategies. And also on the African continent, because, as you know, Africa is one of the major transversal priorities of UNESCO: we have worked hand in hand with the African Union to establish a continental strategy for artificial intelligence. 

We have also worked at the same time on the first global principles for the governance of digital platforms, as part of an unprecedented global consultation. And we have recently gathered regulators here to create a new network. 

Mr. President, I would like to thank you, because our cooperation allows us to go further, whether on the ethics of artificial intelligence, the right to education, or, more recently, the fight against violations of human rights. 

You know that you can count on UNESCO to work with the Human Rights Council to create a new plan on fundamental rights, which the world needs so much. And I would like to conclude with the words of an American researcher, Fei-Fei Li, a pioneer in artificial intelligence, who told us that, despite its name, there is nothing artificial about this technology: it is made by humans, so it must be guided by human concerns. 

President of the UN Human Rights Council

Thank you, Excellency. Excellencies, dear participants, we now turn to Mr. Stéphane Decoutère, the Secretary-General of GESDA. GESDA is a very important institution that seeks to combine science and diplomacy. And in our discussions here among diplomats, we sometimes say that the diplomat of the future will very quickly, in a way, also have to be a scientist. So thank you very much for the work you are doing, and please take the floor. 

Mr. Stéphane Decoutère, Secretary General of the Geneva Science and Diplomacy Anticipator (GESDA).

Thank you very much, Mr. President, for your kind introductory words and for the invitation, of course. Excellencies, ladies and gentlemen, I would like to start my remarks with three observations that will guide my few words. 

Number one, for 75 years we have been observing a great acceleration of science and technology. Number two, the advent of artificial intelligence accelerates this acceleration. It bears the potential to expand the boundaries of what is possible in science and technology. 

And why is this? Because AI might foster the convergence of various fields of research, for instance the so-called info-bio-nano-cogno convergence, one of the current game-changers in knowledge production. This has consequences. 

The increasing convergence in science and technology requires addressing, at the same time and at the same pace, artificial intelligence and other emerging fields, as well as the relationships between them; with regard to neurotechnology, for instance, which AI influences and accelerates. 

Number three, and I am very pleased to say this, your Council has already undertaken several steps in addressing the human rights implications of new and emerging technologies, including neurotechnology, as well as digitalization. For instance, you stated in your resolution of 14 July 2023, to take just one of them, that these technologies, especially digital technologies in this case, have the potential to facilitate efforts to accelerate human progress and to ensure, and this is the most important thing, that no one is left behind. 

And in that same resolution, you stress the need for all stakeholders to be cognizant of the impact, opportunities, and challenges of rapid technological change and the promotion of human rights. 

So the question is, how can governments like yours, and stakeholders like us, help implement the vision of your Council? Let me give an insight into how we are doing this, or trying to do this, at the Geneva Science and Diplomacy Anticipator Foundation. 

I am representing it here today as a member and Secretary-General of its Board of Directors. In response to the acceleration and convergence of science and technology, we firmly believe that anticipation becomes indispensable in order to realize the pillars of the United Nations’ goals, be it peace and security, be it development and prosperity, be it the human right to science, as stated in Article 27, paragraphs 1 and 2, of the Universal Declaration of Human Rights. 

The rationale behind our work, and the reason why the Swiss government and the Geneva authorities created us five years ago, can be summarized as follows: we are all living in a world accelerated by science and technology, and there is no plausible indication that the pace of change is going to slow down. But far from everybody benefits quickly from these advances. And this is not sustainable, and it is contrary to the Universal Declaration of Human Rights. 

Therefore, in our understanding, the path to a globally sustainable future for people, society, and our planet is to democratize (not the first time this word has been used this morning) as much as we can both the early understanding and the early uses of emerging science breakthroughs, well beyond the developers of those breakthroughs. 

If we succeed in doing this at scale, because in the end everything is a question of scaling up, all of society will have the time it needs to prepare for these changes with the best possible transitions. So preparedness is key, and preparedness in this sense means, first, increasing science literacy by detecting and monitoring the major emerging scientific and technological advances that will change the way we live, think, and behave.

Second, developing, supporting, and funding concrete initiatives based on these emerging trends that leverage technology for development, security, and the right to science, meaning the right of people to benefit from the advances of science and technology. Third, in doing so, nurturing policy with up-to-date information and concrete experience. 

So how do we implement this at GESDA? How do we operationalize this framework? The first step we took, three years ago, to test the anticipation mechanism was to work on a yearly Science Breakthrough Radar presenting an overview of emerging trends in five different areas of science and technology development. Five, and not one or two, because there are silos in science too, and we think it is important to have an overview of what is happening across the emerging fields of science and technology. 

The fourth edition of our Radar, to be released in one month, will give insight into 40 emerging scientific topics and 348 potential breakthroughs at 5, 10, and 25 years, as seen by 2,100 scientists from 87 countries. 

The highlights this year in the natural sciences range from eco-augmentation to orbital environments, from unconventional computing to neuro-augmentation, lifespan extension, and synthetic biology, while the behavioral science of groups and the future of archaeology are this year’s highlights in the human sciences. 

But we will now also add something to this work: an intelligence tool, powered and enhanced by artificial intelligence, to support decision-makers in understanding not only where the science is heading but also what regulations do or do not exist, whether R&D is already hitting the market, and where public opinion lies. As was already stated several times this morning, and I share this view, we did some of this work in our Radar and came to exactly the same conclusions that were presented. 

We will present this expanded tool at our upcoming October Summit in Geneva, as part of our new Knowledge Augmentation Initiative to democratize science literacy for future leaders, future diplomats, current authorities, and citizens from all over the world. Our first topics are neurotechnology and quantum computing. 

Notwithstanding, effective multilateralism means action and results. It is not enough to have access to future-oriented knowledge if we do not act upon it. Therefore, last year we launched another initiative on the potential uses of quantum computing, which we call the Quantum for All initiative. It includes the Open Quantum Institute, OQI, to which President Zniber has been contributing since its very first steps in 2022. 

So, President, thank you for helping us lift up this initiative, for your continuous support, and for your invitation today. The Institute is now embedded at CERN with the financial support of the Swiss global bank UBS. Its aim is to democratize access to and develop capacity building on quantum computing for good, harnessing human talent from across all geographies, and to develop use cases that benefit humanity and accelerate SDG implementation. May it help save the SDGs, as Doreen mentioned this morning. The OQI works as a multi-stakeholder platform, encompassing scientists, companies, representatives of civil society, foundations, and of course politicians and diplomats. It counts among its participants around 20 of the permanent missions based here in Geneva and several governments. And of course the Institute is open for all of you to join at any time. 

To boost the OQI’s development, we also launched, in March 2024, a worldwide contest with the XPRIZE Foundation and others for teams to accelerate the development of use cases from all over the world in the fields most promising for quantum computing according to quantum experts: water sanitation, food, or materials for carbon capture. 

So far, 240 teams have applied to participate in this contest. The winner will be announced in January 2027. Finally, and I conclude with this, the Quantum for All initiative also addresses questions of governance. This fall we will release the second edition of our intelligence report on quantum development and quantum diplomacy for the SDGs. 

It will be based this year on collective work with 24 scholars, 10 quantum companies, including the chief scientist of Microsoft Quantum, the permanent representatives of 31 countries, and 17 international organizations from the Geneva ecosystem and beyond, several of which are with us today: ITU, UNIDO, OECD, UNESCO, WIPO, ISO, and UNEP. 

We hope this yearly report will nurture your own discussions on the governance of emerging technologies, and in this spirit, GESDA Foundation stands ready to contribute to the work of your Council as you may wish. Thank you for your attention, and I look forward to our discussion. 

President of the UN Human Rights Council

Thank you, Mr. Secretary-General, for this presentation, which is, I am sure, highly appreciated. I can testify to your openness towards the diplomatic community here in Geneva, something we have mentioned often, as has the representative of the host country; indeed, the role of the host country is of the utmost importance.

You gave us the opportunity to sit down with top-level scientists from all over the world and educated us on these topics. This is probably one of the reasons that drove me to propose this debate today in the Human Rights Council. Thank you so much for that. 

Let us shift to the social dimension, which will be introduced by Ms. Celeste Drake, ILO Deputy Director-General.

Ms. Celeste Drake, ILO Deputy Director-General.

Thank you very much. Mr. President, Excellencies, fellow panelists, esteemed participants and guests, good morning. It is a pleasure to be with you here today at this high-level informal presidential discussion on a topic that is captivating our attention and generating both hopes and fears, and will continue to do so for the foreseeable future. 

Beyond scrolling on social media, perhaps vacuuming our apartments with robot vacuums, or shopping on Amazon, for many people it is really in the workplace that we will experience firsthand the potential uses and possible effects of artificial intelligence. 

For many, this experience will be exciting: a new challenge, an opportunity to do our jobs better, to be more productive, more efficient, more effective, and more impactful. For others, it will come with anxiety: will I still have a job in the future? If I do, will I have the skills I need to do the job well? 

And after all, these fears are understandable because for most of us, no matter where we live in the world, our job is our only source of income and wealth. So we want to make sure that as we think about the human rights impacts of AI, we are addressing not only the real risks, but the perceived risks so that there is a level of trust in adopting this rapidly developing technology. 

And this is why the ILO welcomes the call for the prioritization of human rights in discussions and responses to the opportunities and the risks posed by the development and adoption of AI and other emerging digital technologies. It seems that every day there is a new headline on the possible job losses from AI. The ILO and I recognize that the story is much more complex than just jobs disappearing, as critical as that issue is.

AI’s impact on the world of work isn’t just about the quantity of employment, but about how workers will be recruited to their jobs, how they will carry out their work, and under what conditions and circumstances; whether their fundamental rights, which are indeed human rights, will be respected; and how AI can be leveraged to tackle poverty and inequality and to reduce, and even eliminate, the digital divide. 

With respect to possible job losses, ILO research finds that 2.3% of total global employment, or about 75 million jobs, are at risk of disappearing if generative AI is fully implemented. However, at least six times that many jobs have the potential to be augmented or transformed. But these projections fundamentally depend upon access to technology. 

Our latest report on the effects of generative AI in Latin America, done jointly with the World Bank, suggests that the significant digital divide in the region could prevent workers from fully realizing the benefits of GenAI. Beyond these effects, we must consider potential regulatory gaps and responses that address different dimensions of decent work. We must ensure that the development and deployment of AI doesn’t infringe upon labor rights. And I’ll focus on three major themes. 

First is this critical issue of international labor rights and standards. AI systems are being deployed at an unprecedented rate, even in comparison to other recent technologies. And you’ve heard about this speed of deployment very effectively from prior speakers. But we cannot let the speed of change deter us from protecting and supporting human rights. 

As highlighted in the UN System White Paper on AI Governance, released earlier this year, the ILO Declaration on Fundamental Principles and Rights at Work applies to all working environments, including those impacted by AI. In more concrete terms, we need to ensure that all workers are afforded protections and benefits in accordance with national labor laws and international labor standards, including the millions of workers involved in the development of AI systems who painstakingly tag, test, and moderate content, most of whom are located in the Global South. 

Given the enormous changes ahead, we also need to consider the adaptation and development of new regulations and governance structures, grounded in social dialogue with representatives of workers’ and employers’ organizations. We also need to think about how labor administration can be improved. How can governments, for example, use AI and related technologies to improve their labor and employment administration? Can they deploy AI to more effectively reach workers with the social protections that they are entitled to: payments to expectant mothers, unemployment insurance, and pensions, for example?

On this note, the ILO is working with the government of Albania on a very promising development cooperation program that is testing the use of artificial intelligence by the labor inspectorate, using large data models to determine which workplaces should be inspected. And the results are promising. 

The inspections are about 30% more effective than random inspections. In this context, the ILO’s 2025-2026 standard setting discussion on decent work in the platform economy represents an important step to consider how international labor standards can address new opportunities and challenges posed by technological advancements. 

Second, we need to better understand the impact of algorithmic management on the rights of workers. We need to ensure that the use of algorithms and data-driven systems to make decisions, allocate tasks, direct work, and manage and coordinate workflows in organizations doesn’t infringe upon workers’ rights, reduce their autonomy, or harm their well-being. 

A safe and healthy working environment is a fundamental right that should be respected in all workplaces. Achieving such a commitment requires transparency from technology developers and users, along with dialogue and cooperation between employers and workers to promote the benefits and curtail the risks of AI. We cannot allow, for example, workers’ private communications with their union to be monitored through digital technologies, as this would be an infringement upon the fundamental right to freedom of association. 

And third, let me come back to the issue of the digital divide. The ILO’s recent report with the UN Office of the Secretary General’s Envoy on Technology called Mind the AI Divide underscores the importance of enhancing digital infrastructure and technology transfer and promoting AI skills and social dialogue, including collective bargaining. Investing in skills development and lifelong learning is vital for equipping workers with the skills and knowledge required in this rapidly changing environment, including those impacted by job loss and job transformation. Engaging industries through sector skills bodies as well as financial incentives and technical assistance can support these efforts. Micro, small, and medium-sized businesses will continue to need specific support so that we are creating an enabling environment for sustainable businesses that work for employers and workers alike. 

To showcase our work in these areas, we are launching an ILO observatory on AI and work in the digital economy at the end of this month. The observatory aims to enhance our knowledge leadership on the world of work dimensions, including AI, algorithmic management, digital labor platforms, and workers’ personal data. We believe these aspects are under-researched and not yet sufficiently understood by policymakers, employers, or workers themselves. 

As reflected by our close cooperation with the OECD, the UN High Level Advisory Body on Artificial Intelligence, and others, we are committed to working with the UN system and all willing partners to strengthen our understanding and capacity to respond to the implications of AI for the world of work. We look forward to working together with the Human Rights Council to address the world of work dimensions of AI and other new technologies. Thank you. 

President of the UN Human Rights Council

It is my pleasure to introduce Peggy Hicks from OHCHR, who has been leading activities on the human rights impacts of digitalisation and AI. 

Peggy Hicks, Director of Thematic Engagement, Special Procedures, and Right to Development Division of OHCHR.

Thank you, President of the Council, for pulling us together in such an important way. I think we’ve already heard a convincing case from my fellow panelists on both the immense opportunities and the unprecedented perils that are posed by the rapid evolution of digital technology, including the spread of general purpose AI in particular. Within this sea of change and promise, the human rights framework can be and must be a lifeline. The human rights framework brings three palpable advantages. 

First, it’s multifaceted. It addresses all of the key challenges that have been raised related to dignity, privacy, labor issues, free expression, non-discrimination, equality, and justice. 

Second, this framework builds on ethics, the moral principles that have been generally agreed, but it goes beyond them by turning them into legal obligations that states have committed to. And that’s the third point. These are principles that have been turned into law and agreed across continents and across contexts, so they are applicable across all of the different areas we’re talking about. That said, we’re here today in part because it’s easier said than done. It’s challenging to implement the human rights framework in this very complex space, and you only need to read the day’s news, big stories in places near and far, to find fundamental questions and debates about how we are doing in terms of the regulation of business and the upholding of rights with regard to digital technology and artificial intelligence. 

So it’s absolutely clear we need to do several things.

First, we need, as has been said, I think first by our first speaker, to set guardrails, but we need to do it in a way that avoids stifling innovation, because there is so much promise here. 

Second, we do pretty well at identifying the problems, and I think we’ve heard about many of them in the remarks that have been made so far. But I have to say that arriving at solutions is sometimes harder. 

The solutions that we need in this complex area are multifaceted, diffused, and different from place to place and from issue to issue. But instead, we tend to get simple fixes. The idea that we can flip a switch and eliminate disinformation from the online environment that we’ve created. That won’t work. We need longer term approaches, compound ideas that allow us to work on issues and make progress, recognizing that we will not be able to address and solve all of these issues in short form. 

And third, the environment that we work in is often spurred by competition rather than cooperation. The reality is there’s a race to use this technology, get it out there and use it, and be the ones that take the lead in these areas. And that, of course, brings with it a lot of problems in terms of trying to ensure that we have the right guardrails in place. So we need to address that. 

The final thing I’d say is that it’s crucial that this conversation gives equal attention to the roles that the private sector and governments have to play here. From a human rights office perspective, I have to say, we believe that both companies and governments need to do better in this space, although there’s often a bit of finger-pointing between the two about who’s really responsible for the problems that we see. 

So we do need tech regulation, but what we’ve seen is that the regulatory response so far has brought with it a number of laws that are particularly problematic from a human rights perspective and need to be fought. So I’ve said that the human rights framework is what brings us together, and I’ve said a bit about why it’s so important. 

But I think we also have to acknowledge where we on the human rights side need to do better as well. 

And the first is that the human rights framework is generally applicable, but applying it in practice to the challenges we face on the ground is very difficult. And so we need to help guide that application and ensure that the recommendations that are being made in the human rights system are really accessible and implementable for those decision makers who are addressing these challenges on a day-to-day basis. 

We also, and this is a point that this panel really exemplifies, we need to figure out ways to bridge conversations that are happening in this room with those that are happening in related spheres around cybersecurity, around the use of technology and the peace and security issues, around technical standards setting, as has been mentioned, around the intellectual property issues. 

All of those conversations have a human rights component, but we don’t necessarily have the ability to always bridge and make our way into those conversations in the way that we would like. I also want to emphasize, before moving on to some ideas about how we can move forward, that the concern here is not only about ensuring that a human rights foundation is built in, but also that the promise of AI and digital technology is realized by all people everywhere. 

Opportunities abound, but if the past is precedent, the advantages that we see will be concentrated first and foremost in the developed world. And the corollary is that those most in need are likely to have the least access to the benefits.

We need to rectify that inequality and, as has been said so eloquently, ensure that the SDGs are brought into this conversation, with a focus on how we can better achieve them through the use of technology. But that will not happen without concerted effort and real change in the way that we’re currently working. The Global Digital Compact has been referenced.

 And I would emphasize that it’s very clear in its current draft and all the drafts about some of the systems and support that will need to be in place in all countries to be able to mine AI opportunities while managing risk. But I have to say, having spent time in some of the countries that are perhaps furthest along and have some of the greatest advantages with regard to those challenges, even they are struggling with how to move these things forward, setting up new AI safety institutes. 

How will they work? What kind of structures are needed? Those are difficult things to do. And I think the business concentration that has been referenced and the incentives that exist for businesses to engage in some of the ways that I’ve mentioned do not favor the rapid expansion needed to ensure that AI addresses the needs of all people in all contexts and that it’s used in a way that addresses rather than enhances inequality globally. 

So I think much more needs to be done on that front. So those are some of the issues. I would like to say a bit about how my office, the UN Human Rights Office, is addressing some of this and how we work with this body, the Human Rights Council, to do so. These are the issues that we’re really working to address. First of all, one of our big projects is working to flesh out the responsibilities of companies, building on our work on business and human rights. In doing so, we need to take that multifaceted approach that I mentioned. 

We’re encouraging companies to race to the top, trying to differentiate between those who really are rejecting the need to do better on their platforms, for example, and those that are really moving forward to implement stronger measures to address illegal content and hate speech. But we also are working to try to inform government regulatory efforts through the B-Tech project. And we’ve done that in the area of Gen-AI, where we have a specific project. 

We have a community of practice involving some of the largest tech and communications companies. And we’re really looking to expand the efforts and discussions. We have work ongoing in Africa. We’re looking to expand that to India and South Asia and really see this as a conversation that needs to be taken on globally.  

Secondly, one of our big priorities is to bridge these conversations that I’ve talked about by injecting human rights and working with our partners, many of whom are represented here, in key areas. We’ve already mentioned the technical standards setting area, where we work very effectively with ITU and other partners. 

We are engaged around cybercrime prevention and how we can ensure that that document is both adopted and implemented in a way that brings in the human rights concerns. We’re also looking at things like digital public infrastructure, building smart cities, and how to make them human rights-based. 

The third big area, one that you’ve heard quite a bit from us about, is that we’re expanding and trying to work to advise and support states in the development of tech regulations and policies. 

The Director of UNESCO noted that they are working with 50 states on implementing the Recommendation on the Ethics of AI. We, too, are trying to work alongside and with UNESCO to ensure that we bring the strong human rights framework that I mentioned, with all of its advantages, into those discussions as well. 

We are also, of course, mainstreaming efforts in the UN system, working not only with UNESCO and the GDC, but also on a human rights due diligence policy for the UN itself in terms of how we use digital technologies, with the idea that it could help within our own ecosystem but also serve as a model that can be used elsewhere. So I’ll close by saying a bit, in response to the question I was asked about how we can take this forward: what do we expect out of these conversations, following on this important panel? 

The first is that we believe there is a need to bring that human rights framework home to governments and businesses in a better way. And that means both expanding the ability to give tailored support to governments that are addressing these different governmental, legislative, and policy issues, but also expanding the work that I mentioned with regard to the business sector, and advising in that regard as well. 

We do see, as well, a real opportunity, based on the mapping report that the president mentioned that we’ve done, to bring together the work that’s happening within the Human Rights Council. It has proliferated and grown immensely, in a very positive way. I think nobody here will have read all 135 of the different special procedures reports that have focused on digital issues. 

But let me tell you, there’s a wealth of information, ranging from digital impacts on older persons, to the issues around violence against women that I mentioned, to basically economic and social rights implications, and how to roll out technology in areas like the right to health. 

Across the whole spectrum of issues that the council addresses, everything has a digital component. And the council is engaging around that. But with that comes the need to figure out a way to coordinate and ensure that that work is brought together, and that there is the ability to build on, rather than simply replicate, the findings and recommendations that are going forward. 

So we’re looking for ways to enhance cooperation within the mechanisms; a tech coordination group is one of the recommendations. Another key point that I’m sure will come up in the conversation is around the information management system. As I said, there’s an enormous body of work out there, but how do we all access it? How do we know what recommendations have been made by special procedures, by my office, or by other parts of the UN system on a particular pressing issue of human rights and digital technology? 

We need to use digital technology to help answer that question. And there’s a feasibility study that’s been done about how the Human Rights Council can better use information management. We want that to be broadened and used in a way that will allow us easier access to the work that’s been done within the council and beyond. We want to continue the work that we’re doing around bridging and breaking silos. And it’s not only our office, but the Human Rights Council itself that can help in that. 

And it can have conversations like these, where we can dive deeper into some of the challenges. We’ve already done work through the Human Rights Council mandate on technical standards setting that I think was very helpful to our partners and can be continued. But we can do the same in other areas where siloed conversations are happening and we haven’t yet had as much of a human rights focus. That’s something the Human Rights Council can help with. We’ve also talked about ways to expand and deepen peer review. We’re all part of the universal periodic review system, and we know that it’s not easy to cover every issue within that forum. 

But yet that forum is a very important way of bringing attention to conversations across states. So we’re talking about ways that we might be able to have greater efforts around the peer review side in the digital tech space that will also allow greater opportunity for states to benefit from other experiences and practices and potentially get greater support in those conversations as well. 

And of course there were also proposals put on the table by the Ad Hoc Committee looking at how a task force could be created to have multi-stakeholder conversations in the context of the Human Rights Council, and we’re interested to see how that idea can be developed as well. 

This discussion itself illustrates that the Human Rights Council can contribute to addressing these challenges. The consensus here is clear. I’ll echo what the President said, that what I’ve heard is a lot of alignment amongst the statements. While we will have to continue to explore and understand the impacts of AI on our society, it’s now time to do something. The HRC will be taking up these issues, and our office stands ready to support it. Thank you. 

President of the UN Human Rights Council

Thank you. I would like to invite Prof. Jovan Kurbalija, director of DiploFoundation, to present his views. 

Jovan Kurbalija, Executive Director of DiploFoundation.

Thank you, Mr. President. I have to respond to your point that there were no controversies or differences of opinion. Unfortunately, I agree with much of what was said so far. But my controversy will be in bringing some good news and some concrete points that may help us take our discussion forward. What is the good news? 

The first good news is that we no longer have the fear-mongering about AI that we had throughout the last year. You can recall those headlines: stop AI; AI will, as I tend to say, ‘eat us for breakfast’. And it was coming from the companies; I won’t go now into the motivation for why it was done. But now we have a much more balanced discussion: what is AI, and what are the realistic risks, the long-term, medium-term, and immediate risks? This is the first good news. The atmosphere is conducive to good discussion. 

The second good news is that AI is much simpler than we thought. About eight years ago, when I was going to deliver a lecture on AI at the UN, I asked myself: how should I explain this to fellow diplomats? And I started writing the book Understanding AI Through UN Flags. Essentially, all AI concepts, neural networks and everything else, can be explained and understood by walking along the alley of flags. You can download the explanation, but this is the key point: patterns and probability. Those are the two foundational concepts of artificial intelligence. This is the second piece of good news. AI is not as difficult or as complicated as we were told; just always be careful with the terminology.
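The speaker's two foundational concepts, patterns and probability, can be illustrated with a minimal sketch (this toy bigram model and its invented corpus are the editor's assumption, not material from the speaker's book): it counts which word tends to follow which (the patterns) and then predicts the most probable successor (the probability).

```python
from collections import Counter, defaultdict

# A tiny invented corpus, purely for illustration.
text = "the council met and the council agreed and the assembly met".split()

# Learn the patterns: count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Predict the most probable next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # prints "council": it follows "the" twice, "assembly" once
```

Large language models work on the same two ideas, only with vastly larger corpora and learned statistical representations instead of raw counts.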

The third good news is that AI is affordable. I run an organization whose annual budget of around two million Swiss francs, provided by our donor countries, Switzerland and Malta, is equal to OpenAI’s AI processing costs for a single day. 

Our entire annual budget, in other words, equals one day of AI processing, and yet that scale of spending is not what makes these models powerful. You will see in a few moments why AI is affordable, and why your missions, your countries, and your ministries should embrace it and develop it, with limited resources and very fast. This is the point: we have to walk the talk. 

And part of walking the talk is that I am recording this session, and you can receive an AI-driven report from it. Now, it won’t be like the usual Zoom report that you can get; that is now a commodity. AI is becoming a commodity, let’s admit it. But the difference is in the quality of annotated texts. Diplo’s knowledge is based on half a million expert annotations on thousands of documents. 

And that cannot be beaten by Google or any other company. Therefore our knowledge, if it is properly organized and captured, even by a very small organization, can be a powerful tool. 

After this good news, let me make one thought proposition. Think about our meeting today as a unique knowledge exercise. We have had excellent panelists, and I’m sure we’ll have great questions. You will be reporting back to your capitals. Can you imagine what volume of knowledge is generated today? 

And I’m highlighting knowledge, not data. This is what has to feature more in the language of the UN family: knowledge, because knowledge adds insights, reflections, and our shared values on top of simple data. Now, if we think about the knowledge that is generated today on AI, about your thinking and your reflections while I or the other panelists are talking, we start thinking about bottom-up AI. 

And this is my first proposition. We must develop bottom-up AI, for reasons that are fundamental to the work of this Council. Knowledge defines our core humanity, today and throughout history. If you read the foundational books of the major religions, and let’s start from China: in Taoism you have the Way, and you will find that knowledge is a critical concept. When you move to Buddhism, you will find the concept of Anatta, of no-self, again built around knowledge. The same in Indian civilization, and in the Islamic tradition, with Avicenna’s concept of the floating man. And when we come to Europe, there is obviously Descartes, Cogito ergo sum: ‘I think, therefore I am’. But there is a slightly different notion in Africa, in the Ubuntu tradition, which tells us ‘I am because you are’. Those are just a few points where our knowledge, our awareness, defines who we are. 

And frankly speaking, if this council is to serve us well, and I’m sure under such able leadership it will, it has to bring knowledge back across the board. Knowledge could otherwise become centralized in a few hands, and that can be prevented by bottom-up AI. Bottom-up AI is technically feasible, ethically desirable, and financially viable. 

Therefore, my call here, first call, first concrete point, let us develop bottom-up AI, starting with discussions that we have at this meeting, and keeping our knowledge as our knowledge, shared with humanity, but as our knowledge. 

My second point is that we have to revisit some of the core human rights in the context of AI. Take the famous Article 19: if you read it carefully, you have the element of freedom of expression and freedom of opinion, but more than two-thirds of Article 19 concerns the holding and forming of opinions. 

Now, what does it mean if I express an opinion, but that opinion is shaped by answers provided by ChatGPT? Will our opinions be shaped by a few big companies, without reflecting cultural differences? The gap in knowledge is already major. For example, only 5% of the content about Africa on Wikipedia comes from contributors in Africa. I love Africa and have spent quite a bit of my time working on capacity building there. It doesn’t mean that only people from Africa should write about Africa, but less than 5% is a shocking statistic. 

That, therefore, is the second point. Let’s revisit Article 19 and other instruments around the centrality of knowledge. Knowledge should feature more in the language of the council and the human rights community. 

The third point, which I would like to propose here, relates to the SDGs. We have been hearing a lot about AI as a tool for achieving the SDGs, using it to achieve concrete SDGs and help people. But we have never thought of using the SDGs as a guardrail for AI governance. And this is a solution right in front of us. 

The SDGs are currently the most up-to-date codification of societal priorities and values, and they are quite detailed, with indices and other elements. It is always puzzling to me why we don’t ask tech companies and AI developers to simply follow the SDGs. 

Why would it be really helpful to use SDGs as AI guardrails?  First, SDGs are available, and they can be made operational immediately.

Second, using the SDGs as AI guardrails will give new life to the SDGs. We know that the race for 2030 is in a delicate phase. There is ‘SDG fatigue’. There is a need for new policy adrenaline, and it can come from the use of the SDGs as AI guardrails. 

In conclusion, I would like to invite you to consider the centrality of knowledge: the knowledge that you are creating, the knowledge of your children, parents, and relatives. This is our key asset that defines us as human beings. And there is no better place to revitalize the relevance of knowledge in the AI era than the Human Rights Council and the human rights community. Cogito ergo sum should not become, paraphrasing a bit wrongly in Latin, ‘AI ergo sum’. The Ubuntu saying, ‘I am because you are’, should not become ‘I am because AI is’. Thank you. 

President of the UN Human Rights Council

I would like to introduce our next speaker, Professor Ulrich Schlie from the University of Bonn.

Ulrich Schlie, Henry Kissinger Professor for Security and Strategic Studies at the University of Bonn.

Thank you very much. Mr. President, Excellencies, Mesdames et Messieurs. As the last in line, I promise to be brief and limit myself to three points. I can also, perhaps, probe the contradictions that our President has invited. My first point is that digital power will reshape the global order and drive a worldwide process that will have far-reaching institutional consequences. So far, states have been the actors; I think this is being challenged, and it might change. For a long time, states have taken control of all aspects of society. But now we see with AI that we have new, powerful emerging actors on the field of international politics. And this will also have consequences for science. I am talking about international companies. Amazon is sitting here at the table. 

And this will have an enormous impact on the question of world order and how we will shape the future. European companies do not have the size or the geopolitical influence to compete with American and Chinese companies. The biggest competitors are today already similar to states. They bring resources, and they will shape global affairs. Digital space is a new dimension in geopolitics. Social, economic, and political institutions will continue to shift. And governments will recognize that they will be, in some regard, out of control. 

The transformative nature of AI will also transform warfare. It will create destructive new capabilities and change the way military commanders train, deploy, and equip their forces. These changes will shape the military balance. As China advances in the field of AI, the United States sees itself faced with a challenging new competitor. When we look at the further development of the world system, we have to take this into consideration, and we must reflect on the relationship between the states. AI is clearly an arena of strategic rivalry for the global powers.

When we look at the future, we also see possibly the arrival of the so-called singularity of the battlefield. This means that we might not be able to keep up with the speed of the decision-making process in the military. If some authoritarian states opt for a fully automated approach to war, this will raise a number of ethical and operational risks. And what does that mean for the protection of civilians? In support of military commanders, new problems might emerge when officers are encouraged to rely on programs that are prone to error. 

Let me conclude. What is required is a discussion of the necessary strategic and organizational implications. This could have an impact on international humanitarian law, and we will also need an interdisciplinary approach which combines international law, strategic studies, intelligence, technology, computer science, and economics. It has rightly been said by the previous speakers that we need a multi-stakeholder approach. Albert Einstein once remarked that the advent of nuclear weapons has changed everything but our modes of thinking. 

And adapting a quote from the former U.S. Secretary of State George Shultz, I would like to conclude by saying that we need to face new realities, we need new modes of thinking, and the United Nations here in Geneva is the right place to bring together this new multi-stakeholder approach. Many thanks for your attention.

COMMENTS FROM THE FLOOR

Unknown female speaker from Amazon

Commercially available gen AI systems score up to 14 percent. Specially trained ones, 38 percent. Humans typically score around 85 percent. Recently, some AI enthusiasts created a new setup that achieved 50 percent, yet it requires 1,000 times more compute, generating 8,000 programs per question and then selecting the best one. That’s around four cubic meters of water per question. Meanwhile, a colleague’s six-year-old got 63 percent on eight questions. That’s one cookie versus 32 cubic meters of water. The question should not be whether gen AI can solve a problem, but whether it is the right tool for the job. 
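The speaker's water comparison follows from simple arithmetic; this small sketch takes the figures exactly as quoted in the remarks (they are the speaker's claims, not independently verified):

```python
# Figures as quoted in the speaker's remarks (taken as given, not verified).
water_per_question_m3 = 4   # high-compute GenAI setup, per question
questions = 8               # the six-year-old answered eight questions

total_water_m3 = water_per_question_m3 * questions
print(total_water_m3)  # prints 32, matching the "32 cubic meters" quoted
```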

At Amazon, we actually use a process called working backwards to stay focused on our customers’ challenges while remaining flexible in how we approach them. The aim is to find a solution for a problem, not the other way around. Before I end, a question: both the concept note and last week’s press conference mentioned a new social contract for AI. Given some of the extravagant ideas in the gen AI field, what rights will we, and in particular the global South, be asked to give up, and to whom? What part of our property will be sacrificed, and why? Will we be asking citizens to give up privacy and intellectual property to feed systems that, once we look behind the curtain, are but mundane technical aids, useful tools like so many things that came before them? 

The science fiction author Arthur C. Clarke stated that any sufficiently advanced technology is indistinguishable from magic. It is as if these systems were specifically designed to foster that illusion. So I hope that this session serves to start demystifying them and to show that the rabbit was in the hat all along. 

President of the UN Human Rights Council

Thank you, Madam, for your contribution. I do consider it, of course, as showing the reverse side of the coin of artificial intelligence. I now give you the floor, Mr. Robert Chu, our distinguished discussant, please. 

Robert Chu

Thank you, Mr. President. Excellencies. We’ve heard a lot about AI, but I’m coming more from the quantum technology side of things. In that sense, I want to elaborate a little on what quantum technologies mean and perhaps expand your understanding of that definition. I principally work in quantum communication, which is not quantum computing; there is also quantum sensing. 

And in that context, I was part of a team that several years ago put together what is now the multi-billion-dollar European Quantum Flagship program, which aims to push these technologies of computing, communication, and sensing forward and to industrialize them. In that timeframe, we’ve also seen an influx of investment from large companies, small companies, and a plethora of startups in the field. So we see this has accelerated rapidly; investment has accelerated rapidly.

And its impact on society is accelerating, and we’re still only just starting to think about what that means. So I think this discussion here today is very critical. Quantum computing is one of the things that you will have heard a lot about. Some of the biggest multinationals are invested in it. It has unprecedented potential power, maybe not yet attained. But one of the most concerning factors is its ability to break the encryption algorithms that we use for secure communication. 

So this is just one of its potential applications, but it is perhaps the most critical to individuals, nation states, and everyone in between. Quantum actually provides a defence against this in the form of quantum communication, which is a way of encoding or encrypting information using quantum physics. That is more my field of research. So not only do we have quantum computing, which apart from breaking encryption has great potential for humanity; we can also ensure that the encryption, privacy, and security of our communications, our medical records, and our infrastructure are maintained and sustained. 
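The “encrypting information using quantum physics” that the speaker describes is typically realised through quantum key distribution; BB84 is the canonical protocol, though it is not named in the session. A minimal classical simulation of BB84’s sifting step (assuming an ideal channel and no eavesdropper) shows how two parties end up with a shared secret key:

```python
import random

def bb84_sift(n, rng=random.Random(42)):
    """Classical simulation of BB84 sifting (ideal channel, no eavesdropper).

    Alice encodes n random bits in randomly chosen bases; Bob measures in
    random bases. Where the bases match, Bob's bit equals Alice's; where
    they differ, his result is random and the position is discarded.
    """
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bits = [
        bit if a_basis == b_basis else rng.randint(0, 1)
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Publicly compare bases (never the bits) and keep matching positions.
    key_alice = [b for b, a, c in zip(alice_bits, alice_bases, bob_bases) if a == c]
    key_bob = [b for b, a, c in zip(bob_bits, alice_bases, bob_bases) if a == c]
    return key_alice, key_bob

ka, kb = bb84_sift(64)
print(ka == kb, len(ka))  # the sifted keys agree; roughly half the positions survive
```

In the real protocol, an eavesdropper measuring in the wrong basis disturbs the qubits and introduces detectable errors in the sifted key, which is what gives quantum communication the security guarantee the speaker refers to.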

So there are two sides of the coin: quantum communication and quantum computation are in a sense in conflict, but globally they work together towards a potentially better solution for all of us. Communication is not a terribly expensive technology compared to computation, so it is not being restricted to the global North to the same extent. 

And in terms of job creation and startup industries, it is a much more accessible technology front in the quantum domain, which is an advantage. On the computation side, we see a lot of cloud-operated systems; I think some people at this desk have some of those. Jester, whom we actually heard from before, is doing some very nice work in making this accessible to everybody, and I think this is great progress for that field. 

The third aspect of quantum technologies is quantum sensing. This is, again, not as expensive a technology as quantum computing, and it has the potential to improve our health through imaging, diagnosis, and medicines. It is rapidly progressing and will be a good candidate for the global South, for the inclusion of the rest of the planet and not just the global North. And that is important because, whilst these technologies are cheaper and simpler, the concepts behind them are the same as in quantum computing. 

So this gives us a way of educating. We’ve heard a little bit about educating, reducing the fear of technology, and fostering a much more open attitude in general toward scientific literacy, one that accepts quantum technologies and AI and is aware of what they mean.

And I think some of these cheaper, more accessible technologies are a good opening point to expand this and to understand how quantum may be useful in the future. Maybe one of the challenges that I still see in all of this, because it is such a critical technology, is developing frameworks for international collaboration. We heard about cooperation versus competition, and we have straddled this at an academic level quite easily for many years. But as quantum becomes increasingly seen as a critical technology for sovereign states, making cooperation and collaboration work is increasingly a barrier for research and innovation. So thank you very much. I’m looking forward to the discussion. 

President of the UN Human Rights Council

Thank you, Mr. Chu. As a researcher, you brought your own vision of these fast-evolving technologies, and particularly the quantum one. And it has been said by many of the previous speakers that all our efforts should be sustained in the face of the impact of these technologies, particularly AI but also quantum technology, on the enjoyment of human rights. Thank you very much for that. I now give the floor to Mrs. Maria Dimitriadou, World Bank Special Representative to the UN and WTO.

From the beginning of my mandate, I have insisted a lot on bridging with the international finance organizations. I visited your headquarters in Washington last spring, where we had a lot of discussions about what to do more, and what to do better, between the Human Rights Council and such institutions. I do think that we have common concerns and hopefully common purposes as well. We also spoke there about artificial intelligence and what the World Bank is trying to develop in its programs, so please take the floor and share with us whatever you wish, but particularly on this aspect. 

Maria Dimitriadou, World Bank Special Representative to the UN and WTO

Thank you very much, Mr. President, Excellencies, Distinguished Representatives, it is an honor to join you today to contribute our perspective and experience. Digitalization is the transformational opportunity of our time. Digital technologies, including AI and data, present a unique opportunity to accelerate development, poverty reduction, and climate action in an inclusive way that benefits everyone, including women, vulnerable groups, and the poor. The critical services that support development, like hospitals, energy, and agriculture, rely increasingly on connectivity and data. 

The infrastructure that underpins these connections must be available, affordable, inclusive, and safe for countries to boost their development prospects. AI and the data revolution are accelerating digital capabilities for many, but low-income countries are falling further behind. As we’ve heard from the Secretary General of ITU, about one-third of the global population, or 2.6 billion people, remained offline in 2023. 

While more than 90% of people in high-income countries used the Internet in 2022, only one in four in low-income countries did so. In addition, 850 million people lack any form of identification. The global community needs to do more to help developing countries catch up, accelerate digital adoption, and ensure that everyone can reap the benefits. At the same time, fulfilling the promise of digital requires balancing risks and opportunities. As the world goes digital, safeguards are crucial to promote trust. 

Data protection, cybersecurity laws, and solid institutions must be in place to develop and enable strong, interconnected digital systems that can verify identities, quickly and safely transfer payments, and responsibly exchange data, while ensuring that the rights to privacy, free expression, and due process, just to name a few, are maintained. 

At the World Bank, we are working with governments in developing countries to help them build the foundations for digital transformation, including their transition to digital economies, governments, and societies. Our work is currently focused on increasing access to fast, reliable, safe, and affordable Internet. We seek to stimulate demand for digital applications, digital skills, and digital platforms that enable governments, businesses, and individuals to participate more fully in the digital economy. Digital components are increasingly included in projects across diverse sectors, such as transport, education, health, agriculture, and public sector management. 

For example, the Digital Economy for Africa program supports the ambition to ensure that every individual, business, and government in Africa is digitally enabled by 2030. Through this initiative, the World Bank helped increase access to broadband Internet in Africa from 26% in 2019 to 36% in 2022. 

Another example is a project focused on strengthening data infrastructure to close the digital divide in Argentina, aiming to benefit 350,000 residents in areas without Internet connections across 300 communities. Inclusion is at the heart of these investments. 

Our environmental and social framework, which is the Bank’s sustainability framework for investment lending, supports countries in attending to issues of inclusion and non-discrimination, thus helping to open the door for counterparts and stakeholders from different backgrounds to have their voices heard and work together towards ensuring the best outcomes from development projects. 

Additionally, through our Human Rights, Inclusion, and Empowerment Trust Fund, the World Bank continues to work to inform how human rights relate to the World Bank’s core development mission. The Fund supports teams in identifying opportunities to advance human rights in our work. Grants provided by the Trust Fund have informed policy and provided guidance to partners and clients in different regions of the world. Specifically in the case of AI, this work has provided recommendations for identifying and mitigating human rights risks that arise from Bank-funded projects with AI components. 

These recommendations will provide the foundation for future AI governance frameworks for the Bank’s operations. Let me conclude, as the Honorable President introduced my intervention, by expressing our gratitude for his leadership, for visiting the World Bank to discuss these issues, and for exploring how we can contribute to your important efforts. 

While our approaches on this issue will differ and reflect the diversity of partners here today, I believe that the challenge ultimately unites us all. And this is how to address today’s challenges while laying the foundation for a better future. We look forward to continuing the discussion with you. Thank you.

President of the UN Human Rights Council

Thank you so much, dear Mrs. Maria. Yes, indeed, bridging the gap, I mean, reducing the digital divide, and inclusion are fundamental, and the support of institutions like yours in programming and realizing these programs in such a direction is of the utmost importance. It goes without saying. The last discussant we have is Mr. Maxim Stauffer, co-CEO of the Simon Institute in Geneva. Please, sir, you have the floor. 

Maxim Stauffer, co-CEO of Simon Institute for Longterm Governance

Mr. President, thank you very much. Excellencies, good morning or good afternoon. It’s a pleasure to be here. As a discussant, I will allow myself to discuss some of the points made so far quite spontaneously. I will start with some of your points, Mr. Kurbalija. You said that AI is less complex than we thought. I actually think it is the opposite. 

As Professor Schlier said, it will likely change the world order. It will give more power to companies, making them compete with states and challenging the state-based system that we currently rely on. AI is not just a tool. It will probably shape the way we operate societies: which values we encode in those systems, how we process information, how we make decisions. 

The fact that there is less fear-mongering is likely good news. However, it might also mean that we are starting to take the risks a bit less seriously. Recently, the lobbying landscape in Washington, D.C. shifted away from a risk-based focus to an innovation-based focus, a shift also driven by private funding. Mr. Tang highlighted that AI is not entirely new, that it is the continuation of existing technology. Indeed, it builds on previous tech. However, something is new about AI in the sense that it is the first technology that removes power from humans. All technologies until today have given humans power over themselves, over each other, or over the environment. 

With AI, we start outsourcing decisions, outsourcing processes to systems that are black boxes that we cannot entirely control or predict. Therefore, it is a big question of values. Which values do we encode in those systems that we outsource powers to? Mr. Fogles rightly highlighted that the current AI systems, or at least the leading ones, are ultimately biased. But they’re not just biased towards Western norms and values. A recent report by Anthropic, one of the leading AI companies, highlights that AI systems primarily run on American norms and values. 

So you can imagine that it is already hard to build those systems based on American values. Think about how those systems operate for norms, values, and cultures that they were not designed to respect. Now, there’s a question of solutions. Mr. Fogles highlighted that democratizing AI through open sourcing is a way to inject variety, a diversity of cultures, into AI systems. Here, I would appreciate more nuance. It is not that easy to open source AI systems or models. There’s a trade-off; there’s a risk with open sourcing. It can put powerful models in the hands of terrorists. 

Recently in New York, we showed a demonstration of how terrorists can use open source models to commit attacks in New York City. So there’s a delicate balance to find when choosing to open source or not. And I think it is very important that we approach this conversation with nuance. 

There’s an alternative: guardrails have been mentioned multiple times. But guardrails, when it comes to values and ensuring that AI systems are aligned, primarily apply to the R&D phase of AI development, not the application level. Yet if we look at an example, the recent Council of Europe treaty on AI, it explicitly excludes R&D, so that territory is not covered. Now, we’ve heard quite a lot about quantum computing. I would appreciate it if we talked a bit more about normal computing. 

Normal computing, compute, has historically driven AI capabilities. It is the best predictor of more and more capable AI systems. But compute is also about infrastructure, the internet, and data servers, and therefore inequality in compute is also what drives the digital divide. So looking at compute might be a way to improve access and reduce the divide; and looking at compute as a predictor of capabilities might be a way to introduce guardrails, thresholds that companies should probably not pass lest they develop dangerous AI systems. 
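The point about compute as a predictor of capabilities, and as a possible hook for thresholds, can be sketched with the widely used rule of thumb that training a dense model costs roughly 6 × parameters × training tokens in floating-point operations. The threshold value and the model sizes below are purely illustrative assumptions, not figures from the session:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate of dense-model training compute.

    Total FLOPs ~= 6 * parameters * training tokens
    (forward plus backward pass, a common approximation).
    """
    return 6.0 * params * tokens

THRESHOLD = 1e26  # illustrative reporting threshold, in total operations

for name, n_params, n_tokens in [
    ("small model", 7e9, 2e12),      # 7B parameters, 2T tokens (hypothetical)
    ("frontier model", 1e12, 3e13),  # 1T parameters, 30T tokens (hypothetical)
]:
    flops = training_flops(n_params, n_tokens)
    side = "above" if flops > THRESHOLD else "below"
    print(f"{name}: {flops:.1e} FLOPs, {side} threshold")
```

A compute-based guardrail of this kind is attractive to regulators precisely because FLOPs are measurable before a model is released, unlike downstream capabilities.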

Now, Mrs. Bogdan-Martin highlighted the need to harmonize standards, and this is extremely important. And I would like to highlight another point, is the harmonization of layers of governance. For successful AI governance, we probably need a combination of three things. One, speed, because technology is fast. Two, efficacy, because technology is technical. And three, inclusivity, because it affects everyone. 

And no governance layer will satisfy all three of those criteria. The OECD might be fast and technical. The AI Safety Summits might be fast and technical. But both of them will not be as inclusive as the UN is. But the UN, will it be fast and technical enough to address the challenges? Likely not. And that’s why we need to look at the harmonization between the layers of governance. 

Maybe one last point, because I have perhaps sounded negative or critical so far: there is indeed a turning point. Mrs. Bogdan-Martin, you mentioned that the Summit of the Future is a turning point, and I agree with that. The turning point that I see is that we are having this conversation today, and we see a lot of effort at OHCHR and other UN agencies in tackling AI governance, despite the fact that we haven’t seen major incidents. 

Policy change historically happens in reaction to disasters, to extreme media coverage after harm has already happened. In this case, we are having these conversations by virtue of anticipation, Mr. de Coutere: we actually look forward, at the risks and opportunities ahead of us, even though nothing drastic has happened. This is encouraging, and I think it is at the core of the spirit of the Summit of the Future, and I look forward to having more conversations of this kind. In this spirit, the Simon Institute, a think tank based here in Geneva, is at your disposal, so if you have any questions or want to talk more, I’m here to talk to you. Thank you very much. 

President of the UN Human Rights Council

Thank you, Mr. Stauffer, for presenting your view on what I do consider the necessity of good global governance of AI. I just want to refer to one of your observations about the role of the United Nations as such. Yes, the United Nations has to be flexible. It should have the necessary speed to follow all these complex issues. But I do think it is the best place to take mandatory decisions, if need be, again, for the good governance of AI globally. So thank you very much. And now, dear participants, excellencies, we will open the floor for all among you who wish to take the floor with the help of my.

AA

Speech speed

152 words per minute

Speech length

650 words

Speech time

272 secs


Report

As we navigate a historical juncture, artificial intelligence is redefining our existence, marking an epochal shift in both technology and anthropology. This era promises societal advancement but concurrently poses serious threats, such as disinformation and ingrained biases in AI systems.

The internet’s ubiquity, linking over half the global populace, has inadvertently fostered disinformation proliferation, threatening our social fabric. An alarming UNESCO-Ipsos survey reveals that 87% of respondents from 16 countries fear the impact of disinformation on their 2024 elections, illustrating digital media’s sway on democracy.

Generative AI’s potency exacerbates entrenched prejudices, magnifying their impact on society. A 2022 report exposed sexist and racist biases in AI platforms, risking the fidelity of factual and historical accounts. In this ethically fraught landscape, UNESCO has spearheaded discussions on AI’s ethical evolution, aligning with its commitment to science’s future.

This dialogue, initiated at the University of Rabat in Morocco in 2018, bore fruit when UNESCO’s Member States formulated the inaugural global normative framework on AI ethics in November 2021. This guideline, integrating transparency, accountability, sustainability, and gender equality, fosters media literacy as an antidote to the mis/disinformation surge amplified by the COVID-19 pandemic.

These principles are not merely moral compasses but foundations for robust public policy. UNESCO now aids 50 countries, including Latin American and African nations, in embedding these ethical precepts into their national AI strategies. Collaborative efforts with the African Union aim to craft a continental AI blueprint, while efforts to establish global governance of digital platforms are underway through innovative consultations and regulatory frameworks.

Parallel to UNESCO’s endeavours, the Council of Europe also focuses on safeguarding educational rights and human rights against AI’s ethical quandaries, in line with European commitments to fundamental rights. In summary, artificial intelligence, as Sally Fahey aptly noted, is a human construct and must echo human ethics.

UNESCO remains dedicated to guiding AI’s trajectory towards enhancing human welfare and societal progress.

CD

Speech speed

132 words per minute

Speech length

1011 words

Speech time

623 secs


Report

The speaker examined the critical juncture of artificial intelligence (AI), the workplace, and human rights. The dialogue acknowledged divergent perspectives on AI’s expanding role in the workplace, with some viewing it as a path to heightened productivity and others worrying about job displacement and skill obsolescence.

This underscored the complexity of emotions surrounding AI’s influence on work. The speaker stressed the significance of prioritising human rights in AI implementation, aligning with the International Labour Organization’s (ILO) stance and advocating for a balanced approach weighing benefits and challenges.

Regarding AI-induced job disruption, ILO research was cited, projecting the loss of around 75 million jobs, equating to 2.3% of global employment, with the comprehensive adoption of generative AI. Conversely, there’s optimism that AI could enhance or transform roughly six times as many jobs as it displaces, provided that access to technology is equitable.
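The two figures quoted (75 million jobs and 2.3% of global employment) can be cross-checked against each other; the implied employment base of roughly 3.3 billion people is consistent with the ILO’s published global employment estimates:

```python
jobs_lost = 75e6  # jobs projected at risk from generative AI, as cited
share = 0.023     # the stated share of global employment

# If 75 million jobs are 2.3% of the total, the total follows directly.
implied_employment = jobs_lost / share
print(f"Implied global employment: {implied_employment / 1e9:.2f} billion")  # 3.26 billion
```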

A report by the ILO and the World Bank on Latin America showcased how a pronounced digital divide could prevent workers from reaping the benefits of AI. As AI is swiftly deployed, the talk emphasised the need for protective measures for labour rights, as set out in the ILO Declaration on Fundamental Principles and Rights at Work.

These rights should also cover workers generating data for AI systems, often in the global south. The speaker advocated for new regulations and governmental structures founded on social dialogue to prepare for the changes ahead. An AI-driven project in Albania was discussed, pointing out that labour inspections backed by AI were about 30% more effective than random checks, showing that AI can positively affect government administration and labour standards.

However, the adverse effects of algorithmic management, including threats to worker autonomy and well-being, were not overlooked. The talk called for transparency to protect against potential rights infringements, particularly concerning privacy and free association. Addressing the digital divide, the need for digital infrastructure, skills development, and AI literacy was underscored.

Industry contributions through skills initiatives and technical support were acknowledged as vital to a culture of lifelong learning and workplace adaptability. The speaker announced the ILO’s upcoming observatory on AI and the digital economy, reflecting a commitment to an improved understanding of the relationship between AI, algorithmic management, digital labour platforms, and workers’ personal data.

This initiative signifies broader collaborative efforts with organizations such as the OECD and the UN High-Level Advisory Body on Artificial Intelligence, aiming for a collective approach to the repercussions of AI on employment. In conclusion, the ILO’s pledge to work alongside the Human Rights Council and other entities was reaffirmed, ensuring that technological progress aligns with the protection and promotion of workers’ rights in the evolving digital arena.

DT

Speech speed

179 words per minute

Speech length

2316 words

Speech time

857 secs


Report

Throughout history, technological innovations have echoed the progression of the Universal Declaration of Human Rights (UDHR), and artificial intelligence (AI) represents the modern pinnacle of these advancements. Predicated on the concept that technology harbours both potential risks and benefits, the speech highlighted the applicability of the UDHR in mediating the rise of generative AI (Gen AI).

A striking feature was the reference to data trends from WIPO, showcasing a surge in Gen AI development over the last two years, possibly accelerated by the COVID-19 pandemic. Of significance was the revelation that a third of the patents filed in 2022, amounting to 3.5 million, pertained to digital technologies.

This underscores the convergence of digital and traditional industrial technologies, igniting a revolution in various sectors, such as the automotive industry’s integration of data systems and entertainment. The official raised concerns about the monopolization of Gen AI patents by large corporations, primarily within developed nations, hinting at the unequal distribution of resources necessary for AI advancements.

Nonetheless, an optimistic perspective was offered, illustrating the widespread application of Gen AI across diverse industries and regions. Promisingly, smaller entities in both developed and developing countries are leveraging AI to navigate distinct challenges. The digital divide was recognised beyond mere access to technology and extended to encompass differences in opportunities afforded by technology.

Developing countries are eager to exploit technological advantages, as suggested by surveys conducted by Stanford and WIPO. The outcomes highlighted positive dispositions toward AI and intellectual property across burgeoning economies—particularly noted were Malaysia, Mexico, Turkey, and regions within Africa and Asia—indicating an active engagement with AI and technological advancements.

The speaker likened the core values of intellectual property to those inherent in human rights, particularly the respect for human dignity and creativity. The address firmly argued for the continuation of recognising human authors and innovators above machine-generated creations within IP regulations.

The role of WIPO in shaping AI involvement in IP was outlined, featuring the promotion of dialogue, toolkits for policy development, and direct assistance like healthcare AI entrepreneur courses and IP management workshops for small to medium-sized enterprises (SMEs). WIPO also introduced WIPO GREEN, a platform designed to match climate change technologies with worldwide needs, reinforcing the importance of cooperative AI applications to tackle global issues.

In closing, the official assured WIPO’s commitment to collaborating with other United Nations agencies and member states. The objective is to transform ambitions articulated in upcoming UN forums, such as the UN General Assembly and the Global Digital Compact, into practical projects.

These initiatives aim to guide us through the precarious landscapes of AI, balancing the prospects and perils it presents. This dedication exemplifies WIPO’s intent to harness AI to promote equitable development and wealth, resonating with the timeless tenets of human rights.

DB

Speech speed

129 words per minute

Speech length

1599 words

Speech time

822 secs


Report

The speaker praised both national and regional strides in establishing legal frameworks for AI application and acknowledged the United Nations’ considerable advancements, referencing General Assembly resolutions and the forthcoming UN Summit of the Future.

This anticipated Summit will deliberate on the global digital compact, a document significantly shaped by Geneva’s discussions. With the Summit fast approaching in 19 days and only six years left to accomplish the Sustainable Development Goals (SDGs), the speaker depicted this juncture as crucial for redirecting SDG efforts.

They emphasised the stark reality of rampant cyber insecurity, the exploitative use of childhood imagery via AI, and the proliferation of deep fakes that erode democratic processes. The speaker then highlighted the skewed distribution of AI technology, pointing out the absence of leading high-performance computing centres in developing countries, a situation that mirrors the broader struggles in addressing complex global challenges.

They drew attention to the 2.6 billion people still without internet access and the many who, despite connectivity, lack the means or skills to harness emerging technologies effectively, noting the proactive measures taken by the ITU in this regard. The speaker outlined three core areas of focus: 1.

Leveraging digital technologies to salvage the SDGs—a commitment to improving upon the current 17% achievement rate of SDG targets, with AI’s potential to expedite progress on up to 70% of these targets. The ITU’s contribution was exemplified by the AI for Good Global Summit, notably a communication aid for individuals with ALS which showcased AI’s capacity to notably enhance quality of life.

2. Fostering innovation alongside safety and human rights protections—endorsing a human-centric model aligned with UN principles and urging capacity-building for multi-stakeholder engagement. Calls for a rights-based approach to technology adoption, critical for human rights compliance, were highlighted, demanding cooperative efforts, especially in the realm of standard-setting.

3. Promoting international cooperation and inclusive dialogue—the speaker reaffirmed ITU’s dedication to a multi-stakeholder perspective, ensuring voice and representation for those from less developed regions. They stressed the necessity of inclusive, respectful engagement in our increasingly divided world and reaffirmed their commitment to these endeavours.

Referencing Thomas Jefferson’s observations on the need for legal and institutional evolution in response to new circumstances, the speaker underscored the need for proactive adaptation to technological progress. They pointed to the upcoming World Telecommunications Standardization Assembly and the 20-year review of the World Summit on the Information Society as opportunities to realign our technological vision.

In their concluding remarks, the speaker urged for a united effort to secure the SDGs, advocating a harmonised approach to innovation and regulation, and emphasising the centrality of human rights in our digital future. They called for action towards a secure, inclusive, and fair digital environment for all future generations.

JK

Speech speed

143 words per minute

Speech length

1507 words

Speech time

633 secs


Report

In his address to the Human Rights Council, the speaker provided an optimistic view on the recent discourse concerning artificial intelligence (AI), offering proposals for harnessing AI in ways that benefit humanity. The speaker began by acknowledging a noticeable shift in the discussion around AI.

Fearmongering headlines of the past year, which cautioned about AI’s perils and its destructive potential, have given way to a more balanced examination of AI and its implications. This change has allowed for a nuanced understanding of AI’s risks over different time horizons and has led to a more productive debate.

Next, the speaker addressed the simplicity of AI, advocating the idea that fundamental AI concepts can be demystified and made comprehensible through simple metaphors. For instance, the speaker referenced their own book, which elucidates AI principles using the analogy of UN flags.

By simplifying complex ideas like neural networks, the speaker stressed that non-experts could comprehend AI through common sense questions about pattern recognition and probability. The affordability of AI was the third point. Drawing on personal experience, the speaker demonstrated how their organisation could leverage AI effectively with its modest budget provided by donor countries such as Switzerland and Malta.
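The “common sense questions about pattern recognition and probability” summarised above can be made concrete with a toy next-word predictor; the miniature corpus below is an invented illustration, not an example from the speaker’s book:

```python
from collections import Counter, defaultdict

corpus = ("human rights are universal . human rights are indivisible . "
          "human dignity is universal .").split()

# Count which word follows which: the statistical core of a language model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the probability of each word observed after `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict("rights"))  # {'are': 1.0}
print(predict("human"))   # rights about 0.67, dignity about 0.33
```

Scaled up by many orders of magnitude in data and context length, this counting-and-probability intuition is essentially what the speaker argues non-experts can grasp about modern AI systems.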

They compared the costs of AI processing for large companies to their organisation’s yearly budget to show that impactful AI applications can be created with limited financial resources. The speaker then articulated the concept of “bottom-up AI,” which aims to harness AI for gathering and organising knowledge in a way that uplifts individuals and preserves collective humanity.

This approach respects the historical and cultural significance of knowledge and presents a financially viable method to uphold it within AI systems. Focusing on human rights, the speaker pointed out the necessity of re-evaluating aspects like freedom of expression and opinion, as delineated in Article 19, in the context of AI.

Concerns were raised about the impact of large AI systems on public opinion and the formation of beliefs. The speaker went on to highlight the lack of African perspectives in AI-generated content, using Wikipedia as a case study, and called for active steps to ensure cultural diversity and representation.

An innovative idea was put forward to employ the Sustainable Development Goals (SDGs) as a moral compass for steering AI development, in light of how SDGs reflect current societal values and priorities. The speaker posited that focusing AI efforts towards these goals could invigorate their achievement by 2030.

In closing, the speaker emphasised the critical role of knowledge in discussions about AI and the imperative to preserve human intellect and wisdom in the age of AI. Referencing Cartesian philosophy and African Ubuntu ethics, they underscored the necessity of ensuring that our humanity remains at the core of our existence, rather than being overshadowed by AI advancements.

The address was a clarion call to embrace AI’s potential to enrich the human experience rather than undermine it.

PH

Speech speed

181 words per minute

Speech length

2676 words

Speech time

889 secs


Arguments

The human rights framework is essential for navigating digital and AI impacts

Supporting facts:

  • Addresses challenges related to dignity, privacy, labour, free expression, non-discrimination, equality, and justice.
  • The framework is based on ethics and legal obligations.


Implementing human rights in digital spaces is complex

Supporting facts:

  • Regulation of business in technology is currently under debate.
  • News stories frequently question our success in upholding rights within digital tech regulation.


Guardrails are necessary but must not stifle innovation

Supporting facts:

  • Solutions must be multifaceted, and the complexity of the issues must be considered.
  • Regulatory efforts should avoid simple fixes.


The private sector and governments must both improve

Supporting facts:

  • There’s finger-pointing on who is responsible between companies and governments.
  • Tech regulation often has problematic aspects from a human rights perspective.


Human rights guidance must be accessible and actionable

Supporting facts:

  • Applying human rights frameworks in practice is difficult.
  • Recommendations need to be practicable for those making day-to-day decisions.


AI’s potential may exacerbate global inequalities

Supporting facts:

  • Developed nations are likely to access AI benefits first and foremost.
  • Efforts must be made to ensure equitable distribution of AI’s advantages.


Report

The discourse offers a comprehensive examination of the intersections between human rights, digital technology, and artificial intelligence, highlighting the pivotal role of ethical and legal human rights frameworks in navigating the challenges of the increasingly digital era. These frameworks are acknowledged as crucial in upholding dignity, privacy, labour, freedom of expression, non-discrimination, equality, and justice—a reflection of the aims of Sustainable Development Goals (SDGs) 10 and 16, focused on reducing inequalities and endorsing peace, justice, and strong institutions, respectively.

While there is a positive attitude towards the influence of human rights frameworks on the digital and AI sectors, there is also an acknowledgement of the complex nature of realising these rights within digital regulations. The debate about how technology businesses should be regulated continues, with frequent news stories questioning the effectiveness of our current digital technology regulations in upholding human rights.

The discourse also touches on the importance of regulations that protect against potential harm but do not impede innovation—the essence of SDG 9, which promotes industry, innovation, and infrastructure. This balanced view recognises the need for comprehensive solutions that address the intricacies of the digital technology and regulation landscape, avoiding overly simplistic measures that could be ineffective or restrictive.

Furthermore, there is a call for human rights guidance that is both accessible and actionable, emphasising that recommendations and frameworks should translate into practical interventions for everyday decision-making. This reflects an understanding of the gap between theoretical human rights considerations and their practical applications in digital technology and artificial intelligence.

The potential of AI to exacerbate global inequalities is another concern highlighted, in line with SDG 10’s focus on reducing inequalities. The observation that developed countries are positioned to benefit from AI advancements first underscores the need for intentional efforts to ensure the equitable distribution of AI’s advantages.

The Office of the High Commissioner for Human Rights (OHCHR) is commended for its proactive engagement with the challenges posed by digital advancements, advising governments on tech laws and policies, and assisting companies in adopting responsible tech practices. These actions reflect a commitment to enshrining human rights principles in digital technology and align with the objectives of SDG 16.

The Human Rights Council is also noted for its proactive efforts in coordinating various human rights initiatives related to technology, emphasising comprehensive guidance. Although not explicitly linked to a specific SDG, the inclusive approach to integrating human rights in digital technology reflects broader human rights goals.

The discussion acknowledges the tensions between protecting human rights and promoting technological innovation. It suggests a need for a collaborative approach between the government and the private sector to navigate this challenging landscape effectively. While the value of theoretical frameworks is recognised, their practical application requires careful, flexible, and collaborative regulation, with human rights organisations playing a crucial role in shaping a future where technology respects and upholds human rights.

PO

Speech speed

141 words per minute

Speech length

1518 words

Speech time

764 secs


Arguments

Harnessing AI must safeguard human rights

Supporting facts:

  • The UN Secretary General called for a global AI focus on human rights.
  • The Human Rights Council is urged to apply human rights standards to digital technologies.


Need for global guidelines on AI and human rights

Supporting facts:

  • The UN Secretary-General emphasised the need for clear guidelines for AI in the digital age.
  • The Human Rights Council is tasked with developing these guidelines.


Acknowledgement of stakeholders’ contributions

Supporting facts:

  • The president expressed gratitude towards co-facilitators and acknowledged their recommendations.
  • A report with recommendations was shared on the 16th of January.


AI’s transformative potential should align with human rights values

Supporting facts:

  • AI can revolutionize health, education, and contribute to the SDGs.
  • AI’s misuse could lead to discrimination and inequality.


Upskilling for human rights in the digital age

Supporting facts:

  • Human rights principles are relevant in the digital realm.
  • There is a need for legal and legitimate frameworks for AI.


The necessity to address inequalities magnified by AI

Supporting facts:

  • Improper AI frameworks can exacerbate discrimination and privacy violations.
  • Technological advancements shouldn’t deepen the digital divide.


Promotion of international cooperation on AI governance

Supporting facts:

  • The International Telecommunication Union’s response to UNESCO recommendations is ongoing.
  • The Secretary-General has a high-level advisory body on AI.


Report

The global dialogue on advancing and utilising Artificial Intelligence (AI) is predominantly positive, underscoring the imperative to weave human rights standards throughout the fabric of AI’s development. The United Nations Secretary-General has advocated for an AI journey deeply rooted in human rights, echoing the broader stance of international governance on technological integration.

There is a universal agreement on AI’s capacity to transform essential sectors, such as healthcare, education, and employment, thereby making substantial contributions to Sustainable Development Goals (SDGs) such as SDG3 (Good Health and Well-being), SDG4 (Quality Education), SDG8 (Decent Work and Economic Growth), and SDG10 (Reduced Inequalities).

Nonetheless, there is an awareness that inappropriate use could lead to adverse effects, such as growing discrimination and deepening inequality. The demand for a robust and comprehensive human rights framework governing AI is apparent. The Secretary-General highlighted the necessity for clear guidelines to manage AI effectively in this digital era.

The Human Rights Council has been identified as a key driver for establishing these guidelines, ensuring they are embedded with human rights criteria, relating directly to SDG16’s objectives to promote just, inclusive societies with strong, accountable institutions.

In the sphere of stakeholder engagement and international governance, intrinsic to SDG17, joint efforts were gratefully acknowledged. Stakeholder input is valued as essential for creating ethical AI frameworks, signifying that multilateral participation underpins sound AI governance.

The dissemination of the report on 16th January reflects the significance of inclusive discourse in shaping policy. The worldwide trajectory suggests enhancing education and digital literacy, resonating with SDG4 and SDG8, as a pathway to fostering an understanding of human rights within the digital domain.

Importantly, if AI frameworks lack inclusivity and fairness, there is a risk that biases could be encoded into technological progress, leading to a wider digital divide and contradicting SDG10’s aims. In addressing these challenges, the International Telecommunication Union (ITU) continues to engage with UNESCO’s recommendations. At the same time, the Secretary-General’s high-level advisory body on AI ponders the nuances of ethical governance.

Such cooperation highlights the ongoing necessity for international dialogue and collaboration to ensure AI’s ethical and human rights-focused development. The globally proactive stance on constructing a human rights framework for AI aligns with the goals of SDG16 and SDG17.

In New York, contributions to discussions on AI governance signal a commitment to a future where AI supports and champions human rights. In summary, this analysis brings to light a concerted international perspective: AI’s significant benefits must be channelled responsibly, with a steadfast focus on human rights to foster inclusive advancement, prevent social inequalities and bolster robust international partnerships.

It sketches out a collaborative vision for a future in which AI is transformative and equitable and upholds the principles of justice.

SD

Speech speed

121 words per minute

Speech length

1538 words

Speech time

763 secs


Report

The speaker, representing the Geneva Science and Diplomacy Anticipator (GESDA) Foundation, highlighted three significant observations:

1. The notable acceleration in science and technology over the past 75 years.

2. The impact of artificial intelligence (AI) on this acceleration, which could enable the Info-Bio-Nano-Code convergence and positions AI as a pivotal driver in knowledge production.

3. The council’s proactive stance on the human rights implications of emerging technologies, such as digitisation and neurotechnology, was commended. The speaker recalled resolutions showcasing these technologies’ potential to propel human progress and inclusivity. The speaker posed a fundamental question: how can governments and stakeholders actualise the council’s vision regarding emerging technologies?

The response encapsulated the GESDA Foundation’s strategy of employing the concept of anticipation to meet United Nations goals related to peace, security, development, and the right to science. This strategy, underpinned by support from the Swiss government and Geneva authorities, ensures that scientific advancements are democratised early for equitable benefit.

The practical application of this approach involves GESDA’s yearly Strategic Foresight Radar for science breakthroughs, forecasting the trajectory of science and technology in 5, 10, and 25-year frames. The fourth edition of this radar, synthesised with contributions from a vast cohort of over 2,100 scientists from 87 countries, will include 40 scientific topics and 348 potential breakthroughs.

The speaker underlined areas of interest, including eco-augmentation, orbital environments, unconventional computing, and neuro-augmentation. Additionally, the foundation plans to supplement the radar with an AI-powered intelligence tool to guide decision-makers in scientific trends, regulatory compliance, and market implications. This tool is set to debut at Geneva’s upcoming summit.

The foundation’s ‘Quantum for All Initiative’, centred on the Open Quantum Institute (OQI) housed at CERN, was also highlighted. This institute is dedicated to broadening access to quantum computing knowledge and strengthening global capacity. In collaboration with the XPRIZE Foundation, it launched an international competition to generate quantum computing solutions for crucial challenges such as water sanitation and carbon capture, with the winner to be announced in January 2027.

In concluding their speech, the speaker touched on the governance of emerging technologies, mentioning a forthcoming report on quantum development and quantum diplomacy, contextualised by the Sustainable Development Goals (SDGs). This report, a collective effort of international scholars, companies, country representatives, and international organisations, is designed to inform debates on the governance of emerging technologies.

In summation, the GESDA Foundation is proactive in preparing for future scientific and technological shifts through anticipatory action and collaborative stakeholder involvement, which includes promoting scientific literacy and supporting effective multilateralism. The foundation reiterated its commitment to aiding the council and anticipates productive future engagements.

US

Speech speed

78 words per minute

Speech length

586 words

Speech time

453 secs


Report

The speaker highlights the transformative influence of digital power on the international order, particularly examining the impact of artificial intelligence (AI) on politics and warfare. The presentation acknowledges the contradictions mentioned earlier by the session’s president and pledges to clarify three principal concerns succinctly.

The first point addresses the repercussions of digital power on global institutions, emphasising a shift from traditional state actors to influential international companies such as Amazon. These corporate entities, with resources comparable to nation-states, are reshaping geopolitical dynamics, especially when American and Chinese tech giants are contrasted with their European counterparts, who lack similar sway.

The second point discusses how AI revolutionises military activities and strategies, fundamentally altering the balance of military power. With AI, military forces are experiencing changes in training, deployment, and equipment. The focus is on China’s advancements in AI technology and their implications for the United States’ strategic interests, showcasing a competitive arena fuelled by AI’s technological progression.

Addressing the potential future, the speaker introduces the concept of a ‘singularity of the battlefield’, where accelerated decision-making potentially exceeds human capabilities, heightening concerns under authoritarian regimes inclined towards automated warfare. This development poses significant operational risks, including dependence on possibly flawed AI systems, and consequences for civilian safety and compliance with international humanitarian law.

The conclusion of the address urges a comprehensive, multidisciplinary dialogue to grasp and deliberate the strategic and organisational consequences of these technological innovations. There’s a call for a discussion that weaves together international law, strategic studies, intelligence, technology, computer science, and economics, reflecting a drive for a multi-stakeholder process, aligning with the sentiments of earlier speakers.

Using references to Einstein and a former U.S. Secretary of State, the speaker calls for innovative ways of thinking. The speaker also designates the United Nations in Geneva as the optimal arena to foster this new thinking and participatory convergence.

This concluding remark is both a recognition of the United Nations’ capability in tackling complex global issues and an appeal for a unified effort to address the broad effects of the digital revolution and the expanding role of AI in the society of today and tomorrow and in its governing bodies.

WV

Werner Vogels

Speech speed

152 words per minute

Speech length

1784 words

Speech time

772 secs


Report

The speaker offered an extensive examination of artificial intelligence (AI), tracing its evolution from the philosophical conjectures of Plato and Aristotle to the groundbreaking work of Alan Turing and the official genesis at the Dartmouth workshop in 1956.

The address highlighted AI’s significant role in tackling complex issues, including its use in dismantling child sexual trafficking networks, leading to the rescue of thousands, and in AI-aided mammogram analysis that improved breast cancer detection rates by 30% compared with assessments by a lone radiologist.

AI’s transformative influence on business and sectors like agriculture, finance, and social welfare was emphasised, along with a call for “democratic AI” to prevent disparities in access and ensure cultural sensitivity. Notably, the invention of the Transformer architecture a few years ago paved the way for new AI capabilities, including the textual interfaces apparent in customer service chatbots and other consumer applications.

Yet, the speaker pointed out a widespread lack of public comprehension of these advancements, citing potential unanticipated outcomes without proper integration and reiterating the need for ethical considerations and equitable AI accessibility. English-centric Large Language Models (LLMs) can embed Western linguistic and cultural biases, underscoring AWS’s duty to foster a diverse range of AI models reflecting an array of cultures and datasets, facilitating more universally aligned applications.

The talk suggested that future AI deployment should involve widespread education on AI’s potential and constraints, with a call for environmental sustainability in AI’s growth through renewable energy and resource-sharing initiatives, which Amazon’s AWS pledged to endorse. To sum up, the presentation centred on the responsible development and application of AI, encompassing sustainability, equality, and a globally inclusive approach to guarantee the technology’s benefits are widespread while remaining conscious of its societal and global transformative power.

Event gallery