DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023

10 Oct 2023 01:30h - 03:00h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Bengus Hassan

The analysis highlights several important points in the AI conversation. One key finding is that companies are striving for a first-mover advantage in the AI race, often neglecting to consider the ethical implications of their developments. This emphasizes the need for grounding AI conversations in ethics. It is crucial for companies to not only focus on technological advancements but also take into account the potential consequences of their AI systems on society.

Furthermore, data protection emerges as a vital element in the AI conversation. Many countries, particularly those without data protection frameworks, are now grappling with significant data projects and AI implementations. This raises concerns about the privacy and security of individuals’ data. Reports from Paradigm Initiative highlight this issue, shedding light on the absence of sufficient data protection regulations in various regions, particularly in Africa. These findings underscore the importance of developing robust frameworks to safeguard personal information and ensure the responsible use of AI technologies.

The analysis also highlights the significance of diversity in AI. A personal experience shared by Bengus underscores the potential for bias in AI systems. This serves as a powerful reminder that AI technologies should incorporate perspectives from across the world, not just from the global north. To achieve this, diverse representation in AI modeling and research is essential. By encompassing different viewpoints, AI systems can be designed to be more equitable and inclusive, reducing biases and promoting equal opportunities for all.

Another important aspect discussed in the analysis is the role of regulation in the AI landscape. It is argued that regulation should do more than simply impose control; it should create standards. In many countries, the conversation about data protection regulation has often given certain governments an opportunity to seek control rather than to establish reliable and comprehensive standards. This highlights the importance of developing regulatory frameworks that genuinely protect individuals and their data while fostering innovation and advancement in AI technologies.

The analysis also raises the point that innovation tends to outpace regulation. It cites a case in which a country banned cryptocurrency before fully understanding its potential as a foundation for new forms of money and its movement. This example serves as a cautionary tale: regulators and policymakers should strive to comprehend emerging technologies before enforcing restrictive measures. By creating sandboxes in which ideas can be tested within defined frameworks, regulators can grasp the intricacies and implications of new technologies, enabling them to make informed decisions.

In conclusion, the analysis underscores the need to consider ethics, data protection, diversity, and effective regulation in the ongoing AI conversation. Companies must not solely focus on being at the forefront of the AI race but must also take into account the ethical implications of their developments. Strong data protection frameworks are necessary to ensure the responsible use of AI and safeguard individuals’ privacy. Diversity in AI modeling and research is essential for creating inclusive and unbiased systems. Regulation should aim to establish high standards rather than merely exerting control. Policymakers must strive to understand emerging technologies before enacting restrictive measures.

Amandeep Singh Gill

The Secretary General of the United Nations has proposed the creation of a multi-stakeholder high-level advisory body to govern artificial intelligence (AI) practices. The objective of this proposal is to ensure that AI governance is aligned with principles of human rights, the rule of law, and the common good. The advisory body will serve as a credible and independent entity responsible for assessing the risks associated with AI and providing recommendations to governments on global AI governance options.

To ensure its effectiveness, the advisory body will work towards implementing existing commitments made by governments under international human rights instruments in the digital domain. This emphasizes the need for AI governance that upholds these important values.

The formation of the advisory body is still ongoing, with nearly 1,800 nominations from across the world being considered. It is expected to release an interim report by the end of the year, outlining its initial findings and recommendations.

In its work, the advisory body will consult various ongoing AI initiatives to ensure comprehensive engagement and cooperation. These initiatives include the G7 Hiroshima process, the UK AI Summit, UNESCO’s work on the ethics of AI, and the efforts of the International Telecommunication Union. By incorporating knowledge and insights from these endeavors, the advisory body can harness a wide range of expertise to inform its assessments and recommendations.

One important aspect of the advisory body’s mandate is to examine both the risks and opportunities presented by AI with regard to achieving the sustainable development goals. It will conduct a thorough assessment of the potential risks associated with AI, as well as identify the opportunities and necessary enablers that can help AI contribute to the acceleration of progress in these goals.

Overall, the proposal for a multi-stakeholder high-level advisory body on AI governance reflects the growing recognition of the need for responsible and ethical AI practices. By aligning AI governance with principles of human rights, the rule of law, and the common good, the proposed advisory body seeks to guide and shape the development and deployment of AI in a way that benefits society as a whole.

Moderator – Moritz Fromageot

An open forum on AI regulation and governance at the multilateral level took place, organized by the Office of the UN Secretary-General's Envoy on Technology. The event, attended by both in-person and online participants, began with welcome remarks from Moritz Fromageot of the Office of the UN Secretary-General's Envoy on Technology, who outlined the agenda for the day.

Amandeep Gill, the Secretary-General’s Envoy on Technology, delivered keynote remarks on AI regulation and governance, followed by Peggy Hicks, Director at the UN Human Rights Office, who moderated the panel discussion.

The forum then transitioned into a Q&A session, with audience members asking questions and the panel members providing answers. During the session, Amandeep had to leave and Quinten stepped in to fill his role. Bengus also had to leave, so priority was given to including his participation before his departure.

Co-facilitators from the Global Digital Compact were present in the room and encouraged to join the discussion.

For the Q&A session, on-site participants lined up behind the microphone, and the first three questions were collected. The questions focused on balancing the need for quick action with global processes, ensuring the enforcement of agreed-upon rules, and the role of multi-stakeholder assessments in identifying and mitigating risks.

The panel members addressed these questions, and Gabriela took the opportunity to thank the audience and panelists for their participation. Peggy then concluded the session.

Audience

AI regulation is deemed necessary on a global scale because rapid advances in technology are outpacing the development of regulatory frameworks. In the absence of swift global regulation, tech companies are not being held accountable for the ethical and human rights implications of AI. To address this, there is a call for punitive measures or fines to be imposed on tech companies that disregard these implications, an approach with precedent in the European Union's General Data Protection Regulation (GDPR), which imposes significant fines for non-compliance.

Ethical values play a crucial role in the development and deployment of AI. These values, such as dignity, autonomy, fairness, diversity, security, and well-being, are recognized in frameworks from UNESCO, the EU, and the OECD. However, the challenge lies in enforcing these values in specific contexts. It is argued that concrete and measurable adherence to ethical values is essential to ensure the responsible and ethical development and deployment of AI technologies.

Another important aspect of AI regulation is the need for ethical assessments at both micro and global levels. These assessments involve multiple stakeholders and aim to identify, mitigate, and avoid risks associated with AI. At the company level, discussions involving customers and clients are necessary. Additionally, the intersection of bioethics and infoethics needs to be addressed. By including the perspectives of different stakeholders, these assessments can help shape the development and deployment of AI technologies in a manner that upholds ethical standards.

The governance of AI should be guided by global standards that are developed gradually and holistically, ensuring that all aspects, including economic, social, and cultural rights, are taken into consideration. It is also noted that the private sector has an interest in interoperable governance to facilitate seamless transitions across jurisdictions. Governance of AI can likewise involve other policies that shape incentives, such as taxation, trade policy, and intellectual property policy.

In developing AI governance, an interdisciplinary and inclusive approach is advocated. The involvement of voices from all regions, genders, and disciplines is crucial to ensure a comprehensive understanding of the societal impacts of AI and its effects on social, economic, and cultural rights. High-level advisory bodies on artificial intelligence that incorporate diverse perspectives have been established to foster this approach.

Overall, the analysis highlights the importance of global AI regulation, the adherence to ethical values, the need for ethical assessments, the development of global standards, and the embrace of an interdisciplinary and inclusive approach to AI governance. These measures are essential to address the challenges and risks associated with AI technologies and to ensure their responsible, ethical, and inclusive development and deployment.

Moderator – Peggy HICKS

During the discussions on AI governance, participants stressed the need for a comprehensive conversation on this complex topic. They highlighted the importance of addressing issues such as privacy protection, deepfakes, and transparency.

In terms of privacy protection, the speakers noted that recommendations have already been made regarding the establishment of guardrails to protect individuals’ privacy. They emphasized the urgency of taking immediate action on issues like deepfakes and ensuring transparency in the data sets used for large language models.

The global challenge of AI governance was also discussed, with participants calling for a level playing field in the development and implementation of AI technologies. They stressed the need for increased investment to engage with the global majority and ensure inclusive AI governance.

The importance of multi-stakeholder participation in AI governance was highlighted. The participants noted the significant influence held by a small number of companies in the AI sector and called for increased commitment to effective engagement from various stakeholders. Civil society involvement was seen as particularly important in ensuring inclusive AI policy decisions.

Another important aspect discussed was the integration of a human rights framework in AI governance. Participants acknowledged the agreed-upon human rights framework across continents and called for its application in AI governance. They emphasized the need to move beyond rhetoric and make human rights actionable in policy making.

Diversity in the global conversation on AI was recognized as crucial. Participants stressed the need for greater diversity and inclusion to achieve a comprehensive understanding of AI governance issues.

The participants also emphasized the necessity of global standards and guardrails for AI. They highlighted the importance of integrating current knowledge and red lines into global standard-setting processes to ensure responsible AI development.

Transparency emerged as another key aspect of AI governance. Participants advocated for greater transparency in the global AI conversation, including dedicated forums for discussing AI governance.

The discussions also addressed the need for investment in social infrastructure and the digital divide. Participants highlighted the importance of building social infrastructure to support AI development and the role of public investment in creating necessary infrastructure for AI research. They suggested that those profiting from AI should contribute to these investments.

Lastly, participants stressed the need for a global framework to address digital technology and human rights issues. Collaboration across sectors, rights, communities, and countries was deemed essential to effectively tackle these challenges and ensure inclusion of all those affected by technological choices.

Overall, the discussions emphasized the importance of approaching AI governance from multiple perspectives, involving global engagement, multi-stakeholder participation, and a human rights framework. Participants urged immediate action on key issues, increased investment in inclusive AI governance, and the establishment of global standards to ensure responsible and equitable AI development.

Owen Larter

The analysis strongly supports global governance and standards for Artificial Intelligence (AI). The speakers believe that AI presents immense opportunities for humanity but also poses risks that require global collaboration and consensus development. AI encompasses a wide range of tools that offer significant opportunities for industries and infrastructure. However, these opportunities come with risks that transcend boundaries, making a global approach necessary to ensure the safe and responsible development of AI.

The main argument is the need for global standards to be established and adopted by national governments. The International Civil Aviation Organization (ICAO) is an example of successful global governance, involving every country in developing safety and security standards for aviation. The goal is to set global standards for AI in a representative and global way, promoting fairness and accountability.

Developing a global consensus on AI risks is also emphasized. The Intergovernmental Panel on Climate Change is cited as an example of successfully building an evidence-based consensus around climate risks. Similarly, there is a need for a collective understanding and agreement on the risks associated with AI. A global consensus would enable effective mitigation of these risks.

Investment in infrastructure is essential for a broad understanding of AI. The analysis suggests making compute, data, and models publicly available so that researchers worldwide can better understand AI systems. Additionally, a global conversation is needed on the social infrastructure surrounding AI, including ethical considerations and policy frameworks. This ensures that the benefits and challenges of AI are understood by stakeholders and align with global values.

The analysis consistently expresses a positive sentiment towards global collaboration, consensus development, and standard setting in AI. AI is seen as an international technology requiring international cooperation to harness its potential and address challenges. Examples such as ICAO and the Intergovernmental Panel on Climate Change are cited as successful models for consensus building and standards setting.

Furthermore, it is important to apply existing domestic laws to AI systems. Discrimination laws pertaining to loans and housing should extend to cover AI systems to prevent biases and discrimination.

Impact assessments are crucial for AI system development. Microsoft’s responsible AI program is mentioned, where impact assessments with human rights-related elements are conducted for high-risk systems. Sharing the workings and templates of these assessments can benefit the AI community in improving transparency and accountability.

In summary, the analysis strongly supports global governance, consensus development, and standards for AI. Collaboration across nations is necessary to maximize opportunities and mitigate risks. A global approach ensures that AI is developed and implemented in line with shared values, benefiting humanity as a whole.

Gabriela Ramos

Artificial intelligence (AI) has played a significant role in various sectors such as health and education. For instance, AI has contributed to our understanding of how the COVID-19 virus works, which has been crucial in vaccine development. AI has also been utilized in the distribution of benefits within the welfare, health, and education systems.

To ensure ethical advancements in AI development, UNESCO has developed frameworks and tools like the Readiness Assessment Methodology and Ethical Impact Assessment. These resources aid member states in implementing AI in an ethical manner. Currently, 40 countries are deploying this framework, with more expected to follow suit.

Legal frameworks play a vital role in the control and development of AI in the public sector. UNESCO recommends that legal regulation, rather than market forces or commercial reasoning, should guide AI development. Many countries are actively building their capacities to handle AI technologies responsibly.

Interoperability is essential in both technical and legal systems. As technologies become increasingly global, it is crucial to ensure interoperability of technical systems and data flows across countries. Additionally, the transnational nature of technologies calls for interoperability of legal systems to effectively regulate AI developments.

Harmful impacts of AI technologies are a concern, and governments need to understand potential implications and anticipate possible harm. It is essential for governments to have measures in place, such as compensation mechanisms, to address any harm caused by AI deployment.

Gabriela Ramos, an advocate for responsible AI development, emphasizes the role of governments in managing AI impacts and upholding the rule of law. Governments serve a crucial function in monitoring and regulating AI technologies to protect individual rights and maintain social order.

In conclusion, AI has been instrumental in sectors like health and education, aiding in vaccine development and benefit distribution. Ethical advancements in AI are promoted through frameworks and tools developed by UNESCO. Legal frameworks guide the responsible control and development of AI in the public sector. Interoperability, both in technical and legal systems, is crucial due to the global and transnational nature of technologies. Governments play a vital role in managing AI impacts and enforcing the rule of law.

Session transcript

Moderator – Moritz Fromageot:
Welcome to everyone here in the room. Also welcome to everybody who is participating online. We have an open forum on AI regulation and governance at the multilateral level now. My name is Moritz Fromageot, I'm part of the Office of the UN Secretary-General's Envoy on Technology. Let me quickly walk you through the agenda of the day. We will start this off with some panel remarks by our esteemed guests here, and then we'll have a big Q&A session in which we want to engage with you, the audience. We will start this off with keynote remarks by Amandeep Gill, who is the Secretary-General's Envoy on Technology, and after that, Peggy Hicks, Director at the UN Human Rights Office, will moderate the panel, and after that we go over to the Q&A session. Yeah, without further ado, I would hand over to Amandeep to introduce the topic.

Amandeep Singh Gill:
Thank you very much, Moritz. Welcome to this event, this discussion on AI governance and the very important dimension of human rights, the role of human rights in how we approach AI governance. So to set a little bit the context, I will talk about the Secretary General's proposal in his policy brief on the Global Digital Compact that he launched on June 5th this year for a multi-stakeholder high-level advisory body for artificial intelligence that, as the SG said, would meet regularly to review AI governance arrangements and offer recommendations on how they can be aligned with human rights, the rule of law, and the common good. This proposal that he reiterated in his remarks to the first Security Council debate on artificial intelligence in July is currently being put into practice. So this advisory body is being formed as we speak after a process for nominations that ran along two tracks. One was member states being invited to nominate experts to the Secretariat, and the other was an open call for nominations. And altogether, we got about 1,800 nominations from around the world. So different areas of expertise, backgrounds, different geographies. So it's very satisfying to see that degree of interest and excitement about this proposal. We kind of hit the right spot with this. Now, what is the advisory body when it comes together? What is it supposed to do? The Secretary General has tasked it to provide an interim report by the end of the year. And there is a context to this timing. The discussions on the Global Digital Compact restart early next year. They move into a negotiation phase. So this interim report would help those who are putting together the GDC to consider one of the more important dimensions. There are these eight important high-level dimensions along with the cross-cutting themes of gender and sustainability that have surfaced through the consultation.
So it’ll bring more substance and expert-level insight into that discussion. So after that, there is time for the advisory body to consult more widely, including with ongoing initiatives. You heard the Japanese Prime Minister speak about the G7 Hiroshima process. There is the UK AI Summit. There has been work that’s been done earlier in the G7, G20 on AI principles. And there is longstanding work in the UN context. And today, I’m very happy to be joined by some of my colleagues. The work in UNESCO on the ethics of AI, a consensus recommendation adopted by all member states. The work in the International Telecommunication Union on some of the standards that underpin digital technologies, but also at the AI for Good meetings. And then, most importantly, from the perspective of the SG’s vision and today’s topic, the work being done by the Office of the High Commissioner for Human Rights on how to make sure that existing commitments that government member states have taken under international human rights instruments are implemented in the digital domain. So I just want to conclude by saying that this body that will start meeting soon would help us pool multidisciplinary AI expertise from around the world to provide a credible and independent assessment of AI risks and make recommendations to governments on options for global AI governance in the interest of all humanity. I think those conversations that are happening today, they are very important, they are essential building blocks, but if this is an issue that concerns all humanity, then all humanity needs to be engaged on it through the universal forum that is the United Nations. The risk discussion can often be political or it can be motivated by economic interests. We want a discussion in which there is an independent, neutral assessment of that risk and a communication of that to the global community at large.
At the same time, we also need to make sure that the opportunities and the enablers that are required for AI to play a role in the acceleration of progress on the sustainable development goals, they are also assessed, they are also presented in a sober manner to the international community. So looking at the risks and the opportunities in this kind of manner allows us to put the right governance responses in place, whether they are at the international level or at the national, regional regulatory level or at the level of industry where there may be self-regulation, co-regulation schemes to address risks, including through the kind of initiatives that the Japanese Minister shared yesterday. So I’ll stop there and hand it to Peggy for the moderating panel. Thank you, Peggy.

Moderator – Peggy HICKS:
Great, thank you so much. We’re so fortunate to have Amandeep with us to give us that overall perspective about where we stand on these issues now. I’m Peggy Hicks with the Office of the High Commissioner for Human Rights, and I’ll have the pleasure of moderating the panel but also giving some introductory remarks from the Human Rights Office perspective, starting out to sort of set the course for us by making four introductory remarks. One is that I think when we’re looking at the issues of AI governance, we need to be able to have a complex conversation. We tend to throw out the term AI and think that we all know what we’re talking about. We tend to talk about existential risk, near-term risk, short-term, mid-term risk, with no real definitions on the table. We need to break the conversation down. We need to be aware that there are areas where AI is already in use, being used in human rights sensitive and critical areas like law enforcement, where we don’t have any question about what needs to be done. We just need to implement the things that we already know. Recommendations have already been made about the guardrails that should be in place, for example, on mass surveillance technologies to protect privacy and in other places. We need to move forward on that and we don’t have to wait to do that. But then we also have the issues that have really rushed to the surface around generative AI, where there is a real need to look at what are the new challenges that are presented. And even within that area, some are immediate in terms of, for example, the impact of deepfakes, the need for watermarking and provenance to be put in place as quickly as possible, transparency around data sets for large language models. So there are things that we can do urgently, even within that emerging space.
But then we have to also be able to look forward at the same time to what are the risks that are in our future that we see, and to be able to do the hard work of putting in place the governance mechanisms and approaches that will allow us to make sure that we’re tackling not just what we already know, but what we foresee for the future. The next point I want to emphasize is that that is a global challenge. And as much as we appreciate all the different efforts at the national and regional level, we need to be able to come together in a global way to address these issues. We need to be able to learn from each other, we need to recognize that the solutions won’t work if they’re only solutions that are adopted and taken in one place. And for that global engagement to work, we need to create a level playing field. And that means that there needs to be much greater investment and resources and engagement with the global majority that may have more difficulty being part of these policymaking conversations going forward. The third piece is one that of course comes up in the IGF context all the time, is around what we mean by multi-stakeholder and how that has to be part of the governance approach that we undertake in AI. And I want to emphasize that when we talk multi-stakeholderism, we are talking both in terms of the business side of things and the civil society side of things. And in fact, what we need on each of those pieces is quite different. With regards to business, there’s a tendency to really look at how we engage and to some extent mitigate the extent that a small number of companies have an enormous influence in this space. 
But at the same time, we need to create a race to the top where those companies may be the ones that are best prepared to put in place some of the guardrails that we need, but we also need to protect against the way other businesses will come into the sector and are coming in, perhaps with less incentive to put those same guardrails in place as we go forward. On the civil society side, we all know that that is an area where there’s a lot of commitment to general participation, but perhaps not as much to effective engagement. And we need a different pathway. We need to draw on the expertise. We need to make sure that civil society is present, because they’re the ones that will help us to make sure that no one’s left behind. And finally, and you won’t be surprised to hear me say this, I want to make a pitch for human rights and the human rights framework as being a crucial tool to allow us to move forward in all of these areas effectively. We’ve heard in many of the sessions I’ve been in already at the IGF how we have to build on what already exists and not create everything afresh. Well, the human rights framework is a framework that has been agreed across continents, across contexts. It’s celebrating its 75th anniversary today. My pin shows. And we need to find a way that we leverage it in this space. But that also requires support for us to be able to do that more effectively. It requires all of us to move from the talking point of, yes, we’re grounded in human rights, to making it actionable in a variety of ways in the policymaking context. So those are the introductory remarks from my side. But I’m very much looking forward to hearing from the contributors today. And I’m very pleased that we’re going to turn first, I guess, to Bengus Hassan, who’s the executive director of Paradigm Initiative and a member of the IGF leadership panel. So over to you, Bengus.

Bengus Hassan:
Thank you, Peggy. And thank you, Amandeep, for the earlier comments. I think it’s important to start with the three areas that have been identified by the Secretary-General, human rights, rule of law, and the common good, as they help with the ongoing conversation. But let me start with a statement someone made. So at the opening ceremony, someone who sat, I think, behind me. Yes, behind me. I shouldn’t confuse behind with beside. That person leaned over after the session and said, look at the stage, there’s no diversity during the AI panel, and then we had a conversation. And the conversation we had wasn’t just about diversity, but was about many things. And Peggy, you’re right. Civil society already, I mean, AI is not new. It’s been said that AI is the unofficial theme for the 2023 IGF. I’m sure if you got a dollar for every time AI is mentioned here, you’d all be billionaires already. And also, there’s a tendency for us to assume that a conversation we’re having is understood by everyone and we’re all at the same level, but we’re not. So first of all, there are people whose level of inclusion, even before you have conversations about AI, is not there. We already have a divide, right? We already have a divide that is contributed to by some of the problems that civil society is trying to address. And so three very quick things for me. Number one is that in all of this conversation, we’ve talked about the need for human rights, for the rule of law, and for the common good. But I think the common good will only be served if we have a conversation that is based on ethics. And I say this because if you look at all of the race, literally, like the AI race that we had over the last few months, and I’m sure we’ll hear a bit more from the private sector representative on this, at some point, there had to be a call to say, guys, let’s stop. And the reason for that was because it became a race literally without rules.
And everybody was trying to be the first to do it. Of course, there are many reasons for that. There’s the economic incentive, the first-mover advantage, and all of that. But those conversations must be built on ethics. And thankfully, we already have many frameworks around human rights that can guide us in this. So we’re not creating new principles. We’re not saying that the ethics should be based on new inventions. We already have principles for that. The second is data protection. And I say this particularly because we’ve had many conversations about the need for privacy and protection, but, for example, at Paradigm Initiative we do a report every year on the state of the internet and digital rights across the African continent. And one of the major challenges that we have found is that there are many countries that do not even have data protection frameworks yet. And not only are they now talking about collecting biometric data, but they’re also talking about AI, they’re talking about massive data projects, and that is a real concern. So ethics, and also data protection. And I’ll come back to the first point that I made about diversity, not just diversity in terms of conversation. It’s great to have a panel, and at times we think that with tokenism you can solve the problem, but we need to go beyond the tokenism. I think that the importance is not just in the conversations, but also in the modeling. I always give the example of my very first experience with an AI demo, somewhere not too far from here. I stood in front of this machine where everyone was standing and testing it, a machine that would tell you where in the world you’re from and a bit more about yourself, based on the data it had been fed. 
And then I faced this machine and I said, hi, and I said, hello, and I said a few words. And the machine not only said I was from the wrong continent, but also said I was very angry, and I was like, wait a second, what is going on here? And by the way, that project was already being used by a country to determine who to arrest based on prank calls. So it meant a risk for anyone who sounds the way I do. I sound like this all the time because I’m Nigerian. I’m from a country of 200 million people. You need to raise your voice to be heard. So when I speak, I need to raise my voice. So if the machine thinks I’m angry, it’s not because I am, it’s because I’m Nigerian and I have to raise my voice. So I think it’s absolutely important for us, not just in conversations, but in modeling and also in research. AI by nature is global, but global does not mean it happens in the global north. Global means that it has applications across the entire world, and if it has, then it means that diversity must be a fundamental factor in what we do. Otherwise, we’re going to keep having many of the problems we currently have on social media, where platforms are struggling to interpret something that is understood within one context and means something else entirely once it crosses to another context. So ethics, data protection, and diversity.

Moderator – Peggy HICKS:
Thank you very much, Benga. Words to live by there, and I’m sure we’ll go back to each of those three points. But I understand that Gabriela Ramos is now online and able to join us, so I’d like to introduce Gabriela Ramos, who is the Assistant Director-General for Social and Human Sciences at UNESCO. Over to you, Gabriela.

Gabriela Ramos:
Thank you so much, Peggy, and I’m very sorry, but I got the wrong link, and I was with a very technical expert. Very interesting session, but it was not mine. Great to be here with you, and thank you. Great to share this panel with you and with Amandeep. And I could not agree more with what the previous speaker mentioned. I think that ethics is a good guide because it’s not only about the challenges we are confronting now, but also the challenges that might be posed to us by these very fast-moving technologies. We are now questioning all these issues brought by generative AI, but AI is not new. And we know for how many years AI has been used to take decisions that are substantial and relevant for all of us. We know the application of these technologies in the distribution of benefits in the welfare system, the health system, the education system. We know how much facial recognition has been used, and it is now being debated how much we can rely on it to take decisions in the public sector. But the public and the private sector have been taking decisions based on AI for many years. We tend to forget, but we know that having a vaccine to fight the COVID pandemic was actually possible because of the analytical capacities that the technologies could put together to understand how the virus works. So it’s not new, but the questions that we ask are of course much more relevant given the pervasiveness and also the speed at which these developments are advancing. So it’s very important that we have the right frameworks. 
If these major technologies are just deployed in the markets for geopolitical reasons, for commercial reasons, for profit-making reasons, it’s not going to work. And that’s why we are very pleased to be contributing to framing the technologies in the right manner at UNESCO. Two years ago, the 193 member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence. And I recognize Amandeep as one of the major contributors, because he was part of the multidisciplinary group that we put together to develop the recommendation. And it was pretty straightforward, but I feel it was also framed in the right way, because the question was not to go into a technological debate about how we fix the technologies or how we build them in certain ways to deliver what we want to have in the world. The question was actually: what are the values that we are pursuing? And then we built everything around that. It’s a societal debate, not a technological debate. And the values, we know them. The values that the technologies should serve are human rights and human dignity, fairness, inclusiveness, protection, privacy. And these values need to be served by certain principles or goals. And you know them, because these goals are accountability, transparency, proportionality, the rule of law. These principles are part of the equation that has been advanced by many, many players in the artificial intelligence ecosystem. But from our perspective these principles need to be translated into policies, because policy is what will make the difference. Yes, the technologies are being developed mainly by the private sector, but this will be no different from many other sectors of the economy where governments need to provide the framework, and the right framework, for them to develop according to the law. 
And at the end, it’s not that governments are going to go into every single AI lab to check that we have diverse teams, that the quality of the data is there, that the training of the algorithm has the adequate checkpoints not to be tainted by biases and prejudice. But at the end, when you have the norm and when you have the tools and the audit systems to advance these kinds of outcomes, that is when you get things right. And this is where we are now in the conversation, because when the member states adopted the recommendation, it was not only left to the goodwill of anybody who wanted to advance in building these legal frameworks; they also asked UNESCO to help them advance specific tools for implementation, because we also have a heterogeneity of capacities and systems that can be put together. And therefore, we developed two tools. The first, to understand where member states are regarding the recommendation, is the readiness assessment methodology, which is not only a technological discussion; again, it is about the capacities of countries to shape these technologies, to understand them, and to have the legal frameworks that are necessary for them to deliver. And then we also developed the ethical impact assessment. And I feel that we are now converging with many other institutions and organizations that are advancing better frameworks for developing AI. Just last Friday, we were with the Dutch Digital Authority, because this is also an institutional debate. For us, this is for governments. Governments need to upgrade their capacities and the way they handle these technologies because, as I said, I’m a policy person, and the reality is that this is about shaping an economic sector. An economic sector that, yes, pervades many other sectors and is changing the way all the other sectors are working. But at the end, it’s an economic sector. 
The way that the technologies are produced can be shaped, can be determined by technical standards, but it can also be determined by the rule of law. And it’s not as difficult as it might seem, at least in terms of having these guardrails. When we say, for example, that we need to ensure human determination, then what the recommendation established is that we cannot grant AI developments legal personality. And I feel this is just the very basic step to ensure that whenever something goes wrong, there is going to be a person, somebody who is in charge and who can be held legally liable. And then we also need to have redressal mechanisms and to ensure that the rule of law is really upheld online. I’m proud that this framework is now being deployed by 40 countries around the world, and we will be having more. Next week we are going to be in Latin America launching the American Council for the Implementation of the Recommendation, and we’re partnering with many institutions, with the European Union, with the Development Bank in Latin America, with the Patrick McGovern Foundation, with Bilastar, to ensure that we work with member states to look at how they can build up these capacities to understand the technologies and to deliver better frameworks. We always also talk about skills, skills, skills: skills to understand, to frame, to advance a better deployment of the technologies. I feel that it’s also very important that we have the skills in the public sector to frame and to understand, because these are such fast-moving technologies that we need to be able to anticipate the impacts that they can have in many fields that have not been tested. 
But if you ask me for the bottom line, and I think this is not the way that generative AI or ChatGPT arrived on the market, it is that you need to have an ethical impact assessment, a human rights impact assessment, of major developments in artificial intelligence before they reach the market. I think this is just right due diligence, and it’s not what is happening in many of these developments as we see them. And therefore, I think it’s the moment to put the conversation in the right framework to ensure that these technologies deliver for good. And we are seeing many movements. We just saw the bill that was put together in the US Congress. We know what the European Union is doing. We know how many countries are advancing this, and we’re also doing it with the private sector. We cannot put all the private sector in one basket either. We’re working with Microsoft and Telefónica, because this also needs to be a multi-stakeholder approach, also gathering civil society and the many, many groups that need to be represented, because the ethics of artificial intelligence concerns us all. I’m so glad that I had this minute to share these thoughts with you, and I’m looking forward to the exchanges. So thank you so much.

Moderator – Peggy HICKS:
Thank you very much, Gabriela. It’s wonderful to hear your comments based on the experience of UNESCO with the ethics of AI development, but also its application, as you said, and the work that’s being done globally to move forward on these issues. And I think the point that you make around human rights impact assessments and the need for them to be done before things reach the market is one that we’ll come back to as well. I’d like to turn to our final panelist now. We’re fortunate to have with us Owen Larter, who’s Director of Public Policy in the Office of Responsible AI at Microsoft. Over to you, Owen.

Owen Larter:
Thank you, Peggy. It’s a pleasure to be here. It’s a pleasure to be part of such an esteemed panel. So as Peggy mentioned, I’m Owen Larter at Microsoft. We are very enthusiastic about the opportunity of AI. We’re excited to see the way in which customers are already using our Microsoft co-pilots to better use our productivity tools. We talk a lot about co-pilots at Microsoft rather than auto-pilots. The vision for Microsoft around AI is very much retaining human dignity and human agency at the center of things. And I think more broadly, we see AI as a huge range of tools that is going to offer humanity an immense amount of opportunity, really to understand and manage complex systems better and to be able to address major challenges like climate change, like healthcare, like a lot of what is being addressed in the SDGs. So a lot of opportunity, but I think it’s clear that there is risk as well, as has been discussed on this panel, and so we need to think about governance. And I think as we turn to governance of AI, we need to think about governance globally. As was said before, AI is an international technology. It is the product of collaboration across borders. We need to allow people to be able to continue to collaborate in developing and using AI across borders. It’s also quite clear that the risks that AI presents are international. They transcend boundaries. An AI system created in one part of the world can cause harm in another part of the world, either intentionally or by accident. And so I think as we think about global governance, it’s worth taking a little bit of a step back and understanding where we are. And I do feel like an enormous amount more work is needed, but we’ve made a huge amount of progress in the last year. We’re coming up to quite a significant milestone, which is that we’re just a few weeks shy of the one-year anniversary of ChatGPT being launched on the 30th of November in 2022. 
And I think we can see the way in which that has really changed the conversation around the world on these issues. I think it’s fantastic to see the way in which the UN has done what the UN is always very good at doing, which is really catalyzing a global and representative conversation on these issues. We’re excited about the high-level advisory body. We think that’s going to be really productive work. Really delighted to be working with UNESCO to be able to take forward their recommendation on artificial intelligence. We think that’s a really important piece of work. And really exciting to see the way in which you now have concrete safety frameworks being developed and implemented around the world. People might be familiar with the NIST AI Risk Management Framework. This is from the National Institute of Standards and Technology in the US. They published their AI Risk Management Framework at the start of this year. It really is a global best practice framework that any organization can now use to develop their own internal responsible AI program. So I think we’ve moved to a place where we have the building blocks of a global governance framework in place. I think now it really behooves us to take a bit of a step back and think about how we chart a way forward. And I think there are probably a couple of things that are worth bearing in mind as we do that. The first is having a bit more of a conversation about where we actually want to get to. What do we want a global governance regime for AI actually to be able to achieve? And then secondly, what can we learn from the many attempts and the many successes in global governance in other regimes? So I’ll offer a few thoughts in closing. I think as we move forward, we ultimately want to get to a place where we are setting global standards that are being developed in a representative and global way that can then be implemented by national governments around the world. 
And I think there are great lessons to draw from organizations like ICAO, the International Civil Aviation Organization, part of the UN family. It does a great job of including pretty much every country around the world in developing safety and security standards for aviation globally. So I think there’s more that we can learn from that. I think the other thing that we need a global governance regime to do is to help us develop more of a consensus on the risks of AI. That’s a really important part of thinking about how we address them. So I think of organizations like the Intergovernmental Panel on Climate Change, which has done a fantastic job of developing an evidence-based consensus around risks in relation to climate, and a really effective job of then taking that out and driving a public conversation, which can lay the groundwork for policy as well. I think the final suggestion I’ll make is that we really need to invest in infrastructure as we move forward. That’s both the technical infrastructure, so that we’re able to study these systems in a holistic and broad way. It is very resource-intensive to develop and use these systems, so we need to provide publicly available compute, data, and models so that researchers around the world can better understand these systems and can develop the much-needed evaluations that we need going forward. I think the other bit that is just as important, if not more so, is thinking about the social infrastructure. How do we have a global conversation on these issues in a sustained way that is properly representative and brings in views from everywhere around the world, including the global south? I think conversations like this and the work that the IGF is doing are a great start on that front and really important. I think there’s more that can be done. One small contribution that we’ve made so far, and we want to do more, is setting up a global responsible AI fellowship. 
So we have a number of fellows around the world, including from countries like Nigeria and Sri Lanka and India and Kyrgyzstan, where we’re bringing together some of the best and brightest minds working on responsible AI, right across the global south to help shape more of a global conversation and inform the way that we at Microsoft are thinking about responsible AI. I think there’s much more opportunity to do this kind of thing when we’re moving forward. But I’ll pause there for now.

Moderator – Peggy HICKS:
Great, thanks so much, Owen. It’s been really helpful to hear your comments on what the global AI governance challenge looks like and what some of the next steps are that we need to take. Just to pull together some of the thoughts before we turn over for the question and answer: I think we heard very similar messages to some extent from our somewhat diverse panel, not as diverse as we’d need to be probably here either, Benga, but we all recognize the need for that global diversity. How we achieve it, I think we still have a lot of work to do. We can commit to it in principle, but in practice it requires a lot more effort and a lot more resources to make it a reality. We also heard the importance of really putting in place guardrails based on what we already know in the space and moving forward on them. The governance conversation with regards to best practices is there, but we also need to recognize that we do have some red lines, and those red lines ought to be part of the global standard-setting process as well. And finally, we also need to understand the need for greater transparency and a greater ability for a global conversation to happen, and that means making sure that forums like this one are available to a much broader audience. I liked Owen’s comments about the social infrastructure that’s needed, and that will require investment and commitment as well to move forward. So with that, I think I will close this first segment of the panel discussion and turn over to Moritz, who will guide us through the question and answer. Over to you.

Moderator – Moritz Fromageot:
Thank you very much, Peggy. So we will now take the time for an extensive question and answer. You all have the possibility to ask any question you might have. Unfortunately, Amandeep had to leave the session already, but our colleague Quinten is filling in. Also, I understood that Benga has to leave in 20 minutes as well, so we might just prioritize you in the process. And I’m also seeing that we have the co-facilitators of the Global Digital Compact in the room, so do let us know if you want to participate in the discussion. For the on-site questions, you can line up behind the microphone over there, first come, first served. We will collect the first three questions and then have the panel answer them. And yeah, feel free to ask anything regarding the session topic.

Audience:
OK, that’s a nice clarification. Hello, everyone. I’m Alice Lenna from Brazil. I’m also a consultant for GRI, the Global Index on Responsible AI. And I have a question that I think relates to everything that you’ve said so far. Because we’ve been hearing in all the panels on AI that AI must be regulated through a global lens, right? It can’t be just national frameworks. And we’ve also been hearing that it must happen now. It’s urgent. And we know that global regulations are not the fastest regulations we have. So my question is, how do we balance these needs? Thank you.

Hi, I’m an attorney-at-law from Sri Lanka. Last year I did a course at CIDP in Washington, and I’ve been studying AI policy. I was just wondering, the biggest threat is that the technology is running far ahead of the law. And since we were speaking of a global AI regime, et cetera, is there any possibility that punitive measures, like fines or penalties, can be given to these tech companies which are going ahead without the implications, the human rights aspect, the ethics, being examined before they put out the tech? I feel the only way is to penalize them somehow, like how GDPR brought huge fines. Is there any conversation on that going on? I just want to know.

Hello, my name is Yves Poulet, and I am vice chairman of the IFAP UNESCO program. My specialty is infoethics, and I am chairing a working group on infoethics at UNESCO. I think we all agree about ethical values. There are a certain number of ethical values which are recognized by the UNESCO recommendation, by EU regulation, by the OECD, and these ethical values are very well known. That’s dignity, that’s autonomy, that’s definitively fairness, that’s diversity, that’s the problem of security and well-being, and so on. So the problem is not the ethical values. I think that Gabriela was right. 
The problem with ethics is not the problem of designing the ethical values. The problem is to what extent these ethical values are met in a concrete situation. And that’s another problem, and that’s another difficulty. And that’s why I think we definitively need to have legislation imposing what we call ethical assessment. I think it’s very important to have this ethical assessment. At a micro level, it means at the company level. And this ethical assessment absolutely needs what we call multi-stakeholders within the company, and perhaps the customers, perhaps the clients, and I don’t know exactly who must be around the table. But we need to have this multi-stakeholder and multidisciplinary assessment to clearly enunciate the risks, to mitigate the risks, and definitively to try to avoid the main risks. And that’s very, very important, I think. If we have this ethical assessment at the micro level, I think that’s the most important thing. At the global level, I think we definitively need to have a discussion about a very important issue, like the augmented human. It is quite clear that bioethics and infoethics, tomorrow, will join together. It is quite clear that, definitively, we must have a certain number of reflections about these systems, especially as we have the problem of the manipulation of people, and all these questions. So my question is to know what your position is about this reflection?

Moderator – Moritz Fromageot:
Yes, thank you very much. Just one suggestion: I think for the next round of questions, you could also say whom on the panel you address the question to, then we can have it a bit more targeted. So yeah, three questions. The first one is on how to balance the need for quick action with global processes that can take a little longer. The second question is on enforcement: how do we make sure that the rules that we agreed on are actually applied? And the third one is on the need for multi-stakeholder assessments and on how to mitigate risks and also enforce the rules. So who would like to go ahead?

Gabriela Ramos:
I can chip in if you want.

Moderator – Moritz Fromageot:
Perfect, Gabriela. Then we’ll start with Gabriela and then give over to Benga.

Gabriela Ramos:
Okay. Well, thank you. I think these are very relevant questions, and it’s true that the technologies are global and therefore this transnational character needs to be recognized. And I feel that’s why we are always referring to interoperability, not only of the technical systems and the data flows across countries; we are also talking about interoperability of the legal systems, because at the end, the kind of definitions that you have in one jurisdiction will determine the kind of outcomes when you go into international cooperation for law enforcement. But at the end, the very basic tenet of all this construction is to have the enforcement of the rule of law regarding these technologies at the national level. And this is the emphasis that we are putting on the implementation of the recommendation on the ethics of AI with the many different countries with whom we are working, because at the end, governments need to have the capacity, first, to understand the technologies, which is not as straightforward as it seems. Second, to anticipate what kind of impact they can have on the many rights that they need to protect. And then to have commensurate measures whenever there is harm. And I think that this is another bottom line. Whenever there is harm, there should be compensation mechanisms. And these are the areas where governments need to upgrade their capacities. Then, of course, we need international cooperation, because at the end it will not work if you only have regulatory fragmentation at the national level. It’s very important that we also have these kinds of exchanges in a multi-stakeholder approach, to ensure that we learn from each other and that we can also share what we know about the front-running developments in terms of legal frameworks and those that are lagging behind. 
But I feel, again, the role of governments is really important in trying to ensure that the rule of law is respected. But that’s their task, and that’s what they are paid for.

Moderator – Moritz Fromageot:
Thank you, Gabriela.

Owen Larter:
Fantastic. I can jump in and give some thoughts, and I agree with a lot of what Gabriela said as well. On the global piece, I think it’s exactly right to look at these issues through a global lens. The risks that are presented are global. But I don’t think that necessarily means that every single national regulation needs to look the same. Exactly as Gabriela said, I think it’s all about interoperability. And I think a big part of this will be developing some global standards, in relation to how you evaluate these systems, for example, that different countries can then implement in a way that is sensible for them. In terms of how to apply the law and where the law might apply, I think there is a large amount of existing domestic law that should be being applied right now in relation to AI systems. If you’re in a country where you have a law against discriminating against someone in providing a loan or access to housing, it shouldn’t matter whether you’re using AI or not; that law should apply. I don’t think it should be a defense that, oh, yes, I discriminated against this person and gave loans on unfavorable terms, but I was using AI, so don’t come and penalize me. That’s not going to hold. So I think existing law should be applied across various different jurisdictions, whilst we also put in place these other frameworks that address some of the specific issues of AI as well. And then in relation to the impact assessment process, I think it’s a great thought. We are very enthusiastic about impact assessments at Microsoft. It’s one of the many things that we’re very enthusiastic about in relation to the UNESCO framework. We actually have an impact assessment as a core part of our responsible AI program at Microsoft. So any high-risk system that is being developed, the product team has to go through an impact assessment. 
It has a number of human rights related elements to it in relation to making sure the system is performing fairly, addressing issues of bias. We think that’s a fundamental sort of structured process to be able to go through. We actually have now started publishing the templates that we use for our impact assessment, and we’ve also published the guide that we use to help our colleagues navigate the impact assessment process. We think it’s really important to share our working as we go as a company so that others can quite frankly scrutinize and build on it and improve it. So we’d welcome thoughts that people have on the impact assessment template that we’re using at Microsoft.

Bengus Hassan:
Thank you. I mean, just to build on the earlier contributions, in terms of regulation being global and the fierce urgency of now, I can understand why that is the conversation that is happening, because that’s a natural reaction to some of the confusion we’ve seen in the last one year. But one thing is that, first of all, regulators and academia are now trying to diagnose the issues that have been identified, and they have had brief conversations with governments, but they have not had conversations with the individuals in the various places affected, in order to understand before implementing new regulations. And I think it’s really important to say this: regulation is about creating standards, not about implementing control. And I say this because this is the same conversation we had about data protection regulation in many countries, where it then became an opportunity for certain governments to seek to legitimize control over areas where they were supposed to create standards, standards that they themselves were also going to abide by. But I think there are many existing processes that we can build on. And I can understand why global always gives the idea of being slow, because there’s negotiation, and there are some countries that may just want to be contrarian, because they want to take the mic and speak or something. But there are existing processes and there are things that work. I like the example that you gave of the International Civil Aviation Organization, and there are many examples that we can look at. We can talk about some of the multi-stakeholder conversations we’ve had at ICANN, and now at the IGF, and we can build on those processes. 
And on the second question, very quickly: I understand the concern, and like you said, there are many existing laws that can be applied. But I am also a bit cautious when it comes to the tension between innovation and regulation or policy. I think that innovation will always, always, always be ahead of regulation, and what is important is for regulators and policymakers to at least seek to understand before regulating. We have seen this in many instances. I know a country where we are working where cryptocurrency was banned, and we had to write a policy brief to the central bank: you cannot ban this; what you are banning is the foundation of new forms of money and movement. So I think it is really important to create sandboxes where people can experiment with ideas within specific frameworks, so that if something goes wrong there are rules to abide by. But it is absolutely important that, in the name of caution and of not allowing people to go awry, we are not stifling innovation, because we have also seen that happen, where regulation does not understand innovation and wants to jump ahead of it. Thanks, and I'll pop in as well.

Moderator – Peggy HICKS:
The first question, I think, is a really important one. That idea that we cannot come up with a global framework: I have said that a million times myself, that making a treaty is not going to get us there, because it will take us too long, and by the time we got it, it would already be outdated. But I think Bengus's answer, and Owen's and Gabriela's as well, have named some of the pieces that we have, and we need to build piece by piece. One thing that we desperately need right now, which we talked about in a conversation earlier today, is an authoritative monitoring and observatory function that will give us greater…

Audience:
There is a kind of paradox here: everyone is talking about universally global standards, and everyone is talking about fast. What I would like to suggest in a minute is that perhaps in this case slow may be fast, in a sense. There are reasons why this could happen very quickly, including the fact that the private sector is very interested in interoperable governance so that they can move through jurisdictions easily without facing different regulation in each one, so there is a lot of carrot there. But to get a global agreement, to move from 20 countries to 50 countries to 193 countries, all of those countries have to want this. And what we have noticed, at least in the Global Digital Compact process, is that the term human rights has often had certain connotations for certain groups of countries. As an example, we had a lot of submissions to the Global Digital Compact process, and we did a word count of how many times human rights appeared compared to the words digital divide. For some of the political groups from the Global North, the ratio was up to 15 to 1: every time digital divide was mentioned, human rights was mentioned 15 times. For some of the other groups, representing the Global South, human rights may have been mentioned zero times and digital divide several times. So completely the opposite. Now, what I am going to suggest is that when we think about human rights holistically, yes, we have the individual civil and political rights. We also have the social, economic, and cultural rights in Articles 22 to 27 of the Universal Declaration of Human Rights. And these are also human rights, and these also need to be protected and governed for.
And these are human rights which the whole world can get behind, including the right to work, employment, favorable pay, a standard of living, education, and protection of authorship. So how can the world think about the governance of AI from a holistic perspective and bring along the countries who have more urgent, pressing needs on the economic and development side, taking an approach that is holistic not just geographically, to 193 countries, but also from a governance perspective? And if you allow me one more interpretation here: we are talking a lot about regulation and legislation in this panel, but governance can also involve other types of policies, not just legal regulation, and not even just ethical or technical standards. It can also involve policies that shape incentives, from taxation to trade policy to intellectual property policy, which, by the way, is also one of the socio-economic and cultural rights, the right of authorship. So how can the conversation be shaped so that governance is thought of holistically across the different parts of the UN's work, not just what is commonly thought of as human rights, the civil and political rights, but also the economic, social, and cultural rights, and the Sustainable Development Goals? And how can all of the countries who, when they hear human rights, think it does not address the economic side, actually embrace such a concept of governance? We hear a lot about AI accelerating the SDGs, but how is that actually going to happen? We can talk about productivity tools like Office Copilot 365, and that is great for a lot of office workers in the West, but how does that actually put bread on the table? How do we get the climate-resilient agriculture that people keep talking about?
Does that involve different forms of economic policy, like prizes or subsidies, or other incentive-creating policies, as in the COVID challenge trials where the vaccine was developed in a matter of weeks instead of the usual years? How does that happen, so that we get real material impact on the SDGs? So I would say slow is fast here. To get 193 countries agreeing globally, they have to see an interest in it, and to see an interest in it, we have to think of human rights holistically, to include the whole Universal Declaration of Human Rights, not just a sub-part of it. And to get to that, we need a holistic approach to policy which does not focus only on regulation, but also embraces other kinds. That is why, when the Secretary-General put together his High-Level Advisory Body on Artificial Intelligence, which will look at governance, there was an explicit choice to make it interdisciplinary: to include voices from all regions and genders, but also from all disciplines, including digital economy and anthropology, to look not just at the impacts on individuals' human rights, but also at the societal impacts on social, economic, and cultural rights. Thank you.

Moderator – Moritz Fromageot:
Thank you very much, dear audience and dear panel. I will hand it back to Peggy to wrap this up very quickly.

Moderator – Peggy HICKS:
Thanks. Quentin already helped me out with that assist on the human rights side, but I do think it is a crucial point, and one that we need to think about: when we use the words human rights, the digital divide, and what it means for people who are suffering from a lack of technology, is also a human rights issue, one that falls in the basket of economic, social, and cultural rights, as Quentin has described. But we have to get away from a terminology debate and move forward on the issues that we have discussed today. I see the facilitator for the Global Digital Compact here as well. There is a lot of work to be done in building that global framework, but it does need to be done across sectors and across rights, and also across communities, countries, and people. And that means finding ways to bring in all of those who are going to be affected by these choices in a much more effective way. That goes to the second part of the question you asked: how do we make sure the resources are available to do it? I think that is a fundamental piece here, that we need investment in this global public good. And that does mean, as I think Owen brought up, that social infrastructure needs to be built, and that means public compute resources that will allow researchers to do the research we all know we need them to do. So it is really about looking at those questions and finding a way to make sure that those who are making the profits out of this are also helping us to invest, so that the opportunity side of artificial intelligence is there for all of us. Thank you all so much for joining us. Thanks to the wonderful panel we have had with us today. And I hope everybody enjoys the rest of the IGF. Thank you.

Amandeep Singh Gill

Speech speed

150 words per minute

Speech length

871 words

Speech time

348 secs

Audience

Speech speed

150 words per minute

Speech length

1561 words

Speech time

623 secs

Bengus Hassan

Speech speed

199 words per minute

Speech length

1673 words

Speech time

503 secs

Gabriela Ramos

Speech speed

166 words per minute

Speech length

2066 words

Speech time

745 secs

Moderator – Moritz Fromageot

Speech speed

157 words per minute

Speech length

484 words

Speech time

185 secs

Moderator – Peggy HICKS

Speech speed

200 words per minute

Speech length

2122 words

Speech time

636 secs

Owen Larter

Speech speed

235 words per minute

Speech length

1799 words

Speech time

460 secs