UNSC meeting: Artificial intelligence, peace and security

18 Jul 2023 09:00h - 17:00h

This is an initiative launched by Switzerland as an elected member of the UN Security Council. It aims to provide quick and easy access to the content of UNSC meetings through AI-powered reporting and to explore the potential of this technology. We have selected 10 UNSC meetings that took place between January 2023 and October 2024 and discussed elements of ”A New Agenda for Peace“. These meetings have been analyzed by Diplo Foundation using DiploGPT, their artificial intelligence solution. Below you will find the resulting report on one of the meetings.

Table of contents

Disclaimer: All reports and responses on this website are machine-generated. Errors and incomplete information may occur, and it is strongly recommended to verify the content with the official UN meeting record or the livestream on UN Web TV. AI has great potential to make UNSC meetings more accessible, but this is still in the experimental stage.

Key themes and observations

Artificial Intelligence and International Peace and Security: A United Nations Security Council Debate

The United Nations Security Council convened a landmark session to discuss the implications of artificial intelligence (AI) for international peace and security. This debate brought together representatives from member states, AI experts, and UN officials to address the opportunities and challenges presented by rapidly advancing AI technologies. The session covered a wide range of topics, including AI’s potential benefits for peacekeeping and conflict prevention, its associated risks to global security, and the urgent need for international governance frameworks.

AI’s Potential for Enhancing Peace and Security

Several speakers highlighted AI’s potential to contribute positively to international peace and security efforts. Japan emphasized the importance of human-centric and trustworthy AI, suggesting that AI could enhance efficiency and transparency in Security Council decision-making. The United States and France noted AI’s capacity to bolster global security architecture, augment decision-making processes, and enhance humanitarian efforts.

Specifically, AI was recognized for its potential to improve UN peacekeeping operations. Ghana and Ecuador pointed out that AI could be used for identifying early warning signs of conflicts, facilitating coordination of humanitarian assistance, improving risk assessment, and enhancing the safety and security of peacekeepers. Switzerland mentioned its development of an AI-assisted analysis tool for the UN Operations and Crisis Center, demonstrating concrete steps towards integrating AI into peacekeeping efforts.

Risks and Challenges Associated with AI

While acknowledging its potential benefits, many speakers also expressed concerns about the risks AI poses to international peace and security. The United States, China, and France highlighted potential threats such as AI-enabled cyberattacks targeting critical infrastructure, the spread of disinformation, and the possible misuse of AI for terrorist or criminal purposes.

Ecuador and Malta raised specific concerns about the integration of AI into autonomous weapon systems, calling for human control to be maintained in military applications of AI. The issue of lethal autonomous weapons systems was a recurring theme, with several countries, including Brazil and Switzerland, advocating for their prohibition or strict regulation.

Gabon and Mozambique drew attention to the potential for AI to exacerbate global inequalities, noting that the resources and capabilities necessary for AI development are not evenly distributed across the globe. This disparity could reinforce existing inequalities and create new asymmetries in the international system.

The Need for Global AI Governance

A central theme of the debate was the urgent need for global governance of AI technologies. UN Secretary-General Antonio Guterres proposed creating a new UN entity to support collective efforts to govern AI and announced the formation of a high-level advisory board on AI to explore global governance options.

Many countries, including Japan, China, and France, expressed support for international collaboration and UN involvement in AI governance. They emphasized the importance of developing ethical principles, legal frameworks, and international standards to guide the responsible development and use of AI technologies.

However, there were differing views on the approach to AI governance. While some countries advocated for binding international agreements, others, like Russia, expressed skepticism about discussing AI in the Security Council at this stage, arguing that more specialized, scientific discussions are needed before addressing it at this level.

Balancing Innovation and Regulation

The debate also touched on the challenge of balancing innovation with regulation in the rapidly evolving field of AI. The United Arab Emirates emphasized the need to establish rules and guardrails for AI governance while cautioning against over-regulation that could stifle innovation, particularly in emerging nations.

China advocated for a balanced approach that promotes innovation while ensuring safety and controllability in AI development. The United States highlighted its efforts to maximize AI’s benefits while mitigating risks through initiatives such as an AI risk management framework and a blueprint for an AI Bill of Rights.

The Role of Multiple Stakeholders

Many speakers stressed the importance of involving multiple stakeholders in AI governance. Guterres called for the integration of the private sector, civil society, and independent scientists in AI governance efforts. Malta supported the development of universal instruments for ethical AI frameworks, emphasizing the need for cooperation among multiple stakeholders.

Brazil and Ghana advocated for a whole-of-society approach to AI governance, highlighting the need to leverage the potential of the private sector while retaining human rights at the core of all ethical principles.

Conclusion

The UN Security Council debate on AI and international peace and security highlighted the complex and multifaceted nature of this emerging technology. While there was broad agreement on AI’s potential to contribute positively to peace and security efforts, concerns about its risks and the need for robust governance frameworks were equally prominent.

The session underscored the urgency of developing international standards and ethical guidelines for AI development and use, particularly in contexts that could impact global peace and security. It also highlighted the need for continued dialogue and cooperation among nations, as well as the involvement of diverse stakeholders in shaping the future of AI governance.

As AI continues to advance rapidly, the international community faces the challenge of harnessing its benefits while mitigating its risks. This Security Council debate marks an important step in addressing these challenges at the highest levels of global governance, setting the stage for future discussions and potential actions to ensure that AI contributes to a more peaceful and secure world.

Transcript of the meeting

President – United Kingdom:
The 9,381st meeting of the Security Council is called to order. The provisional agenda for this meeting is maintenance of international peace and security, artificial intelligence, opportunities and risks for international peace and security. The agenda is adopted. I warmly welcome the Secretary General and the distinguished high-level representatives. Your presence today underscores the importance of the subject matter under discussion. In accordance with Rule 39 of the Council’s Provisional Rules of Procedure, I invite the following briefers to participate in the meeting. It is so decided. The Security Council will now begin its consideration of Item 2 of the agenda. I wish to draw attention of the Council members to the document S-2023-528, a letter dated 14th July 2023 from the Permanent Representative of the United Kingdom of Great Britain and Northern Ireland to the United Nations addressed to the Secretary General transmitting a concept paper on the item under consideration. I give the floor to the Secretary General, His Excellency Mr. António Guterres.

Secretary General – Antonio Guterres:
Mr. President, Excellencies, I thank the United Kingdom for convening the first debate on artificial intelligence ever held in this Council. I have been following the development of AI for some time, and indeed I told the General Assembly six years ago. We know that AI would have a dramatic impact on sustainable development, the world of work, and the social fabric. But like everyone here, I’ve been shocked and impressed by the newest form of AI, generative AI, which is a radical advance in its capabilities. The speed and reach of this new technology in all its forms are utterly unprecedented. It has been compared to the introduction of the printing press. But while it took more than 50 years for printed books to become widely available across Europe, CHET-GPT reached 100 million users in just two months. The finance industry estimates AI could contribute between 10 and 15 trillion US dollars to the global economy by 2030. Almost every government, large company, and organization in the world is working on an AI strategy. But even its own designers have no idea where their stunning technological breakthrough may lead. It is clear that AI will have an impact on every area of our lives, including the three pillars of the United Nations. It has the potential to turbocharge global development, from monitoring the climate crisis to breakthroughs in medical research. It offers new potential to realize human rights, particularly to health and education. But the High Commission for Human Rights has expressed alarm over evidence that AI can amplify bias, reinforce discrimination, and enable new levels of authoritarian surveillance. Today’s debate is an opportunity to consider the impact of artificial intelligence on peace and security, where it is already raising political, legal, ethical, and humanitarian concerns. I urge the Council to approach this technology with a sense of urgency, a global lens, and a learner’s mindset. Because what we have seen is just the beginning. Never again will technological innovation move as slow as it is moving today. Mr. President, AI is being put to work in connection with peace and security, including by the United Nations. It is increasingly being used to identify patterns of violence, monitor ceasefires, and more, helping to strengthen our peacekeeping, mediation, and humanitarian efforts. But AI tools can also be used by those with malicious intent. AI models can help people to harm themselves and each other at massive scale. Let’s be clear. The malicious use of AI systems for terrorist, criminal, or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale. AI-enabled cyberattacks are already targeting critical infrastructure and our own peacekeeping and humanitarian operations, causing great human suffering. The technical and financial barriers to access are low, including for criminals and terrorists. Both military and non-military applications of AI could have very serious consequences for global peace and security. The advent of generative AI could be a defining moment for disinformation and hate speech, undermining truth, facts, and safety, adding a new dimension to the manipulation of human behavior, and contributing to polarization and instability on a vast scale. Deepfakes are just one new AI-enabled tool that, if unchecked, could have serious implications for peace and stability. And the enforcing consequences of some AI-enabled systems could create security risks by accident. Look no further than social media. Tools and platforms that were designed to enhance human connection are now used to undermine elections, spread conspiracy theories, and incite hatred and violence. Malfunctioning AI systems are another huge area of concern. And the interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming. Generative AI has enormous potential for good and evil at scale. And creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead. Without action to address these risks, we are derelict in our responsibilities to present and future generations. Mr. President, the international community has a long history of responding to new technologies with the potential to disrupt our societies and economies. We have come together at the United Nations to set new international rules, sign new treaties, and establish new global agencies. While many countries have called for different measures and initiatives around the governance of AI, this requires a universal approach. And questions of governance will be complex for several reasons. First, certain powerful AI models are already widely available to the general public. Second, and unlike nuclear material and chemical and biological agents, AI tools can be moved around the world, leaving very little trace. And third, the private sector’s leading role in AI has few parallels in other strategic technologies. But we already have entry points. For example, one is the 2018-2019 Guiding Principles on Lethal Autonomous Women’s Systems agreed through the the Convention on Certain Conventional Weapons. I agree with the large number of experts that have recommended the prohibition of lethal autonomous weapons without human control. We also have the recommendations on the ethics of artificial intelligence agreed through UNESCO in 2021. The Office of Counterterrorism, working together with the Interregional Crime and Research Center Institute of the United Nations, has provided recommendations on how member states can tackle the potential use of AI for terrorist purposes. And the AI for Good summits, hosted by the International Telecommunications Union, have brought together experts, the private sector, United Nations agencies and governments around the world, to address the existing challenges while also creating the capacity to monitor and respond to future risks. It should be flexible and adaptable, and consider technical, social and legal questions. It should integrate the private sector, civil society, independent scientists and all those driving AI innovation. The need for global standards and approaches makes the United Nations the ideal place for this to happen. The Charter’s emphasis on protecting succeeding generations gives us a clear mandate to bring all stakeholders together around the collective mitigation of long-term global risks. AI poses just such a risk. I therefore welcome calls from some member states for the creation of a new United Nations entity to support collective efforts to govern this extraordinary technology, inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change. The overarching goal of this body is to ensure that the United States and the rest of the world are prepared to respond to the challenges of climate change. The United States and the rest of the world are prepared to respond to the challenges would be to support countries to maximize the benefits of AI for good, to mitigate existing and potential risks, and to establish and administer internationally agreed mechanisms of monitoring and governance. Let’s be honest, there is a huge skills gap around AI in governments and other administrative and security structures that must be addressed at the national and global levels. A new UN entity would gather expertise and put it at the disposal of the international community, and it could support collaboration on the research and development of AI tools to accelerate sustainable development. Mr. President, as a first step, I am convening a multi-stakeholder, high-level advisory board for artificial intelligence that will report back on the options for global AI governance by the end of this year. My upcoming policy brief on a new Agenda for Peace will also make recommendations on AI governance to Member States. First, it will recommend that Member States develop national strategies on the responsible design, development, and use of AI, consistent with their obligations under international humanitarian law and human rights law. Second, it will call on Member States to engage in a multilateral process to develop norms, rules, and principles around military applications of AI, while ensuring the engagement of other relevant stakeholders. And third, it will call on Member States to agree on a global framework to regulate and strengthen oversight mechanisms for the use of data-driven technology, including artificial intelligence, for counterterrorism purposes. The policy brief on a new Agenda for Peace will also call for negotiations to be concluded by 2026 on a legally binding instrument to prohibit little or most weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law. I hope Member States will debate these options and decide on them. the best course of action to establish the AI governance mechanisms that are so urgently needed. In addition to the recommendations of the new Agenda for Peace, I urge agreement on the general principle that human agency and control are essential for nuclear weapons and should never be withdrawn. The Summit of the Future next year will be an ideal opportunity for decisions on many of these interrelated issues. Mr. President, I urge this Council to exercise leadership on artificial intelligence and show the way towards common measures for the transparency, accountability and oversight of AI systems. We must work together for AI that bridges social, digital and economic divides, not one that pushes us further apart. I urge you to join forces and build trust for peace and security. We need a race to develop AI for good, to develop AI that is reliable and safe, and that can end poverty, banish anger, cure cancer and supercharge climate action. AI that propels us towards the Sustainable Development Goals. And that is the race we need. And that is a race that is possible and achievable. And I thank you.

President – United Kingdom:
I thank the Secretary General for his briefing. I now give the floor to Mr. Jack Clark.

Jack Clark:
Thank you very much. I come here today to offer a brief overview of why AI has become a subject of concern for the world’s nations, what the next few years hold for the development of the technology, and some ideas for how policymakers may choose to respond to this historic opportunity. The main takeaway from my remarks should be, we cannot leave the development of artificial intelligence solely to private sector actors. The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace. So why am I making this statement? It helps to have a sense of recent history. One decade ago, a company in England called DeepMind published research that showed how to teach an AI system to play old computer games like Space Invaders and Pong. Fast forward to 2023, and the same techniques that were used in that research are now being used to create AI systems that can beat military pilots in air fighting simulations, stabilize the plasma in fusion reactors, and even design the components of next generation semiconductors. Similar trends have played out in computer vision. For a decade ago, scientists were able to create basic image classifiers and generate very crude pixelated images. And today, image classification is used across the world to inspect goods on production lines, analyze satellite imagery, and improve state security. And for AI models which are drawing attention today, like OpenAI’s ChatGPT, Google’s BARD, and my own company AnthropX Claude, are themselves also developed by corporate interests. So a lot has happened in 10 years, and we can expect new and even more powerful systems in the coming years. We can expect these trends to continue. Across the world, private sector actors are the ones that have the sophisticated computers and large pools of data and capital resources to build these systems. And therefore, private sector actors seem likely to continue to define the development of these systems. While this will bring huge benefits to humans across the world, it also poses potential threats to peace, security, and global stability. These threats stem from two essential qualities of AI systems. First, their potential for misuse, and second, their unpredictability, as well as the inherent fragility of them being developed by such a narrow set of actors. On misuse, these AI systems have an increasingly broad set of capabilities, and some beneficial capabilities sit alongside ones that compose profound misuses. For example, an AI system that can help us in understanding the science of biology may also be an AI system that can be used to construct biological weapons. On unpredictability, a fundamental sense of AI is that we do not understand these systems. It is as though we are building engines without understanding the science of combustion. This means that once AI systems are developed and deployed, people identify new uses for them, unanticipated by their developers. Many of these will be positive, but some could be misuses like those mentioned above. Even more challenging is the problem of chaotic or unpredictable behavior. An AI system may, once deployed, exhibit subtle problems which were not identified during its development. Therefore, we should think very carefully about how to ensure the developers of these systems are accountable so that they build and deploy safe and reliable systems which do not compromise global security. To dramatize this issue, I think it’s helpful to use an analogy. I would challenge those listening to this speech to not think of AI as a specific technology, but instead as a type of human labor that can be bought and sold at the speed of a computer and which is getting cheaper and more capable over time. And as I have described, this is a form of labor that has been developed by one narrow class of actors, companies. We should be clear-eyed about the immense political leverage this affords. If you can create a substitute or augmentation for human labor and sell it into the world, you are going to become more influential over time. Many of the challenges of AI policy seem simpler to think about if we think of them like this. How should the nations of the world react to the fact that anyone who has enough money and data can now easily create an artificial expert for a given domain? Who should have access to this power? How should governments regulate this power? Who should be the actors able to create and sell these so-called experts? And what kinds of experts can we allow to be created? These are huge questions. Based on my experiences, I think the useful things we can do are to work on developing ways to test the capabilities, misuses, and potential safety flaws of these systems. If we’re creating and distributing new types of workers which will go into the global economy, then it stands to reason we would like to be able to characterize them, evaluate their capabilities, and understand their failings. After all, humans go through rigorous evaluation and on-the-job testing for many critical roles ranging from the emergency services to the military. Why not the same for AI? For this reason, it has been encouraging to see many countries emphasize the importance of safety testing and evaluation in their various AI policy proposals, ranging from the European… Union’s AI framework, to China’s recently announced generative AI rules, to the United States National Institute of Standards and Technology’s risk management framework for AI systems, to the United Kingdom’s upcoming summit on AI and AI safety. All of these different AI policy proposals and events rely in some form on testing and evaluating AI systems, so the governments of the world should invest in this area. Right now, there are not standards or even best practices for how to test these frontier systems for things like discrimination, misuse, or safety. And because there aren’t best practices, it’s hard for governments to create policies that can create more accountability for the actors developing these systems, and correspondingly, the private sector actors enjoy an information advantage when dealing with governments. In closing, any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw, and any failed approach will start with grand policy ideas that are not supported by effective measurements and evaluations. It is through the development of robust and reliable evaluation systems that governments can keep companies accountable and companies can earn the trust of the world that they want to deploy their AI systems into. If we do not invest in this, then we run the risk of regulatory capture, compromising global security, and handing over the future to a narrow set of private sector actors. If we can rise to this challenge, however, I believe we can reap the benefits of AI as a global community and ensure there is a balance of power between the developers of AI and the citizens of the world. Thank you very much.

Yi Zeng:
My name is Yi Zeng and I would like to take this opportunity to share with distinguished representatives my personal view on AI for good on international peace and security. And I hope it may be useful to promote the discussion and understanding of the necessity of AI global governance. There’s no doubt that AI is a powerful enabling technology to push forward global sustainable development. When investigating on the use of AI for SDGs, we find that most of the efforts are on AI for quality education and AI for healthcare. Well, many of other important topics such as AI for biodiversity, climate actions and AI for peace have been put very few efforts on. Well, I think these are the essential topics for the future of humanities and governments should definitely work together on these important topics. Military AI and AI for peace and security are very related topics, but fundamentally different in important aspects. As a essential pillar for sustainable development goals, we should push forward AI for international peace, reduce not to enhance security and safety risks. When thinking about AI for good from the perspective of peace and security, it will be much better to make efforts on using AI to identify disinformation, misunderstanding among different countries and political bodies and use AI for network defense, not to attack, instead of seeking ways to create disinformation for military and political purpose. AI should be used to connect people in cultures, not to disconnect them. And this is why we create an AI enabled cultural interactions engine to find commonalities and diversities among different UNESCO world heritages. From these world heritages, we find we are not that disconnected from cultural side and these commonalities serve as roots and friendly hands that help us to appreciate and understand or even learn from these diversities in various cultures. The current AI’s including recent generative AI’s are all information processing tools that seems to be intelligent. Well, they are without real understandings and hence are not truly intelligent. This is why they of course cannot be trusted as responsible agents that can help human to make decisions. For example, although the world still has to reach a concrete enough consensus on lethal autonomous weapon systems, at least AI should not be used to directly make death decisions for human. Effective and responsible human control must apply for sufficient human AI interactions. AI should also not be used for automating diplomacy tasks, especially foreign negotiations among different countries. since it may use and extend human limitations and weaknesses, such as cheating distrust, to create bigger or even catastrophic risks for humans. It is very funny, misleading, and irresponsible that dialogue systems powered by generative AI always argue, I think, I suggest, while there are no I or even no me in the AI models. Hence, again, to emphasize, AI should never and ever pretend to be human, take the human position, or mislead humans to have the wrong perception. We should use generative AIs to assist but never trust them to replace human decision making. We must ensure human control for all AI-enabled weapon systems, and the human control has to be sufficient, effective, responsible. For example, cognitive overload during human-AI interactions has to be avoided. We must prevent the proliferation of AI-enabled weapon systems since related technology is very likely to be maliciously used or abused. Both near-term and long-term AIs will have the risk to create human extinctions simply because in the near term, we haven’t found a way to protect ourselves from AI’s utilization on human weaknesses. Well, when AIs are using it, they even don’t know what do we mean by human death and life. And in the long term, we haven’t given superintelligence any practical reasons why they should protect humans, and the solution may need decades to find. Our preliminary research suggests that we may need to change the way we interact with each other, other species, the ecology, and the environment, which may need decision making for the whole human species. And we need to work deeply and thought-provokingly all together. Due to these near-term and long-term challenges, I’m quite sure that we cannot solve AI for peace and security issue today, and the discussion may be a good starting point for member states. Challenging though, here I would suggest that the UN Security Council consider the possibility of having a working group on AI for peace and security, working on near-term and long-term challenges. Since at the expert level, it would be more flexible and scientific to work together and easier to reach a consensus from scientific and technical point of view, and to provide assistance as well as support for the council member countries to make decisions. The Security Council members should set a good model and play an important role for other countries on this important issue. AI is proposed to help humans solve problems, not creating problems. I was asked by a boy, besides its appearance on sci-fi, whether nuclear bomb assisted by AI can be used to blow up the… asteroid attacking the Earth or alter its trajectory to avoid a collision with the Earth to save our lives. I think although the idea may not be sufficiently solid and very risky at this point, it is at least using AI to solve problems for humankind, which is at least much better compared to empower AI to attack each other with nuclear weapons on this planet, which creates problems for human society and may create catastrophic risks to us, to our next generation, or even to human civilization. In my own view, humans should always maintain and be responsible for final decision-making on the use of nuclear weapons, and we have already affirmed that a nuclear war cannot be won and must never be fought. Many countries have announced their own strategy and opinions towards AI for security and governance in general, including but not limited to P5, and we can see there are commonalities and serve as important inputs for international consensus. But this is still not enough. The United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security. As a shared future for all, we need to set up the agenda and the framework all together, leaving no one behind. With all that, I thank you for your attention.

President – United Kingdom:
I shall now make a statement in my capacity as the representative of the United Kingdom.This is the first-ever discussion of artificial intelligence in the Security Council, and ahistoric meeting. Council. Since the early development of Artificial Intelligence by pioneers like Alan Turing and Christopher Strachey, this technology has advanced with ever greater speed. Yet, the biggest AI-induced transformations are still to come. Their scale is impossible for us to comprehend fully, but the gains to humanity will surely be immense. AI will fundamentally alter every aspect of human life. Groundbreaking discoveries in medicine may be just around the corner. The productivity boosts to our economies may be vast, and AI may help us adapt to climate change, beat corruption, revolutionize education, and deliver the Sustainable Development Goals and reduce violent conflict. But we are here today because AI will affect the work of this Council. It could enhance or disrupt global strategic stability. It challenges our fundamental assumptions about defence and deterrence. It poses moral questions about accountability for lethal decisions on the battlefield. There can already be no doubt that AI changes the speed, scale, and spread of disinformation with hugely harmful consequences for democracy and stability. AI could aid the reckless quest for weapons of mass destruction by state and non-state actors alike. But it could also help us stop proliferation. That’s why we urgently need to shape the global governance of transformative technologies. Because AI knows no borders. The UK’s vision is founded on four irreducible principles. Open. AI should support freedom and democracy. Responsible. AI should be consistent with the rule of law and human rights. secure. AI should be safe and predictable by design, safeguarding property rights, privacy and national security. And resilient. AI should be trusted by the public and critical systems must be protected. The UK’s approach builds on existing multilateral initiatives such as the AI for Good Summit in Geneva or the work of UNESCO or the OECD and the G20. Institutions like the Global Partnership for AI, the G7’s Hiroshima process, the Council of Europe and the International Telecommunications Union are all important partners. Pioneering AI strategies will also need to work with us so that we can capture the gains and minimise the risks to humanity. No country will be untouched by AI so we must involve and engage the widest coalition of international actors from all sectors. The UK is home to many of the world’s trailblazing AI developers and foremost AI safety researchers. So this autumn the UK plans to bring together world leaders for the first major global summit on AI safety. Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action. Momentous opportunities on a scale that we can barely imagine lie before us. We must seize these opportunities and grasp the challenges of AI including those for international peace and security, decisively, optimistically and from a position of global unity on essential principles. of men which, taken at the flood, leads to fortune. In that spirit, let us work together to ensure peace and security as we pass across the threshold of an unfamiliar world. I now resume my function as President of the Council. I now give the floor to His Excellency Mr. Shunsuke Takei, State Minister for Foreign Affairs of Japan.

Japan:
Thank you, Mr. President. I commend your initiative to take up AI in the Security Council. This will be a good beginning for future global discussion. I also thank the Secretary-General and the other briefers. AI is changing the world. AI has transformed human life. Its speed potential and risks go beyond our imagination and national borders. We are being tested at this historical juncture. Can we have the mind of self-discipline to control it? Mr. President, my political belief is, instead of worrying, deal with it. I believe that the key to take on the challenge is twofold. Human-centric and trustworthy AI. Human beings can and should control AI to enhance human potential, not the other way around. Let me make two points. First, the human-centric AI, the development of AI, should be consistent with our democratic values. fundamental human rights. AI should not be a tool of rulers, but should be placed under the rule of law. The military use of AI is a case in point. It should be responsible, transparent, and based on international law. Japan will continue to contribute to the international rule-making process of the laws in the CCW. Second, trustworthy AI. AI can be more trustworthy with a wide range of stakeholders included in the process. I believe that this is where is convening power of the UN can make a difference and bring together with them from around the world. Last month, Japan led the discussion at the UN on the misuse of AI by terrorists by hosting a side event with UNOCT and UNIQLO. Japan is also proud to have to launch the G7 Hiroshima AI process this year and to contribute to the global discussion on generative AI. Mr. President, the Security Council and the UN can update its existing toolkits through the use of AI. First, we should consider how to the active use of AI can enhance the efficiency and the transparency of this Council in its decision-making and working method. We welcome the effort of the Secret military art to utilize AI for mediation and peace-building activities. Moreover, we can make the UN work more efficiently and effectively through AI-based early warning systems for the conflict, sanctioned implementation monitoring and countermeasures against disinformation and on-peak operations. Let me conclude by expressing our willingness and activity participate in the discussion on AI at the UN and beyond. Thank you.

President – United Kingdom:
I thank His Excellency Mr. Takei for his statement. I now give the floor to His Excellency Mr. Manuel Goncalves, Deputy Minister for Foreign Affairs and Cooperation of the Republic of Mozambique.

Mozambique:
Mr. President, Mr. Secretary General of the United Nations, Your Excellencies and distinguished members of Council, Mozambique warmly congratulates United Kingdom for its brilliant presidency and for convening this timely and important debate on artificial intelligence, opportunities and risks for international peace and security. We wish to convey our gratitude to His Excellency Antonio Guterres, Secretary General of United Nations for his insightful remarks. We thank the briefers for their thoughtful and pertinent contributions. Mr. President, allow me to begin with a candid disclosure. This statement was made solely by humans and not by generative artificial models like the well-known charting GPT. This disclosure is important. It unveils some of the axioms surrounding the rapid advancement in AI. Especially, we are approaching a point where a digital machine can now execute a task that, for the majority of human existence, were exclusively within the realm of human intelligence. The recent acceleration in both the power and visibility of AI systems, along with the increasing awareness of their capacity and limitation, has sparked concern that technology is advancing at such a rapid pace that it may no longer be safely controllable. Discussion is warranted. While recently, advancement in AI presents immense opportunities to enhance various fields such as speech-making, medicine, welfare, through democratization of innovation, certain models have also exhibited capabilities that surpass the understanding and control of their creators. This poses risks of various kinds, including the potential of catastrophic outcomes. Indeed, we should take heed and cautiously tell of the sources of apprentices. As AI engineers increasingly and convincingly imitate it, and in some cases even surpass, vice behavior is associated with the human being, they have become an ideal tool for spreading misinformation, scamming individuals, engaging in academic cheating, deceitfully initiate conflicts, recruiting terrorists, selling division, and perpetuating numerous other nefarious activities. AI models have evolved in a self-programming machine capable of automating their learning process through continuous loop of self-improvement. This necessitates the establishment of robust governance structure that aim to mitigate the risk of accidents and misuse while still forcing innovation, hamstring the potential of positive outcomes. Mr. President, as I currently stated in concept note for this briefing, artificial intelligence technology posses the potential to profoundly transform our society having multitude of positive effects. AI can contribute to eradicate disease, combat climate change, and currently predict natural disasters, making it a valuable ally for the global south. Similarly, by leveling the extensive database generated by organizations like SEAD, regional organization, and wide UN system, which are there to rigorous standard of quality control, source and member state involvement, we have the potential to enhance the early warning capability, customizing mediation effort, and strengthening strategic communication in peacekeeping among other various examples. AI can be a valuable tool in enhancing this vast data. for the benefit of these endeavors. Mr. President, first, with the opportunity and the threat posed by U.S. artificial intelligence, the Republic of Mozambique recognizes the importance, as a member state of the United Nations, of adopting a balanced approach that encompasses the following aspects. First, in the event that credible evidence emerged indicating that AI poses an existential risk, it is crucial to negotiate an intergovernmental treaty to govern and monitor its use. Second, it is essential to develop relevant regulation and appropriate legislation to safeguard privacy and data security. This entails ensuring that all relevant actors, including the government and companies, provide digital technology or ties with AI in an ethical and responsible manner, while respecting the principle outlined in Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Convention on Civil and Political Rights. Third, the Global Digital Pact should be promoted, facilitating the sharing of technological knowledge between advanced countries and those in the early stages of AI development. This collaborative effort between AI specialists, governments, companies, and civil society aims to mitigate the risks of misuse and foster responsible AI practice. Mr. President, in conclusion, it is important to recognize that the impetus required for AI are not disconnected from the real world. The necessary resources such as data, computer power, electricity, skills, and technological infrastructure are not evenly distributed across the globe. By shrinking a balance between the advantage of AI and the social safeguards in place, we are sure that AI does not become a source of conflict that reinforce inequalities and asymmetries potentially posing threat to global peace and security. This approach aims to harm the potential of human-wide activity mitigating any negative consequence that may arise. Thank you, Mr. President.

President – United Kingdom:
I thank His Excellency, Mr. Gontalves, for his statement. I now give the floor to His Excellency, Assistant Minister for Foreign Affairs and International Cooperation for Advanced Science and Technology of the United Arab Emirates.

United Arab Emirates:
I will begin by thanking Secretary General Guterres for his attentive remarks today. Thank you to Foreign Secretary Cleverly and the UK presidency for bringing such a salient topic to the Council for discussion. I would also like to thank our other briefers for their illuminating statements. Mr. President, how we negotiate the threats and opportunities for artificial intelligence is fast becoming one of the defining questions of our time. Five years ago, the UAE and Switzerland brought forward a proposal to Secretary General Guterres to establish a deliberative group to consider this very question. Under the Secretary General’s leadership, a high-level panel on digital cooperation was created and it was clear. for their deliberations that technologies such as AI could no longer go unchecked. Computation processing power has followed Moore’s Law, doubling every 18 months since the down of the computer age. Not anymore. AI development is now outpacing Moore’s Law and moving at breakneck speed and governments are unable to keep up. This is the wake-up call we need. It is time to be optimistic realists when it comes to AI, not just for assessing the threats this technology poses to international peace and security, but to harness the opportunities it offers. To this end, I will make four brief points today. First, we must establish rules of the road. There is a brief window of opportunity available now where key stakeholders are willing to unite and consider the guardrails for this technology. Member states should pick up the mantle from the Secretary General and establish commonly agreed-upon rules to govern AI before it’s too late. This should include mechanisms to prevent AI tools from promoting hatred, misinformation and disinformation that can fuel extremism and exacerbate conflict. As with other cyber technologies, the use of AI should be firmly guided by international law since international law continues to apply in cyberspace. But we must also recognize that strategies may need to be adopted so that we can effectively apply the conventional principles of international law in the rapidly evolving context of AI development. Second, artificial intelligence should become a tool to promote peace-building and the de-escalation of conflicts, not a threat multiplier. AI-driven tools have the potential to more effectively analyze vast amounts of data, trends and patterns. That translates to an increased ability to detect terrorist activity in real time and predicting how the that the adverse effects of climate change may impact peace and security. It also paves the way for limiting the misattribution of attacks, as well as ensuring response in conflict settings as proportionate. At the same time, we must be aware of the potential misapplication of this technology in targeting critical infrastructure and fabricating false narratives to fuel tensions and incite violence. Third, the biases of the real world should not be replicated by AI. Decades of progress on the fight against discrimination, especially gender discrimination towards women and girls, as well as against persons with disabilities, will be undermined if we do not ensure that an AI that is inclusive. The high-level panel on digital cooperation was clear in stating that an inclusive digital economy and society was a priority action for immediate attention. Any opportunity that AI offers can only be a true opportunity if it is based on the principle of equality, both in its design and access. Fourth, we must avoid over-regulating AI in a way that hinders innovation. The creativity, research and development activities occurring in the context of AI taking place in emerging nations are critical for the sustainable growth and development of those nations. To maintain this, emerging countries need flexibility and agile regulations. We should nurture a sector that encourages responsible behavior using smart, effective and efficient regulations and guidelines, and avoid too rigid rules that can hamper the evolution of this technology. Mr. President, Throughout history, major shifts and leaps forward have often followed moments of major crisis. The creation of the United Nations and the Security Council following the Second World War is a major step forward. speaks to this very fact. When it comes to AI, let’s not wait for the moment of crisis. It’s high time to get ahead of the curve and shape an AI arena that is geared towards preserving international peace and security. Thank you, Mr. President.

President – United Kingdom:
I thank His Excellency Mr. Sharaf for his statement. I now give the floor to the representative of China.

China:
Thank you, Mr. President. Mr. President, China welcomes you to preside over today’s Security Council meeting and thanks Secretary General Guterres for his briefing. Many of his insights deserve our study. I would also like to thank Professor Zeng Yi and Professor Jack Clark for their briefings. Their insights can help us better understand and handle issues related to AI. In recent years, the world has witnessed a rapid development and wide application of AI with complex effects constantly emerging. On the one hand, the empowering role of AI in areas such as scientific research, health care, autonomous driving, smart decision-making is becoming increasingly prominent, generating huge technological dividends. On the other hand, the scope of AI application has been constantly expanding, causing increasing concerns in areas such as data privacy, spreading false information, exacerbating social inequality, and disrupting employment structures. In particular, the misuse or abuse of AI by terrorist or extremist forces will pose a significant threat to the international peace and security. At present, as a cutting-edge technology, AI is still in its early stage of development. As a double-edged sword, whether it is good or bad, good or evil, depends on how mankind utilizes it, regulates it, and how… We balance scientific development with security. The international community should uphold the true spirit of multilateralism, engaged in extensive dialogue constantly, seek consensus, and explore the development of guiding principles for AI governance. We support the central coordinating role of the U.N. in this regard, and support the Secretary General and Secretary General’s efforts in holding discussions amongst all parties, as well as the full participation of all countries, especially developing countries. In participating in this cause and making their own contributions, I would like to make some preliminary observations. First, we should adhere to the principle of putting ethics first. The potential impact of AI may exceed human cognitive boundaries. To ensure that this technology always benefits humanity, it is necessary to take people-oriented and AI for good as the basic principles to regulate the development of AI, and to prevent this technology from turning into a runaway wild horse. Based on these two guidelines, efforts should be made to gradually establish and improve ethical norms, laws, and regulations, and policy systems for AI, while allowing countries to establish AI governance systems that are in line with their own national conditions based on their own development stages and social and cultural characteristics. Second, adhere to safety and controllability. There are many uncertainties in the development and application of AI-related technologies, and safety is the bottom line that must be upheld. The international community needs to enhance risk awareness, establish effective risk warning and response mechanisms, ensure that risks beyond human control do not occur, and ensure that autonomous machine killing does not occur. We need to strengthen the detection and evaluation of the entire life cycle of AI, ensuring that mankind AI has the ability to press the pause button at critical moments. Leading technology enterprises should clarify the responsible parties, establish a sound accountability mechanism, and avoid developing or using risky technologies that may have serious negative consequences. Thirdly, we must adhere to fairness and inclusiveness. The impact of AI on science and technology is worldwide and revolutionary. Equal access and utilization of AI technology products and services by developing countries are crucial to bridging the technological, digital, and development divide between the North and the South. The international community should work together to ensure that developing countries equally enjoy the development dividends brought by AI technology and continuously enhance their representation and voice in this field. Fourthly, certain developed countries, in order to seek technological hegemony, make efforts to build their exclusive small clubs and maliciously obstruct the technological development of other countries and artificially create technological barriers. China firmly opposes these behaviors. Fourthly, adhere to openness and inclusiveness. The development of science and technology needs to achieve a relative balance between technological progress and safe applications. The best path is to maintain open cooperation, encourage interdisciplinary, inter-industrial, inter-regional, and cross-border exchanges and dialogues, and oppose various forms of exclusive clubs and decoupling and disconnection. We need to promote coordination and interaction among international organizations, government departments, research and educational institutions, enterprises, and the public in the field of AI development and governance under the UN framework and jointly create an open, inclusive, just, and non-discriminatory environment for scientific and technological development. Fifth, I’d adhere to peaceful utilization. The fundamental purpose of developing AI technology is to enhance the common well-being of humanity. Therefore, it is necessary to focus on exploring the potential of AI in promoting sustainable development, promoting cross-disciplinary integration and innovation, and better empowering the global development cause. The Security Council could focus on the application and impact of AI in conflict situations to enrich the toolkit of the UN for peace. AI in the military field may lead to major changes in the weight of warfare and the format of war. All countries should uphold a responsible defense policy, oppose the use of AI to seek military hegemony or to undermine the sovereignty and territorial integrity of other countries, and avoid the abuse, unintentional misuse, or even intentional misuse of AI weapon systems. Mr. President, today’s discussion on AI highlights the importance, necessity, and urgency of building a community of shared future for mankind. China adheres to the concept of community of shared future for mankind and has actively explored the scientific path of AI development and governance in all fields. In 2017, the Chinese government issued the New Generation Artificial Intelligence Development Plan, which clearly laid out basic principles such as technology, leadership, system layout, market leadership, open source, and openness. In recent years, China has continuously improved relevant laws and regulations, ethical norms, intellectual property standards, safety monitoring, and evaluation measures to ensure the healthy and orderly development of human beings. development of AI. China has always participated in global cooperation and governance in AI with a highly responsible attitude. As early as 2021, China hosted the Impact of Emerging Technologies on International Peace and Security area conference during its presidency of the Security Council. For the first time, bringing the Council’s attention to emerging technology such as AI, China has successively submitted two position papers on military application of AI and its ethical governance on UN platforms, offering systematic proposals from the perspective of strategic security military policy, legal ethics, technological security, rulemaking and international cooperation. Last February, the Chinese government released the Global Security Initiative concept paper, which clearly stated that China is willing to strengthen communication and exchange with the international community on AI security governance, promote the establishment of an international mechanism for universal participation, and form a governance framework and standard with broad consensus. We stand ready to work with the international community to actively implement the Global Development Initiative, Global Security Initiative and Global Civilization Initiative proposed by President Xi Jinping. In the field of AI, we will continue to prioritize development, maintain common security, promote cross-culture exchanges and cooperation, and work with other countries to share the benefit of AI while jointly prevent and respond to risks and challenges. I thank you, Mr. President.

President – United Kingdom:
I thank the representative of China for their statement. And I now give the floor to the representative of the United States.

United States:
Thank you, Mr. President. Thank you to the U.K. for convening this discussion. And thank you to the Secretary General, Mr. Jack Clark, and Professor Yang. Thank you for your valuable insights. Mr. President, artificial intelligence offers incredible promise to address global challenges, such as those related to food security, education, and medicine. Automated systems are already helping to grow food more efficiently, predict storm paths, and identify diseases in patients, and thus, used appropriately, AI can accelerate progress toward achieving the Sustainable Development Goals. AI, however, also has the potential to compound threats and intensify conflicts, including by spreading mis- and disinformation, amplifying bias and inequality, enhancing malicious cyber operations, and exacerbating human rights abuses. We therefore welcome this discussion to understand how the Council can find the right balance between maximizing AI’s benefits while mitigating its risks. This Council already has experience addressing dual-use capabilities and integrating transformative technologies into our efforts to maintain international peace and security. As those experiences have taught us, success comes from working with a range of actors, including member states, technology companies, and civil society activists, through the Security Council and other UN bodies, and in both formal and informal settings. The United States is committed to doing just that and has already begun such efforts at home. On May 4th, President Biden met with leading AI companies to underscore the fundamental responsibility to ensure AI systems are safe and trustworthy. These efforts build on the work of the U.S. National Institute of Standards and Technology, which recently released an AI risk management framework to provide organizations with a voluntary set of guidelines to manage risks from AI systems. Through the White House’s October 2022 blueprint for an AI Bill of Rights, we are also identifying principles to guide the design, use, and deployment of AI. automated systems so rights, opportunities, and access to critical resources or services are enjoyed equally and are fully protected. We are now working with a broad group of stakeholders to identify and address AI-related human rights risks that threaten to undermine peace and security. No member state should use AI to censor, constrain, repress, or disempower people. Military use of AI can and should also be ethical, responsible, and enhance international security. Earlier this year, the United States released a proposed Political Declaration on Responsible Military Use of AI and Autonomy, which elaborates principles on how to develop and use AI in the military domain in compliance with applicable international law. The proposed declaration highlights that military use of AI capabilities must be accountable to a human chain of command and that states should take steps to minimize unintended bias and accidents. We encourage all member states to endorse this proposed declaration. Here at the UN, we welcome efforts to develop and apply AI tools that improve our joint efforts to deliver humanitarian assistance, provide early warning for issues as diverse as climate change or conflict, and further other shared goals. The International Telecommunication Union’s recent AI for Good Global Summit represents one step in that direction. Within the Security Council, we welcome continued discussions on how technological advancements, including when and how to take action to address governments’ or non-states’ actors’ misuse of AI technologies to undermine international peace and security. We must also work together to ensure AI and other emerging technologies are not used primarily primarily as weapons or tools of oppression, but rather as tools to enhance human dignity and help us achieve our highest aspirations, including for a more secure and peaceful world. The United States looks forward to working with all relevant parties to ensure the responsible development and use of trustworthy AI systems serves the global good. I thank you.

President – United Kingdom:
I thank the representative of the United States. I now give the floor to the representative of Brazil.

Brazil:
Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today and for joining us in the briefing. I also thank Mr. Jack Clark and Mr. Yi Zheng for their statements. The rapid development of AI holds immense potential to bolster our global security architecture, augment decision-making processes, and enhance humanitarian efforts. We must also address the multifaceted challenges it poses, including the potential for autonomous weapons, cyber threats, and the exacerbation of existing inequalities. As we embark on this crucial discussion, let us seek a comprehensive understanding of the risks and opportunities associated with AI and work towards harnessing its potential for the greater benefit of humanity while ensuring the preservation of peace, stability, and human rights. The paragraph I just read was entirely written in CHAT-GPT. Although with conceptual imprecisions, it shows how sophisticated these tools have become. This technology is developing so fast that even our best researchers are still unable to assess the full scale of the challenges that await us. us and the benefits these new technologies can provide. Any discussion today must be couched in the humility that we do not fully know what it is that we do not know about AI. What we know for sure is that artificial intelligence is not human intelligence. Most AI relies on large amounts of data and, through complex algorithms, manage to establish patterns and relationships that allow them to generate contextually appropriate results. The outcomes, therefore, are crucially dependent on the inputs. Human oversight is essential to avoid bias and errors. Otherwise, we are running the risk that the aphorism, trash in, trash out, will become a self-fulfilling prophecy. Unlike other innovations with potential implications for security, AI has been mostly developed as a civilian application. Hence, it would be premature to see AI primarily through the lens of international peace and security, as its peaceful uses are likely to produce the most significant effects on our societies. Nevertheless, we can predict with certainty that its applications will be extended to the military field with relevant impact on peace and security. While the Council should remain vigilant and ready to respond to any incidents involving the use of AI, we must also be cautious not to overly securitize this topic by concentrating discussions in this chamber. Due to the intrinsically multidisciplinary nature of AI, which deals with every aspect of life, international discussions must remain open and inclusive. Only a wide and diverse range of views will allow us to scratch the surface and start to make sense of the different facets of AI. This briefing is a good start for bringing in different views on the development and use of AI. Nevertheless, in light of the wide-ranging implications and impacts of AI, the General Assembly, with its universal composition, is the forum best suited for a structured, long-term discussion on artificial intelligence. AI is crucial among the different topics under the mandate of the ongoing Open-Ended Working Group on ICTs, which will hold its fifth substantive session next week. This group, which is open to all UN member states, has been able to make progress on gradually developing global, common understandings on ICTs issues related to international peace and security, despite challenging geopolitical circumstances. Due to their particular nature, this is what we should aim for when discussing challenges deriving from cyber technologies. Mr. President, Madam President, military applications of AI, and especially in the use of force, must strictly abide by international humanitarian law, as enshrined in the Geneva Conventions and other pertinent international commitments. Brazil has been consistently guided by the concept of meaningful human control. As approved in 2019 by the High Contracting Parties of the Convention on Certain Conventional Weapons, Guiding Principle B indicates that human responsibility for decisions on the use of weapon systems must be retained, since accountability cannot be transferred to machines. The centrality of the human element in any autonomous system is essential for the establishment of ethics standards and for full compliance with international humanitarian law. There is no replacement for human judgment and accountability. Military applications of AI must be based on transparency and accountability throughout their life cycle, from development to deployment and use. Moreover, weapon systems with autonomous functions should eliminate bias. bias in their systems operations. We must move ahead swiftly with the progressive development of regulations and norms governing the use of autonomous weapons systems via robust norms to prevent biases and abuses and to guarantee compliance with international law, particularly international humanitarian law and human rights law. Compliance with international law is mandatory to state uses of AI technologies as well as to any use this Council may wish to make of them in its peacekeeping missions or in its broader mandate for the preservation of international peace and security. Beyond the challenges posed by conventional weapons with autonomous functions, we should not shy away from issuing a very stern caution as to the inherent risks posed by the interaction of AI and weapons of mass destruction. We took note with alarm of the news that AI-assisted computer systems were capable of developing, in a matter of hours, new poisonous chemical compounds and of designing new pathogens and molecules. We also must not allow the possibility that nuclear weapons might be linked to AI at the risk of our common future. Madam President, AI has tremendous potential both to remake and to break our societies in the coming years. Navigating between the two will require a broad and concerted international effort, which will include but is in no way limited to this Council. The UN remains the only organisation capable of promoting the global coordination needed to oversee and shape the development of AI so that it works to the betterment of humanity and according to the shared purposes and principles that have brought us here. Thank you.

President – United Kingdom:
I thank the representative of Brazil for their statement. And I now give the floor to the representative of Switzerland.

Switzerland:
Thank you, Madam President. We are grateful to the Secretary General, Antonio Guterres, for participating in this important debate. My thanks also go to Mr. Jack Clarke and Professor Yi Zeng for their valuable and impressive contributions. …time before we see thousands of robots like me out… they’re making a difference. Those words are the words of the robot Ameka, speaking to a journalist at the AI for Good conference that the Secretary General has just mentioned, on which was co-organized by the International Telecommunication Union in Switzerland. It took place in Geneva two weeks ago. Artificial intelligence can be a challenge because of its speed and apparent omniscience, but it can and must serve peace and security as well. As we turn our attention to the new agenda for peace, it is in our hands to ensure that AI makes a difference for the benefit of and not to the detriment of humanity. In this context, let’s seize the opportunity to lay the groundwork towards AI for good by working closely with cutting-edge science. In this regard, the Swiss Federal Institute of Technology is developing a prototype of an AI-assisted analysis tool for the United Nations Operations and Crisis Center. This tool could explore the potential of AI for peacekeeping, in particular for the protection of civilians and peacekeepers. In addition, Switzerland recently launched the Swiss Call for Trust and Transparency Initiative, where academia, the private sector and diplomacy jointly seek practical and rapid solutions to AI-related risks. The Council must also work to counter the risks to peace posed by artificial intelligence. We are very grateful, therefore, to the United Kingdom for having organized this important meeting. Let us look, by way of example, at cyber operations and disinformation, false narratives undermine public trust in governments and peace missions. In this respect, AI is a double-edged sword. Whilst it can accentuate disinformation, it can also be used to detect false narratives and hate speech. So how can we harvest the benefits of AI for peace and security while minimizing the risks? I’d like to make three suggestions in this regard. First, we need a common framework shared by all the players involved in the development and application of this technology. This means governments, states, companies and civil society and research organizations. And I think Mr. Clark was very clearly saying that. AI does not exist in a normative vacuum. Existing international law, including the United Nations Charter, international humanitarian law and human rights, applies. Switzerland is involved in all UN processes aimed at reaffirming and clarifying the international legal framework for AI and, in the case of lethal autonomous weapon systems, at developing prohibitions and restrictions. Second, AI must be human-centered. As Professor Zeng was saying this so rightly, AI should never pretend to be human. We call for its development, deployment and use to always be guided by ethical and inclusive considerations. Clear responsibility and accountability must be maintained, both for states and for companies or individuals. Finally, the relatively early stages of AI development offers us an opportunity to ensure equality and inclusion and to counter discriminatory stereotypes. AI is only as good and reliable as the data we provide it with. If this data reflects prejudices and stereotypes, for example of gender, or is simply not representative of its operational environment, AI will give us poor advice for maintaining peace and security. It is the responsibility of developers and users, both governmental and non-governmental, to ensure that AI does not reproduce the harmful societal biases we are striving to overcome. President, the Security Council has a responsibility. to proactively monitor developments around AI and the threat that it may pose to the maintenance of international peace and security. It should be guided by the results of the General Assembly on the related legal framework. The Council must also use its powers to ensure that AI serves peace, such as by anticipating risks and opportunities or by encouraging the Secretariat and peace missions to use this technology in innovative and responsible ways. My delegation used artificial intelligence in our first debate on trust under our presidency, as well as in the context of an exhibition on digital dilemmas with the ICRC. We were able to recognize the impressive potential of this technology at the service of peace. We therefore look forward to making artificial intelligence for good an integral part of the new agenda for peace. I thank you.

President – United Kingdom:
I thank the representative of Switzerland for their statement. And I now give the floor to the representative of Ghana.

Ghana:
Madam President, I thank the United Kingdom for convening this high-level debate on artificial intelligence and Secretary General António Guterres for his important statement to the Council this morning. We are equally grateful for the expert views provided by Mr. Jack Clark and Mr. Zeng Yi to this meeting. The emergent dominance of artificial intelligence as a pervasive fabric of our societies is one that could positively impact on several vistas, including beneficial applications for medicine, agriculture, environmental management, research and development, the realm of arts and culture, and for trade. As we see in the horizon, opportunities to enhance outcomes in different areas of life by embracing the further application of artificial intelligence, we can also already see dangers that must motivate all of us to work quickly and collaboratively to avert risk that could be detrimental to our common humanity. Artificial intelligence, especially in peace and security, must be guided by a common determination not to replicate the risk that powerful technologies have created for the world by the ability to unleash disaster of global proportions. We must constrain the excesses of individual national ambitions for combative dominance and commit to work towards. the development of principles and frameworks that will govern AI technologies for peaceful purposes. For Ghana, we see opportunities in the development and application of AI technologies for identifying early warning signs of conflicts and defining responses that have a higher rate of success and that may also be more cost effective. Such technologies can facilitate the coordination of humanitarian assistance and improve risk assessment. The application for law enforcement is already well appreciated in many jurisdictions and where law enforcement has been effective, risk of conflicts are usually low. Moreover, the application of AI technologies for peace mediation and negotiation efforts has revealed early remarkable outcomes that must be pursued for the cause of peace. The deployment of AI technologies, for instance, in determining the Libyan population’s reactions to policies has facilitated the peace as reflected in the improvements in that country’s 2022 global peace index. We also see in similar context and within ISR function of peacekeeping missions an opportunity to enhance the safety and security of peacekeepers and the protection of civilian populations through the responsible deployment of AI technologies. Madam President, despite these encouraging developments, we see risk with AI technologies from the perspective of both state actors and non-state actors. The integration of AI technologies into autonomous weapon systems is a foremost source of concern. While states seeking to develop such weapon systems may be genuinely interested in reducing the human cost of their involvement in conflicts, it belies their commitment to a pacific world. The history of our experience with mankind’s mastery in atomic manipulation shows that should such desires persist, it only generates in equal measure efforts by other states to cancel the advantage. that such a deterrent seeks to create. The additional danger of non-human control over such weapon systems is also a risk that the world cannot afford or ignore. The increasingly digitalized world and the creation of virtual reality means that the capacity to tell the difference between what is real and what is made up is diminishing by the day. This can create unchecked platforms that non-state actors especially could instrumentalize to destabilize societies or cause friction between or among states using AI technologies. While AI technologies can be used to counter misinformation, disinformation, and hate speech, they also have the capacity to be used by negative forces in pursuing the campaigns of their nefarious and malicious agenda. Madam President, the potential of AI technologies for good should however lead us to work towards its peaceful uses. As indicated earlier, there is a need for the development of some principles and frameworks, mindful of the fact that we do not yet have a full sense of the evolution of AI technologies. Such a process should however not be the preserve of the Security Council, but of the wider membership of the United Nations that have an equal stake in how we guide the further evolution of AI technologies. Without global consensus, it would be difficult to limit the flourishes of AI technologies. Since presently a significant part of the developments on AI technologies occur within the private sector and academia, it is important to also broaden the dialogue beyond governments to ensure that in filling industry gaps, there can be no diversion or misuse of AI technologies, including unarmed aerial vehicles. There are negative consequences for peace and security, including on the African continent, where terrorist groups may be experimenting with such technologies should be anticipated and abated. In recognition that AI technologies can create disruptions in military balance, it is important that states should deliberately pursue confidence-building measures that rest on a shared interest in preventing conflicts that are not deliberately intended. This can be done through setting standards for voluntary information sharing and notifications concerning AI-enabled systems, strategies, policies, and programs implemented by states. We hope that in considering the Secretary General’s upcoming policy brief on the new agenda for peace, Member States will be able to advance durable solutions to address new threats to international peace and security. We indicate our support for the Secretary General’s efforts in this regard. In the process, we must also deepen work on existing initiatives and ongoing processes such as the Secretary General’s roadmap on digital cooperation, the ongoing negotiations on a global convention on countering the use of ICT for criminal purposes, and the open-ended working group on security of and in the use of information and communication technologies. Equally, we encourage the Security Council’s further engagement with the strategy for the digital transformation of United Nations peacekeeping under the A4P-Plus initiative. During the upcoming peacekeeping ministerial meeting in Accra, Ghana would welcome conversations on how AI can be deployed to enhance peacekeeping operations under the relevant themes. In Africa, our continental digital transformation strategy spanning the period 2020 to 2030 would also continue to be an important ancillary to the African continental free trade area, which is an anchor for addressing many of the underlying security challenges. challenges of the continent and silencing the guns in Africa. Finally, Madam President, I affirm Ghana’s commitment in advancing constructive discussions on AI technologies for the peace and security of our world. We highlight the need for a whole-of-society approach that leverages the potentials of the private sector, especially the tech giants, and which retains the human rights of citizens at the core of all ethical principles. I thank you.

President – United Kingdom:
I thank the representative of Ghana for their statement, and I now give the floor to the representative of France.

France:
Madam President, I thank the Secretary-General, as well as Mr. Clark and Yijing for their briefings. Artificial intelligence is the revolution of the 21st century as we see the emergence of a harsher world, punctuated by competition, ravaged by hybrid wars. It is essential that artificial intelligence be made a tool for the benefit of peace. France firmly believes that artificial intelligence can play a decisive role in the maintenance of peace. These technologies can contribute to the safety of blue helmets, to the performance of operations, inter alia to improve protection of civilians. The technology can also help to facilitate conflict resolution by facilitating civil society mobilization, but also possibly in the future by facilitating the delivery of humanitarian assistance. Artificial intelligence can also help to advance sustainable development goals achievement. This is the aim of our contribution to the Global Digital Compact. On the fight against climate change, preventing natural risks, there can be more accurate meteorological predictions advanced, and AI can help to support implementation of the Paris agreements to reduce greenhouse gas emissions. Turning to the development of AI, it also includes risks. There’s a need for us to consider these risks head on. AI is liable to heighten the cyber threat. These technologies help and facilitate. malicious actors in their waging of increasingly sophisticated cyber attacks. Systems with artificial intelligence themselves can be more vulnerable to cyber attacks. Securing these systems is thereby a key challenge at the military level. There’s a vital need for AI to be modified to reflect the nature of conflict, and this is possible. It’s possible that AI can be modified based on nature of conflict. We need to develop an applicable framework for autonomous lethal weapons. This framework can help to ensure future conflicts are conducted in a way that respects international humanitarian law. Generative artificial intelligence can particularly exacerbate the war of information through the massive spread at large scale and low cost of artificial content and messages tailored to certain recipients. One can consider the massive disinformation campaigns currently underway in the CAR and Mali, and those that have compounded the war waged by Russia against Ukraine. This interference in elections also destabilizes countries and undermines the pillars of democracies. France is committed to advancing an ethical and responsible approach for artificial intelligence. This is the aim of the Global Partnership, which we launched in 2020 with the European Union and the Council of Europe. France has been working on rules to regulate and support its developments. Given these developments, the United Nations offers an invaluable framework. We applaud the work underway of the New Agenda for Peace and the forthcoming organization of the Summit of the Future, which will help us to collectively consider these challenges and to devise norms for the future. France will fully contribute to ensuring that artificial intelligence is advanced for the prevention of conflict, for peacekeeping, and for peacebuilding. Thank you.

President – United Kingdom:
I now give the floor to the representative of Ecuador.

Ecuador:
Thank you, Madam President. The primary obligation of intelligence is to distrust itself, said the Polish writer Stanislaw Lem, that this element is something that cannot be expected of artificial intelligence. I therefore underline the relevance of the topic which brings us here today upon the initiative of the United Kingdom, and I’m grateful for the briefings from the Secretary General, Antonio Guterres, and the other briefers. The question cannot be whether or not we support the development of artificial intelligence. Against the backdrop of rapid technological change, artificial intelligence has already developed at breakneck speed and will continue to do so. Artificial intelligence, like any other technological tool, can contribute to peacekeeping and peacebuilding efforts or it can undermine these objectives. Artificial intelligence can contribute to preventing conflicts and moderating dialogues in complex situations, as was the case with COVID-19. Emerging technologies were essential to overcome the obstacles thrown up by the pandemic. Artificial intelligence can assist in protecting humanitarian personnel, allowing greater access and action by them, including through predictive analysis. The preparation, early warning, and timely reaction can benefit from this tool as well. Technological solutions can help United Nations peacekeeping operations to fulfill their mandates more effectively. effectively, inter alia, by facilitating the adaptation to changing dynamics of conflict. On the 30th of March 2020, Ecuador co-sponsored Security Council Resolution 2518, which supported more integrated use of new technologies with a view to improving the situational awareness of personnel and their capacities. This was reiterated in the presidential statement of the 24th of May 2011. All of this must include artificial intelligence, because of its ability to improve the security of camps and convoys by monitoring and analyzing conflicts. As an organization, we cannot become more effective if we are not equipped with the tools which allow us to overcome new challenges to security. Our responsibility is to promote and make the most of technological development as a facilitator of peace. This must be done while strictly upholding international public law, human rights law and international humanitarian law. We cannot ignore the threats stemming from the misuse or abuse of artificial intelligence for malicious or terrorist ends. The artificial intelligence system also brings with it other risks, such as discrimination or massive surveillance. Ecuador also rejects the militarization or weaponization of arms placement in artificial intelligence. We reiterate the risks posed by lethal autonomous weapons and the need for all armaments systems to be used upon the basis of a human decision, control and judgment. by the only viable framework, which is that of responsibility and accountability. The ethical principles of responsible behavior are indispensable, but are not sufficient. The response to benefit from artificial intelligence, without exacerbating the threats it poses, is to establish a legally binding international framework. And Ecuador will continue to advocate for this. In cases where it is not possible to ensure sufficient human control over lethal autonomous weapons or to ensure the principles of distinction, proportionality and precaution, they must be prohibited. I agree with the concern expressed by the Secretary General on the alarming potential link between artificial intelligence and nuclear weapons. We welcome the recommendations made today in relation to the new agenda for peace. And also agree with the need to reduce the digital divide and foster partnerships and cooperation which allow us to seize the opportunities offered by emerging technologies for peaceful ends. As well as ethical considerations, robotization of conflict is a great challenge for disarmament efforts and an existential challenge that this Council ignores at its peril. Madam President, AI researchers today talk about problems of alignment. So how can we, in other words, ensure that their discoveries will serve us rather than destroying us? This was the question asked by Oppenheimer, Heinstein and Newman and other scientists, for example, and it is the challenge that we still have before us today. Thank you.

President – United Kingdom:
I thank the representative of Ecuador for their statement, and I now give the floor to the representative of Malta.

Malta:
Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I also thank the Secretary General for enriching our discussion with his thoughts and insights. Artificial intelligence is reshaping the way we work, interact and live. Peaceful applications of AI can help achieve the Sustainable Development Goals or support UN peacekeeping efforts. Such efforts include the use of drones for humanitarian assistance deliveries, monitoring and surveillance. On the other hand, the proliferation of AI technologies raises significant risks that demand our attention. The potential misuse or unintended consequences of AI, if not carefully managed, could pose threats to international peace and security. Malicious actors could exploit AI for cyber attacks, disinformation and misinformation campaigns or autonomous weapons systems, leading to increased vulnerabilities and geopolitical tensions. There can also be negative human rights consequences associated with AI, including through discriminatory algorithmic decision-making. We must address these risks collectively, though through international cooperation frameworks and norms. Madam President, Malta believes that the cooperation of multiple stakeholders across the various levels and sectors of international, regional and national communities is essential for implementing ethical frameworks regarding AI around the world. In this regard, the international community needs to develop universal instruments that focus not only on the articulation of values and principles, but also on their practical realisation with a strong emphasis on the rule of law, human rights, gender equality and protection of the environment. As governmental and non-governmental actors race to be first in the development of AI, governance and control practices must be developed at a comparable pace for safeguarding international peace and security. Therefore, the Security Council must push for strong AI governance and ensure its inclusive, safe and responsible deployment through the sharing of experiences and governmental frameworks. Since 2019, Malta is developing an ethical AI framework in alignment with the EU Ethics Guideline for Trustworthy AI. This framework is based on four guiding principles. 1. Build on a human-centric approach. 2. Respect for all applicable laws and regulations, human rights and democratic values. 3. Maximise the benefits of AI systems while preventing and minimising their risks. 4. Align with emerging international standards and norms around AI ethics. Malta is ready to work hand-in-hand on AI with all stakeholders involved to develop a global agreement on common standards for the responsible use of AI. Moreover, within the EU we are working on an Artificial Intelligence Act which seeks to ensure that citizens can trust what AI has to offer. It takes a human-centric and innovation-friendly approach to AI based on fundamental rights and the rule of law. In this line, Malta strongly supports the work of the UN Open-Ended Working Group on ICT and International Security and underlines that confidence-building measures are essential to increase the level of dialogue and trust for more transparency in the use of AI to ensure better accountability. Malta expresses its concerns regarding the use of AI systems in military operations since machines cannot make human-like decisions involving the legal principles of distinction, proportionality and precaution. We believe that lethal autonomous weapons systems currently exploiting AI should be banned and only those weapons systems that are in full respect of international humanitarian law and human rights law should be regulated. Likewise, the integration of AI, international security, counter-terrorism and law enforcement systems raises fundamental human rights, transparency and privacy concerns which must be addressed. To conclude, Madam President, Malta believes that the Security Council has an important anticipatory role to play on this issue. We have the responsibility to monitor developments closely and address any threats to international peace and security that may arise in a timely manner. It is only by promoting responsible governance, international cooperation and ethical considerations that we can harness the transformative power of AI whilst mitigating potential risks. Thank you.

President – United Kingdom:
I thank the representative of Malta for their statement and I now give the floor to the representative of Gabon.

Gabon:
Thank you, Madam President. I thank the United Kingdom for organizing this debate on artificial intelligence at a time when technological innovation is constantly proliferating and revolutionizing our societies whilst also having an impact on international security. I also thank the Secretary General of the United Nations, Antonio Guterres. Professor Yi Xing and Mr. Jack Clark for their briefings. Artificial intelligence is as fascinating as it is bewildering, and in just a few years has already revolutionized our ways of life and our method of production, our ways of thinking, and has shifted the horizons of our everyday life. As a result of their precision and their ability to resolve complex problems, AI systems set themselves apart from more classic IT mechanisms and offer numerous opportunities for peacekeeping and international security. Madam President, peacekeeping and international security have for many years been solidly anchored in a robust technological ecosystem which allows us to bolster the management and crisis prevention capacities but also fosters heightened understanding of situations on the ground, whilst improving the protection of civilians, particularly in complex environments. Artificial intelligence is making its own specific contribution by increasing manifold the analytical capacities of early warning systems. It is now quicker and easier to detect emerging threats by analyzing vast quantities of data from various sources very quickly. Thanks to AI, the operational mechanisms of the United Nations peace missions are performing ever better. Indeed, the use of drones, night vision systems and geolocalization enable us in Terralia to detect activities of armed and terrorist groups, to secure the delivery of humanitarian aid in difficult-to-reach areas and to improve the ceasefire monitoring missions or detections of mines on the ground. AI also buttresses the implementation of very complex peacekeeping mandates, particularly civilian protection. AI also plays a major role in peacebuilding processes. It contributes to the reconstruction efforts of states in post-conflict situations and fosters the implementation of quick-impact projects whilst offering opportunities of employment to young people and reintegration opportunities for former combatants. However, to better exploit the advantages of AI for peace and security, particularly when deploying peacekeepers, keeping operations. It is essential for local communities to take on ownership of and absorb these new technologies so as to perpetuate the beneficial effects after with the withdrawal of international forces. If they are not anchored at the local level, the benefits of AI are doomed to disappear and crises to resurface. States, national and international organizations and local populations must be educated about the processes of manufacturing and dissemination so as to bolster trust in the legitimacy of AI systems used. AI, Madam President, certainly contributes to bolstering international peace and security, but also poses numerous risks which we must address right away. Terrorist and criminal groups can have recourse to the great potential of AI to pursue their illicit activities. Over recent years, hacker networks have stepped up their cyber attacks, disinformation actions, and theft of sensitive data. The threats resulting from the malicious use of AI must stop the international community in its tracks and constitute the point of departure for greater control over the development of new technologies. This means bolstering transparency, international governance with the United Nations as a guarantor, but above all, accountability. The UN must absolutely bolster international cooperation to develop a regulatory framework with appropriate control mechanisms and robust security systems. Information sharing and establishing ethical norms will also allow us to prevent abuse and to preserve international peace and security. Madam President, Gabon remains attached to promoting the peaceful and responsible use of new technologies, including artificial intelligence. Against this backdrop, it is important to foster the sharing of best practices for security and control, to encourage states to adopt national regulatory policies, to to initiate, right away, awareness-raising programs about the challenges of artificial intelligence, particularly amongst young people. To conclude, Madam President, clearly, artificial intelligence offers a whole range of opportunities. It supports sustainable development initiatives, helps us to prevent humanitarian and security crises, to combat climate change and its negative effects. However, in the absence of reliable regulation and effective management and control tools, AI can constitute a real threat to international peace and security. Therefore, our enthusiasm for these increasingly sophisticated technologies must be marked by prudence and restraint. Thank you.

President – United Kingdom:
Representative of Gabon for their statement. And I now give the floor to the representative of Albania.

Albania:
Thank you, Madam President, for convening this important meeting and for bringing this issue to the Security Council for a first-ever debate on the matter. Artificial intelligence has been around since decades as part of the world’s scientific and technological drive. The recent spike in its development has opened vast avenues for its use in almost every sector of human activity, as mentioned earlier by the Secretary General, our briefers, and other colleagues. Everything indicates that it is having to be front and center in an ever-seeing revolutionary technological advancement in the years to come. Our world is no stranger to scientific evolution, and including disruptive technological drive. This is part of the uninterrupted human quest for progress, part of our genome, and engraved in human history. But there is something fundamentally different with the artificial intelligence. It stands out for both its rate of progress and its scope for applications, and holds great promise to transform the world like never before, to automate processes to a scale we cannot even imagine now. While this technology advances at a mind-blowing pace, we are caught between fascination and fear, weighing benefits and worries, anticipating applications that can transform the world, but also aware of its other side, the dark side, the potential risks that could impact our safety, our privacy, our economy, and our security. Some, more alarmist, go as far as to warn about AI’s risks for our civilization. The vertiginous rhythm of development of a technology with far-reaching consequences that we are not fully able to grasp raises serious questions, and rightfully so. Because of the nature of technology, because of the lack of transparency and accountability in how algorithms reach their results, because of the fact that often not even the scientists and the engineers who design the AI models fully understand how they arrive at the outputs they produce. Depending on how the data sets they are trained on, or the way in which the algorithm is organized, AI models and systems could lead to discrimination on the basis of race, sex, age, or disability. While these can be prevented or corrected, it is far more difficult to prevent the serious risks presented by those who use the technology with the intention to cause harm. Internet and social media have already shown how detrimental such behaviors can be. Some countries continually attempt to deliberately mislead information, distort facts, interfere in the democratic process of others, spread hatred, promote discrimination, incite violence or conflict by misusing digital technologies. Deep fakes and doctored photos are being used to create convincing but false information and narratives, to produce convincing complot theories, to undermine public trust, undermine democracy and even be used to cause panic. To all these actors, artificial intelligence will provide infinite possibilities for malicious activities. Colleagues, the misuse of the AI could have a direct impact on international peace and security and poses grave security challenges for which we are currently ill-prepared. AI may be used to perpetuate bias through large-scale disinformation attacks, to develop new cyber weapons, power autonomous weapons and to design advanced biological weapons. As we reap the benefits of the technological advancement, it is a matter of urgency to use the existing rules and regulations, improve and update them and define the ethics of the use of the AI. We must also establish at national and international level the necessary safeguards, governance frameworks and clear lines of responsibility and authority to ensure that AI systems are used appropriately, safely and responsibly for the good of all and they do not infringe on the human rights and freedoms and do not affect peace and security. We must promote standards for responsible state behaviour and the applicability of the international law in the use of the AI and its technologies, but also in the monitoring and assessment of risks and implications. And this provides a role for the Security Council in this respect. Albania will continue to promote open, free and safe AI technologies where human rights, fundamental freedoms and the rule of law are respected. And I thank you.

President – United Kingdom:
I thank the representative of Albania for their statement, and I now give the floor to the representative of the Russian Federation.

Russian Federation:
Thank you, Madam President. We welcome the participation in today’s meeting of the Secretary General of the United Nations. We listened closely to his statement. We also thank the briefers for their interesting observations. The Russian Federation attributes tremendous significance to issues of the development of advanced technologies which are designed to help to benefit humankind, to facilitate socioeconomic progress as well. Artificial intelligence is one of the most modern, groundbreaking technologies which, without a doubt, kindles significant scientific interest. However, its practical significance, the potential of its application, not to mention the hypothetical impact on political processes, is yet to be fully assessed. Given assessments that were voiced today by the briefers, the impression is created that artificial intelligence as a technology is at its early stages of development. And for now, there is a lot more unknown, many more unknowns in this area. So the selection of the organizers, the presidency as the Security Council for the platform for the discussion is not necessarily evident in our view. And even in the concept note for the meeting itself, it is clear that specific arguments in support of the premise of some kind of organic link between AI and issues of international peace and security, at least for now, is not present. This merely lists a varied selection of trends that can be applied to any sphere. Madam President, in preparing for today’s meeting, I decided to pose a question about whether AI threatens international peace and security as a source, AI as a source. And here was the response. AI can pose certain threats to international peace and security, but it can also have a positive influence. These are some aspects which should be taken into account. Cyber security, development of AI could result in the emergence of new vulnerabilities and threats in cyberspace. AI can be used for the creation of potent cyber weapons, automization of cyber attacks, and circumvention of relevant defense systems. Autonomous weapons and systems, the development of autonomous weapons and systems. created based on AI can pose risks to security because such systems can make decisions about the use of force without the direct participation of individuals. This can result in certain consequences, unpredictable consequences in the emergence of armed conflict, and lastly, the spread of disinformation. Artificial intelligence can be used for the creation and the spread of disinformation and fake news, and this can undermine trust in information and help to result in greater socioeconomic tensions. So, for this reason, we believe, we view attempts to link the issue of AI to generically to threats to international peace and security as somewhat artificial, and as you see, artificial intelligence agrees with us on that. To begin, the international community has yet to determine the nature and the character of certain risks and threats, of possible risks and threats to assess their scale, possible response measures, and so on. To that end, what is necessary is a professional, scientific expertise-based discussion that can take several years, and this discussion is already underway at specialized platforms. Certain military aspects of AI, which theoretically could impact on global and regional security, are also discussed in specialized formats. With respect to lethal autonomous systems, that is the group of government experts, the states parties to the Inhumane Weapons Convention, questions of security and the use of ICTs are comprehensively discussed by the specialized U.N. Open-Ended Working Group under the aegis of the General Assembly. We believe that the duplication of these efforts is counterproductive. Madam President, as with any advanced technology, AI can can help to benefit humankind, but it can also generate destructive consequences, and this depends on who controls it and for what purposes it is applied. Today, unfortunately, we see how the West, led by the United States, with its actions is undermining trust in their own technological solutions and its own IT industry and companies. This is frequently uncovered, showing interference by American intelligence bodies and activities of major corporate sectors, the manipulation and content moderation algorithms, tracking of users, including built-in features with hardware and software. At the same time, the West has no ethical qualms about knowingly allowing AI to generate misanthropic statements in social networks if this advances a political agenda convenient to them. This is the case with the extremist Medicom Corporation and its permission for calls to destroy Russians. At the same time, algorithms instruct on how to spread fakes and disinformation and how to automatically block information that is deemed to be inaccurate by owners of networks and their sponsors and intelligence services. That is the shameful truth which is hidden. In the spirit of the notorious cancel culture, AI forces the adjustment to tailor to their needs of digital compilations, thereby generating false narratives. To summarize, the main threat generator and challenge generator in this area first and foremost is from unscrupulous champions of AI from among so-called advanced democracies. This issue has been seen in the UK deciding on the issue for today’s meeting. Today, there’s a popular premise about serious prospects of AI in terms of facilitating the emergence of new markets and sources. At the same time, we see a deceitful sidestepping of the issue of unequal distribution of these benefits. These aspects were considered in detail by the Secretary-General in the recently issued report on digital cooperation. Digital inequality has reached such a level that if in Europe, access to Internet is enjoyed by approximately 89% of residents. In low-income countries, only one-quarter of the population enjoys these benefits. Nearly two-thirds of global trade and services is currently carried out at the digital format, and yet the cost of a smartphone in South Asia and Africa, south of the Sahara, represents more than 40% of the average monthly income. And payment for mobile data for African users is more than three times greater than the global average. And citizens gaining access to digital skills is supported by governments in less than half of countries in the world. There’s an unequal distribution of benefits, and we see a handful of major platforms and states dominating here. Digital technologies have resulted in significant increase in production and generated value, but these benefits do not lead to general prosperity. The latest UNCTAD report, Technology and Innovation 2023, states that developed countries will enjoy most of the benefits of digital technology, including AI. Digital technologies are expediting the concentration of economic might in the hands of a smaller and smaller group of elites and companies. The aggregate wealth of technology billionaires in 2022 amounted to 2.1 trillion U.S. dollars. This divide reflects a massive gap in governance and cross-border governance in state investments. Historically, digital technologies were developed at

Secretary General – Antonio Guterres:
the private level. Governments frequently lag behind in regulating them for the benefit of the general public. This trend needs to be reversed. In the development of AI, governance mechanisms of leadership roles should be played by states. Any tools for self-regulating the sector should comply with national legislations of states where companies operate. We object to the establishment of supranational oversight bodies for AI. We also view as unacceptable the extraterritorial application of any norms in this area. The adoption of universal agreements in this area is acceptable only on the basis of equitable, mutually respectful dialogue among members of the international community, as well as in a way that reflects the legitimate interests and concerns of negotiating parties. Russia has already contributed to this process. In Russia, major IT companies have developed a national code of ethics for artificial intelligence, which establishes guidelines for developers and users for the safe and ethical use of systems with AI. This does not establish any legal obligations and is open for the inclusion of foreign specialized agencies, private companies, academic and social structures. This code was established as a national contribution to the implementation of the UNESCO recommendation on ethical aspects of AI. Madam President, to conclude, I wish to emphasize that no systems with AI should cast doubt on the moral and intellectual autonomy of the individual. Developers should regularly conduct risk assessments linked to the use of AI, and they should adopt measures to bring those to a minimum. Thank you for your attention.

President – United Kingdom:
I thank the representative of the Russian Federation for their statement. There are no more names inscribed on the list of speakers. I thank again our technical experts for joining us today and colleagues for your contributions. The meeting is adjourned.

Questions & Answers

What are the risks and opportunities of artificial intelligence for peace and security?

The UN Security Council held its first discussion on AI’s implications for international peace and security. Key speakers included the UN Secretary-General and representatives from various nations.

Secretary-General Antonio Guterres:
– Highlighted AI’s positive uses in peace and security efforts
– Emphasized risks: malicious use, cyberattacks, disinformation, autonomous weapons, and interaction with other technologies
– Proposed recommendations for member states, including developing national strategies and engaging in multilateral processes
– Announced a high-level advisory board for global AI governance

Jack Clark:
– Focused on AI threats due to misuse potential and unpredictability
– Emphasized the need for testing and evaluation of AI systems

Yi Zeng:
– Addressed opportunities (identifying disinformation, network defense) and risks (creating disinformation, autonomous weapons)
– Suggested a UN Security Council working group on AI for peace and security

United Kingdom:
– Highlighted AI’s potential for positive change and risks to global stability
– Announced a global summit on AI safety

Japan:
– Discussed risks beyond imagination and national borders
– Noted AI’s potential to enhance UN operations

Mozambique:
– Emphasized AI’s potential for disaster prediction and peacekeeping
– Warned of risks from AI surpassing creators’ understanding

United Arab Emirates:
– Discussed AI’s role in peace-building and conflict de-escalation
– Cautioned against over-regulation hindering innovation

China:
– Noted AI’s empowering role and concerns about misuse
– Suggested establishing ethical norms and regulations

United States:
– Highlighted AI’s potential to address global challenges
– Noted risks of intensifying conflicts and human rights abuses

Brazil:
– Discussed AI’s potential to enhance security and humanitarian efforts
– Expressed concern about AI-assisted weapons development

Switzerland:
– Noted AI’s potential in detecting false narratives and peacekeeping
– Launched the Swiss Call for Trust and Transparency Initiative

Ghana:
– Highlighted AI’s use in conflict early warning and peacekeeping
– Suggested developing principles and frameworks for AI

France:
– Discussed AI’s role in improving peacekeeper safety
– Committed to advancing an ethical approach to AI

Ecuador:
– Emphasized AI’s potential in conflict prevention
– Called for a legally binding international framework

Malta:
– Noted AI’s support for UN peacekeeping efforts
– Mentioned developing an ethical AI framework

Gabon:
– Emphasized AI’s role in early warning systems
– Stressed the importance of local ownership of AI technologies

Albania:
– Highlighted AI’s transformative potential and associated risks
– Committed to promoting open, free, and safe AI technologies

Russian Federation:
– Noted potential threats in cybersecurity and autonomous weapons
– Objected to supranational oversight bodies for AI

The discussion emphasized the need for careful regulation and international cooperation to mitigate AI risks while harnessing its potential for peace and security.

How could artificial intelligence be used to maintain peace and security?

The debate revealed broad consensus on AI’s potential to enhance UN Security Council’s work in maintaining international peace and security. However, concerns about risks and challenges were also highlighted. Many speakers emphasized the need for responsible AI development guided by ethical principles and international law. The discussion underscored the importance of addressing the digital divide and ensuring equitable AI benefits. While there was agreement on the need for global AI governance, opinions varied on specific mechanisms and forums best suited for this purpose.

Key Points from Speakers:

Secretary General – Antonio Guterres:
– AI can be used for identifying violence patterns, monitoring ceasefires, and strengthening peacekeeping efforts.
– Recommended developing national AI strategies, creating norms for military AI applications, and negotiating a legally binding instrument for autonomous weapons systems.

Jack Clark:
– Emphasized developing methods to test AI systems’ capabilities, misuses, and safety flaws.
– Suggested investing in safety testing and creating standards for evaluating AI systems.

Yi Zeng:
– Proposed establishing a UN Security Council working group on AI for peace and security.
– Stressed that AI should never pretend to be human or replace human decision-making.

Japan:
– Suggested using AI for early warning systems, sanctions monitoring, and countering disinformation.
– Welcomed efforts to utilize AI for mediation and peace-building activities.

Mozambique:
– Proposed using AI to enhance early warning capabilities, customize mediation efforts, and strengthen strategic communication in peacekeeping.

United Arab Emirates:
– Suggested using AI to analyze data for detecting terrorist activity and predicting climate change impacts on security.
– Emphasized AI’s potential to limit misattribution of attacks in conflict settings.

China:
– Recommended focusing on AI applications in conflict situations to enrich UN peacekeeping tools.
– Stressed the need for ethical norms and safety in AI development.

United States:
– Proposed using AI to improve humanitarian assistance delivery and provide early warnings for issues like climate change or conflict.

Switzerland:
– Suggested proactively monitoring AI developments and potential threats to international peace and security.
– Encouraged the UN Secretariat and peace missions to use AI responsibly.

Ghana:
– Proposed using AI for identifying early warning signs of conflicts, facilitating humanitarian assistance coordination, and enhancing peacekeeping operations.

France:
– Suggested using AI to enhance peacekeeping operations, facilitate conflict resolution, and combat climate change.

Ecuador:
– Proposed using AI to prevent conflicts, protect humanitarian personnel, improve early warning systems, and enhance UN peacekeeping operations.

Gabon:
– Suggested using AI to enhance early warning systems, improve peacekeeping operations, and aid peacebuilding processes.

United Kingdom:
– Emphasized shaping global governance of AI technologies.
– Proposed organizing a global summit on AI safety to consider risks and coordinate action.

Brazil:
– Suggested the General Assembly is better suited for long-term AI discussions.
– Emphasized ensuring AI military applications comply with international humanitarian law.

Malta:
– Proposed that the Security Council should push for strong AI governance and ensure inclusive, safe, and responsible AI deployment.

Russian Federation:
– Expressed skepticism about discussing AI in the Security Council at this stage.
– Suggested that existing specialized formats are more appropriate for discussing AI-related security issues.

What actions should be taken to mitigate the risks of AI to peace and security?

Common themes included the need for international cooperation, regulatory frameworks, and monitoring potential threats to peace and security posed by AI technologies.
Key suggestions from speakers included:

Secretary General Antonio Guterres:
– Develop common measures for AI transparency, accountability, and oversight
– Create AI that bridges divides and supports Sustainable Development Goals
– Develop national strategies for responsible AI use
– Establish norms for military AI applications
– Create a global framework for AI in counterterrorism
– Prohibit lethal autonomous weapons systems without human control

Jack Clark: Invest in developing ways to test AI systems’ capabilities, misuses, and safety flaws.

Yi Zeng: Create a working group on AI for peace and security.

Japan: Use AI to enhance Council efficiency and for mediation and peace-building activities.

Mozambique:
– Negotiate an intergovernmental treaty to govern AI
– Develop regulations for privacy and data security
– Promote a Global Digital Pact for knowledge sharing
– Implement AI-based early warning systems and sanctions monitoring

United Arab Emirates:
– Establish rules to govern AI, preventing promotion of hatred and misinformation
– Promote AI for peace-building while being aware of potential misuse
– Ensure AI is inclusive and doesn’t replicate biases

China: Explore AI applications in conflict situations to enhance UN peace toolkit.

United States: Continue discussions on AI’s impact on international peace and security.

Brazil: Suggested the General Assembly as a more suitable forum for AI discussions.

Switzerland:
– Monitor AI developments and potential threats
– Use AI to serve peace and encourage responsible use in peace missions

Ghana: Advocated for broader dialogue including private sector and academia, and confidence-building measures.

France: Advance ethical AI approach and support work on the New Agenda for Peace.

Ecuador:
– Establish a legally binding international framework for AI use
– Prohibit certain lethal autonomous weapons
– Address AI’s potential link to nuclear weapons

Malta: Push for strong AI governance and monitor developments closely.

Gabon:
– Develop a regulatory framework with control mechanisms
– Establish ethical norms and share best practices
– Encourage national regulatory policies and awareness programs

Albania: Promote standards for responsible state behavior and applicability of international law in AI use.

Russian Federation: Expressed skepticism about discussing AI in the Security Council, preferring specialized platforms.

Who are the key actors in mitigating the risks and harnessing the opportunities of AI for peace and security?

The speakers collectively emphasized the need for a multi-stakeholder approach and international cooperation to address AI challenges and opportunities in the context of peace and security.
Speakers identified the following key actors:

Secretary General Antonio Guterres:
– Governments and Member States
– The United Nations
– Private sector, civil society, and independent scientists
– A proposed new UN entity
– A multi-stakeholder, high-level advisory board for artificial intelligence

Jack Clark:
– Governments
– Private sector companies

Yi Zeng:
– Governments
– The UN Security Council
– The United Nations
– Humans in general
– Member states of the UN

President – United Kingdom:
– International organizations and initiatives
– Countries with pioneering AI strategies
– A wide coalition of international actors from all sectors
– AI developers and safety researchers
– World leaders

Japan:
– The United Nations and its bodies
– The G7
– Individual nations
– Stakeholders from around the world

Mozambique:
– Governments and UN member states
– AI specialists
– Companies
– Civil society

United Arab Emirates:
– Member states
– The United Nations and the Security Council
– Governments
– Key stakeholders

China:
– The international community
– The United Nations
– All countries, especially developing countries
– Leading technology enterprises
– International organizations, government departments, research and educational institutions, enterprises, and the public
– The Security Council

United States:
– Member states
– Technology companies
– Civil society activists
– The United Nations, particularly the Security Council and other UN bodies

Brazil:
– The United Nations and its bodies
– Humans in general, particularly in military applications

Switzerland:
– Governments and states
– Companies
– Civil society
– Research organizations
– The Security Council
– Developers and users of AI

Ghana:
– Governments and UN Member States
– Private sector and academia
– Tech giants
– The United Nations Security Council and other UN bodies

France:
– The United Nations
– France and international partnerships
– Individual countries

Ecuador:
– The United Nations and its peacekeeping operations
– The international community
– Human decision-makers
– AI researchers

Malta:
– The international community
– The Security Council
– Multiple stakeholders across various levels
– Governmental and non-governmental actors
– The UN Open-Ended Working Group on ICT and International Security

Gabon:
– States, national and international organizations, and local populations
– The United Nations
– The international community
– Young people

Albania:
– National and international bodies
– The Security Council
– States

Russian Federation:
– States
– AI developers

How should AI be governed and regulated at the international, regional and national levels?

Key points from speakers include:

Secretary General Antonio Guterres:
– Proposed creating a new UN entity for AI governance
– Announced formation of a high-level advisory board on AI
– Recommended national strategies for responsible AI development
– Called for multilateral process to develop norms for military AI applications

Jack Clark:
– Emphasized importance of evaluation and testing for effective regulation
– Suggested governments develop ways to test AI systems’ capabilities and safety

Yi Zeng:
– Proposed UN Security Council establish a working group on AI for peace and security
– Emphasized need for global framework for AI governance

United Kingdom:
– Proposed four principles for AI governance: open, responsible, secure, and resilient
– Emphasized international cooperation and involvement of existing multilateral initiatives
– Plans to host global summit on AI safety

Japan:
– Emphasized human-centric and trustworthy AI principles
– Advocated for inclusive stakeholder involvement in AI governance

Mozambique:
– Proposed negotiating intergovernmental treaty if existential risk emerges
– Suggested developing national regulations for privacy and data security
– Advocated for Global Digital Pact to share technological knowledge

United Arab Emirates:
– Suggested establishing commonly agreed-upon rules to govern AI internationally
– Emphasized using international law as a guide while recognizing need for adaptation
– Advocated for flexible regulations, particularly for developing countries

China:
– Proposed principles including ethics-first, safety, fairness, and peaceful utilization
– Supported UN’s central coordinating role in AI governance
– Emphasized need for risk awareness and equal access to AI technology

United States:
– Mentioned national steps taken to govern AI responsibly
– Encouraged international collaboration and shared principles

Brazil:
– Suggested UN General Assembly as best forum for long-term AI discussion
– Mentioned progress by Open-Ended Working Group on ICTs

Switzerland:
– Advocated for common framework shared by all players in AI development
– Emphasized existing international law applies to AI
– Stressed importance of equality and inclusion in AI development

Ghana:
– Emphasized need for principles and frameworks involving wider UN membership
– Highlighted importance of including non-governmental actors
– Advocated for comprehensive approach to AI governance

France:
– Committed to ethical and responsible approach to AI
– Mentioned need for specific regulations in certain areas, like autonomous weapons

Ecuador:
– Advocated for legally binding international framework
– Emphasized need for ethical principles and human control over AI systems

Malta:
– Suggested governance through international cooperation frameworks and norms
– Supported developing universal instruments focusing on values and practical realization

Gabon:
– Proposed strengthening international cooperation for regulatory framework
– Encouraged states to adopt national regulatory policies and awareness programs

Albania:
– Proposed using and updating existing rules and regulations
– Suggested establishing safeguards and governance frameworks at national and international levels

Russian Federation:
– Emphasized AI governance should be led by states, not supranational bodies
– Advocated for universal agreements based on equitable dialogue

The speakers generally emphasized international cooperation, ethical considerations, and balancing innovation with safety and human rights in AI governance.

What role should civil society, academia, the private sector, and other non-state actors play in the responsible use of AI?

Most speakers advocated for a multi-stakeholder approach in AI governance, with varying degrees of emphasis on the extent of non-state actor involvement.
Key points from speakers include:

Secretary General Antonio Guterres:
– Emphasized a comprehensive approach including non-state actors in AI governance
– Convening a multi-stakeholder advisory board for AI governance options

Jack Clark:
– Stressed need for broader involvement beyond private sector
– Emphasized developing robust evaluation systems for AI

Yi Zeng:
– Highlighted importance of education and human control in AI systems

United Kingdom:
– Advocated for involving a wide coalition of international actors
– Mentioned plans for a global summit on AI safety

Japan:
– Emphasized including diverse stakeholders to make AI more trustworthy

Mozambique:
– Promoted Global Digital Pact for sharing technological knowledge

United States:
– Committed to working with various actors to address AI challenges

Switzerland:
– Advocated for a common framework shared by all players in AI development
– Mentioned Swiss Call for Trust and Transparency Initiative

Ghana:
– Emphasized including private sector and academia in AI discussions
– Called for a whole-of-society approach

Malta:
– Stressed cooperation among multiple stakeholders for implementing ethical AI frameworks

Brazil:
– Emphasized inclusive discussions on AI
– Highlighted the General Assembly as the best forum for AI discussions

Ecuador:
– Called for a legally binding international framework for AI governance

Gabon:
– Emphasized education and international cooperation in AI governance

Albania:
– Stressed establishing safeguards and governance frameworks at national and international levels

Russian Federation:
– Emphasized limited role for non-state actors, subject to government oversight
– Mentioned national code of ethics for AI developed by IT companies

What initiatives or programs exist today to regulate military applications of AI and to use AI for peace?

The Security Council session addressed initiatives and programs to regulate military AI applications and use AI for peace. Key points from various speakers include:

Secretary General Antonio Guterres:
– Highlighted the 2018-2019 Guiding Principles on Lethal Autonomous Weapons Systems
– Mentioned UN Office of Counterterrorism’s work on AI and terrorism
– Announced a high-level advisory board for AI governance
– Called for a legally binding instrument to prohibit certain autonomous weapons by 2026

Yi Zeng:
– Suggested creating a UN Security Council working group on AI for peace and security

Japan:
– Contributing to rule-making through the Convention on Certain Conventional Weapons (CCW)
– Welcomed AI use in UN peacekeeping and conflict prevention

United Arab Emirates:
– Emphasized the need for AI guidelines and rules

China:
– Submitted position papers on military AI application and ethical governance
– Hosted a conference on emerging technologies’ impact on international peace
– Developing laws, regulations, and ethical norms for AI

United States:
– Released a Political Declaration on Responsible Military Use of AI and Autonomy
– Mentioned UN development of AI tools for humanitarian assistance and early warning

Brazil:
– Highlighted the Open-Ended Working Group on ICTs and the CCW

Switzerland:
– Involved in UN processes for AI legal framework
– Developing AI-assisted analysis tool for UN Operations and Crisis Center
– Launched Swiss Call for Trust and Transparency Initiative

Ghana:
– Mentioned several UN initiatives and Africa’s digital transformation strategy
– Suggested pursuing confidence-building measures

France:
– Highlighted the Global Digital Compact and Global Partnership for AI
– Mentioned the New Agenda for Peace and Summit of the Future

Ecuador:
– Advocated for a legally binding international framework to regulate AI
– Supported prohibiting certain autonomous weapons

Malta:
– Developing an ethical AI framework
– Mentioned EU’s Artificial Intelligence Act and UN working group efforts

Russian Federation:
– Mentioned group of government experts on lethal autonomous systems
– Noted Russia’s national code of ethics for AI

Gabon:
– Highlighted AI use in UN peacekeeping missions for various purposes

Overall, speakers emphasized the need for further development of international frameworks and cooperation to manage AI challenges and opportunities in peace and security contexts.

How does international law apply to the development and use of artificial intelligence?

The Security Council session addressed the application of international law to AI development and use. Key points include:

1. General consensus that existing international law applies to AI, especially in military contexts.
2. Recognition of the need for new frameworks and regulations to address AI’s unique challenges.

Key speaker contributions:

Secretary General Antonio Guterres:
– Recommended national strategies for responsible AI development
– Called for multilateral processes to develop norms for military AI applications
– Urged for a global framework to regulate AI in counterterrorism
– Advocated for a legally binding instrument to prohibit certain autonomous weapons systems

Japan: Emphasized responsible and transparent military use of AI based on international law.

Mozambique: Suggested new international agreements and regulations for AI governance.

United Arab Emirates: Affirmed that international law applies to AI but noted the need for adaptation.

United States: Focused on ethical and responsible military use of AI, emphasizing human accountability.

Brazil: Stressed that military AI applications must comply with international humanitarian law and retain human responsibility.

Switzerland: Emphasized that existing international law applies to AI and mentioned UN processes to clarify the legal framework.

France: Called for a framework for autonomous lethal weapons that respects international humanitarian law.

Ecuador: Emphasized AI development in accordance with international law and called for a legally binding framework.

Malta: Stressed the need for universal instruments focusing on practical realization of AI principles and regulation of weapons systems.

Albania: Called for promoting standards for responsible state behavior and applicability of international law in AI use.

Russian Federation: Emphasized equitable dialogue in adopting universal agreements on AI and objected to supranational oversight.

China: Called for international cooperation in AI governance and development of ethical norms and regulations.

Gabon: Stressed the importance of international cooperation in developing a regulatory framework for AI.

Ghana: Emphasized the need for global consensus on AI governance.

United Kingdom: Highlighted the urgency of shaping global governance for AI technologies.

Jack Clark: Emphasized the need for government involvement in AI development.

Yi Zeng: Stressed the importance of human control in AI systems, particularly in military applications.

In conclusion, while international law is seen as applicable to AI, there is a strong emphasis on the need for new frameworks, international cooperation, and adaptations to effectively govern AI’s unique challenges.

Albania

Speech speed

159 words per minute

Speech length

762 words

Speech time

288 secs


Arguments

AI offers vast opportunities but also poses serious risks to international peace and security

Supporting facts:

  • AI can transform the world and automate processes to an unprecedented scale
  • AI poses risks to safety, privacy, economy, and security
  • Some warn about AI’s risks for civilization


AI can be misused for malicious activities, posing grave security challenges

Supporting facts:

  • AI can be used for large-scale disinformation attacks
  • AI may be used to develop new cyber weapons
  • AI could power autonomous weapons and design advanced biological weapons


Report

In this speech to the UN Security Council, Albania’s representative addressed the opportunities and challenges posed by artificial intelligence (AI). The speaker acknowledged AI’s potential to transform various sectors but emphasised the need for responsible development and governance.

Key points included:

1. AI’s rapid advancement offers unprecedented opportunities but also poses significant risks to safety, privacy, economy, and security.

2. Albania supports the development of ethical AI governance frameworks and international cooperation to address these challenges.

3. The country advocates for open, free, and safe AI technologies that respect human rights and the rule of law.

4. Concerns were raised about AI’s potential misuse for malicious activities, including large-scale disinformation, cyber weapons, and advanced biological weapons.

5. Albania called for the promotion of responsible state behaviour and the applicability of international law in AI use.

6. The speaker emphasised the Security Council’s role in monitoring and assessing the risks and implications of AI for international peace and security.

Overall, Albania’s stance supports harnessing AI’s benefits while mitigating its risks through international cooperation and robust governance frameworks.

Brazil

Speech speed

138 words per minute

Speech length

1073 words

Speech time

468 secs


Arguments

AI has tremendous potential but also poses risks

Supporting facts:

  • AI can bolster global security architecture
  • AI can augment decision-making processes
  • AI can enhance humanitarian efforts
  • AI poses challenges including potential for autonomous weapons and cyber threats


Report

The speaker addressed the rapid development of artificial intelligence (AI) and its implications for global security and international relations. Whilst acknowledging AI’s potential to enhance decision-making processes and humanitarian efforts, the speaker emphasised the need to address associated challenges, including autonomous weapons and cyber threats.

The speech highlighted that AI’s primary development has been in civilian applications, cautioning against overly securitising the topic within the UN Security Council. Instead, the speaker advocated for broader, more inclusive discussions in forums such as the UN General Assembly, given AI’s wide-ranging impacts across various aspects of life.

Regarding military applications of AI, the speaker stressed the importance of adhering to international humanitarian law. They emphasised the concept of meaningful human control, arguing that human responsibility and accountability must be retained in decisions involving weapon systems.

The speaker raised concerns about the potential risks of AI interacting with weapons of mass destruction, citing alarming developments in AI-assisted creation of chemical compounds and pathogens. They urged caution against linking AI with nuclear weapons.

In conclusion, the speaker affirmed the UN’s unique position in promoting global coordination for AI development, emphasising the need for a concerted international effort to ensure AI benefits humanity whilst adhering to shared principles and purposes.

China

Speech speed

130 words per minute

Speech length

1341 words

Speech time

619 secs


Arguments

China supports the UN’s central coordinating role in AI governance

Supporting facts:

  • China supports the central coordinating role of the U.N. in this regard
  • China supports the Secretary General’s efforts in holding discussions amongst all parties


China has actively explored AI development and governance

Supporting facts:

  • China issued the New Generation Artificial Intelligence Development Plan in 2017
  • China has continuously improved relevant laws and regulations, ethical norms, intellectual property standards, safety monitoring, and evaluation measures


China is willing to strengthen international cooperation on AI security governance

Supporting facts:

  • China released the Global Security Initiative concept paper
  • China is willing to strengthen communication and exchange with the international community on AI security governance


Report

In this speech to the UN Security Council, China’s representative outlined their stance on artificial intelligence (AI) governance and development. The speaker emphasised China’s support for the UN’s central role in coordinating international AI governance efforts, advocating for inclusive dialogue amongst all nations.

China proposed several key principles for AI governance:

1. Prioritising ethics and ensuring AI benefits humanity
2. Maintaining safety and controllability to mitigate risks
3. Promoting fairness and inclusiveness, particularly for developing countries
4. Encouraging openness and opposing exclusive technological clubs
5. Adhering to peaceful utilisation, especially in military contexts

The speaker highlighted China’s active involvement in AI development and governance, citing their 2017 New Generation Artificial Intelligence Development Plan and ongoing efforts to improve relevant laws, regulations, and ethical norms.

China expressed willingness to strengthen international cooperation on AI security governance, referencing their Global Security Initiative and commitment to implementing the Global Development Initiative, Global Security Initiative, and Global Civilization Initiative proposed by President Xi Jinping.

Throughout, China emphasised the importance of multilateral cooperation and the need to balance technological progress with security concerns in AI development and governance.

Ecuador

Speech speed

119 words per minute

Speech length

712 words

Speech time

359 secs


Arguments

AI can contribute to peacekeeping and peacebuilding efforts

Supporting facts:

  • AI can contribute to preventing conflicts and moderating dialogues in complex situations
  • AI can assist in protecting humanitarian personnel


AI can improve UN peacekeeping operations

Supporting facts:

  • Technological solutions can help UN peacekeeping operations fulfill their mandates more effectively
  • AI can improve the security of camps and convoys by monitoring and analyzing conflicts


Ecuador recognizes the risks associated with AI misuse

Supporting facts:

  • Ecuador acknowledges threats stemming from misuse or abuse of AI for malicious or terrorist ends
  • AI systems bring risks such as discrimination or massive surveillance


Report

In this speech, Ecuador’s representative addresses the implications of artificial intelligence (AI) for peace and security. The speaker acknowledges AI’s potential to contribute positively to peacekeeping and peacebuilding efforts, citing its ability to prevent conflicts, moderate dialogues, and protect humanitarian personnel. They highlight how AI can enhance UN peacekeeping operations by improving situational awareness and adapting to changing conflict dynamics.

However, the speech also emphasises the risks associated with AI, including its potential misuse for malicious purposes, discrimination, and mass surveillance. Ecuador firmly rejects the militarisation or weaponisation of AI and expresses concern about lethal autonomous weapons systems.

The speaker advocates for a balanced approach to AI development, stressing the need for human control and judgment in its application, particularly in military contexts. Ecuador calls for a legally binding international framework to govern AI use, emphasising that ethical principles alone are insufficient. The speech concludes by urging the Security Council to address the challenges posed by AI, including its potential impact on disarmament efforts and nuclear weapons.

France

Speech speed

156 words per minute

Speech length

566 words

Speech time

218 secs


Arguments

AI can play a decisive role in maintaining peace

Supporting facts:

  • AI can contribute to the safety of blue helmets
  • AI can improve protection of civilians
  • AI can facilitate conflict resolution by mobilizing civil society


AI poses risks that need to be addressed

Supporting facts:

  • AI can heighten cyber threats
  • AI systems can be vulnerable to cyber attacks
  • AI can exacerbate information warfare through massive spread of artificial content


Report

The French representative addressed the United Nations Security Council on the topic of artificial intelligence (AI) and its implications for peace and security. The speaker emphasised that AI could play a decisive role in maintaining peace by enhancing peacekeeping operations, improving civilian protection, and facilitating conflict resolution. They also highlighted AI’s potential to advance sustainable development goals and combat climate change.

However, the speech also acknowledged the risks associated with AI, including heightened cyber threats, vulnerabilities in AI systems, and the potential for AI to exacerbate information warfare through the spread of artificial content. The speaker stressed the need to address these risks head-on, particularly in military applications and autonomous weapons systems.

France’s commitment to advancing an ethical and responsible approach to AI was underscored, citing the launch of the Global Partnership in 2020 with the EU and Council of Europe. The representative expressed support for international cooperation through the UN framework to address AI challenges, including the New Agenda for Peace and the upcoming Summit of the Future. France pledged to contribute to ensuring AI is advanced for conflict prevention, peacekeeping, and peacebuilding.

Gabon

Speech speed

149 words per minute

Speech length

814 words

Speech time

329 secs


Arguments

AI offers opportunities for peacekeeping and international security

Supporting facts:

  • AI increases analytical capacities of early warning systems
  • AI improves detection of emerging threats
  • AI enhances operational mechanisms of UN peace missions


AI poses risks to international peace and security

Supporting facts:

  • Terrorist and criminal groups can use AI for illicit activities
  • Hacker networks have increased cyber attacks and disinformation actions


Report

In a speech on artificial intelligence (AI) and its impact on international security, Gabon’s representative highlighted both the opportunities and risks presented by this emerging technology. The speaker emphasised AI’s potential to enhance peacekeeping efforts, citing improved early warning systems, better threat detection, and enhanced operational mechanisms for UN peace missions. AI was also noted for its role in supporting peacebuilding processes and post-conflict reconstruction.

However, the speech also addressed the risks associated with AI, particularly its potential misuse by terrorist and criminal groups. The speaker highlighted increased cyber attacks and disinformation campaigns as growing concerns.

Gabon advocated for the responsible use of AI and new technologies, calling for international cooperation and governance frameworks. The country supports fostering best practices, encouraging national regulatory policies, and initiating awareness programmes about AI challenges. The speech stressed the importance of transparency, accountability, and the UN’s role as a guarantor in developing appropriate control mechanisms.

While acknowledging AI’s potential to support sustainable development and address global challenges, the speaker concluded by urging prudence and restraint in the adoption of these sophisticated technologies, emphasising the need for reliable regulation and effective management tools to mitigate potential threats to international peace and security.

Ghana

Speech speed

151 words per minute

Speech length

1220 words

Speech time

484 secs


Arguments

Ghana recognizes both opportunities and risks of AI for peace and security

Supporting facts:

  • AI can be used for identifying early warning signs of conflicts
  • AI can facilitate coordination of humanitarian assistance
  • AI can improve risk assessment
  • AI can enhance safety and security of peacekeepers
  • AI integration into autonomous weapon systems is a concern


Report

Ghana’s representative addressed the UN Security Council on the topic of artificial intelligence (AI) and its implications for peace and security. The speaker acknowledged both the opportunities and risks presented by AI technologies.

On the positive side, Ghana sees potential for AI to enhance conflict prevention, humanitarian assistance, and peacekeeping operations. Specific benefits highlighted include early warning systems, improved risk assessment, and enhanced safety for peacekeepers.

However, the speech also raised concerns about AI integration into autonomous weapons systems and the potential for AI to be misused by malicious actors to spread disinformation or destabilise societies.

Ghana advocated for the development of global principles and frameworks to govern AI technologies for peaceful purposes. They emphasised the need for a collaborative, whole-of-society approach involving governments, the private sector, and civil society. The speaker called for constraining “excesses of individual national ambitions” and building global consensus on AI governance.

The representative expressed support for ongoing UN initiatives, including the Secretary-General’s efforts to address new threats to international peace and security. Ghana also welcomed discussions on AI deployment in peacekeeping at the upcoming ministerial meeting in Accra.

In conclusion, the speech underscored Ghana’s commitment to advancing constructive dialogue on AI technologies for global peace and security, while emphasising the importance of maintaining human rights at the core of ethical principles.

Jack Clark

Speech speed

165 words per minute

Speech length

1339 words

Speech time

486 secs


Arguments

AI development should not be left solely to private sector actors

Supporting facts:

  • Private sector actors have sophisticated computers, large pools of data, and capital resources to build AI systems
  • Private sector development of AI poses potential threats to peace, security, and global stability


Governments must develop state capacity to regulate AI

Supporting facts:

  • Governments need to come together and make AI development a shared endeavor across all parts of society
  • Current AI development is dictated by a small number of firms competing in the marketplace


AI systems have potential for misuse and unpredictability

Supporting facts:

  • Beneficial AI capabilities can sit alongside potential misuses, such as biological weapons development
  • AI systems may exhibit subtle problems not identified during development once deployed


Report

The speaker argues that artificial intelligence (AI) development should not be left solely to private sector actors, emphasising the need for global governmental cooperation and regulation. They highlight the rapid advancement of AI technologies over the past decade, noting that private companies now possess the resources to create increasingly powerful systems.

While acknowledging the potential benefits of AI, the speaker warns of threats to peace, security, and global stability. These concerns stem from AI’s potential for misuse and unpredictability. The speaker likens AI to a form of human labour that can be bought and sold at computer speed, raising questions about who should have access to and control over this technology.

To address these challenges, the speaker advocates for developing robust testing and evaluation methods for AI systems. They argue that this is crucial for creating accountability among AI developers and enabling effective government regulation. The speaker notes that many countries are emphasising safety testing and evaluation in their AI policy proposals.

In conclusion, the speaker stresses the importance of international cooperation in AI governance. They argue that investing in evaluation systems is essential for maintaining a balance of power between AI developers and global citizens, ensuring that the benefits of AI can be reaped by the global community while mitigating potential risks.

Japan

Speech speed

99 words per minute

Speech length

432 words

Speech time

262 secs


Arguments

Japan emphasizes the importance of human-centric and trustworthy AI

Supporting facts:

  • Japan believes the key to addressing AI challenges is twofold: human-centric and trustworthy AI
  • AI should enhance human potential, not control humans


Japan proposes using AI to enhance UN and Security Council operations

Supporting facts:

  • Japan suggests considering how AI can enhance efficiency and transparency in Security Council decision-making
  • Japan proposes using AI for early warning systems, sanctions monitoring, and countering disinformation in UN operations


Report

In a speech to the UN Security Council, Japan’s representative emphasised the importance of addressing the challenges and opportunities presented by artificial intelligence (AI). The speaker outlined two key principles: human-centric and trustworthy AI.

Japan advocated for AI development that aligns with democratic values and human rights, stressing that AI should enhance human potential rather than control humans. The nation called for responsible, transparent, and lawful military use of AI, pledging to contribute to international rule-making processes.

The speaker highlighted the UN’s role in fostering trustworthy AI by bringing together diverse stakeholders. Japan expressed pride in leading discussions on AI misuse by terrorists and launching the G7 Hiroshima AI process.

Furthermore, Japan proposed leveraging AI to enhance UN and Security Council operations. Suggestions included using AI for early warning systems, sanctions monitoring, and countering disinformation in UN operations. The speaker also recommended considering how AI could improve the efficiency and transparency of Security Council decision-making.

In conclusion, Japan affirmed its commitment to actively participate in AI discussions at the UN and beyond, emphasising the need for global collaboration in addressing AI-related challenges and opportunities.

Malta

Speech speed

131 words per minute

Speech length

699 words

Speech time

319 secs


Arguments

Malta supports the development of universal instruments for ethical AI frameworks

Supporting facts:

  • Malta believes that cooperation of multiple stakeholders is essential for implementing ethical frameworks regarding AI
  • The international community needs to develop universal instruments that focus on articulation of values and principles and their practical realization


Report

Malta’s representative addressed the UN Security Council on the topic of artificial intelligence (AI) and its implications for international peace and security. The speaker emphasised the need for responsible innovation and ethical frameworks in AI development, advocating for universal instruments that focus on practical realisation of values and principles.

Malta is developing its own ethical AI framework aligned with EU guidelines, based on a human-centric approach, respect for laws and human rights, maximising benefits whilst minimising risks, and alignment with international standards. The country supports strong AI governance and inclusive, safe deployment through experience sharing and collaborative frameworks.

Concerns were raised about AI in military operations, with Malta calling for a ban on lethal autonomous weapons systems that exploit AI. The speaker stressed that only weapons fully respecting international humanitarian and human rights law should be regulated.

The Security Council was urged to play an anticipatory role in monitoring AI developments and addressing potential threats to international peace and security. Malta emphasised the importance of international cooperation, responsible governance, and ethical considerations to harness AI’s potential whilst mitigating risks.

Mozambique

Speech speed

113 words per minute

Speech length

857 words

Speech time

455 secs


Arguments

AI presents both opportunities and risks for international peace and security

Supporting facts:

  • AI can contribute to eradicating disease, combating climate change, and predicting natural disasters
  • AI poses risks of various kinds, including the potential of catastrophic outcomes


AI resources and capabilities are not evenly distributed globally

Supporting facts:

  • Resources such as data, computer power, electricity, skills, and technological infrastructure are not evenly distributed across the globe
  • This uneven distribution could reinforce inequalities and asymmetries


Report

The Mozambican representative addressed the UN Security Council on the topic of artificial intelligence (AI) and its implications for international peace and security. The speech highlighted both the opportunities and risks presented by AI’s rapid advancement.

Key points included:

1. AI’s potential to contribute positively in areas such as disease eradication, climate change mitigation, and natural disaster prediction.

2. Concerns about AI’s capacity to spread misinformation, facilitate scams, and enable other nefarious activities.

3. Mozambique’s advocacy for a balanced approach to AI governance, including:
– Negotiating an intergovernmental treaty to monitor AI use
– Developing regulations to safeguard privacy and data security
– Promoting a Global Digital Pact to facilitate knowledge sharing between nations

4. The importance of responsible innovation and addressing the peace and security implications of AI.

5. Recognition that AI resources and capabilities are not evenly distributed globally, potentially reinforcing inequalities and asymmetries.

The speech emphasised the need for collaborative efforts between AI specialists, governments, companies, and civil society to mitigate risks and foster responsible AI practices. Mozambique called for a balanced approach that harnesses AI’s potential while implementing necessary safeguards to prevent it from becoming a source of conflict or exacerbating global inequalities.

President – United Kingdom

Speech speed

123 words per minute

Speech length

1155 words

Speech time

562 secs


Arguments

AI will fundamentally transform every aspect of human life

Supporting facts:

  • AI may lead to groundbreaking discoveries in medicine
  • AI could boost productivity in economies
  • AI may help adapt to climate change and reduce violent conflict


AI will affect the work of the Security Council

Supporting facts:

  • AI could enhance or disrupt global strategic stability
  • AI challenges fundamental assumptions about defence and deterrence
  • AI poses moral questions about accountability for lethal decisions on the battlefield


Transformative Potential of AI

Supporting facts:

  • AI is poised to fundamentally change every aspect of human life, from medicine and education to climate adaptation and economic productivity.
  • It has the potential to help achieve the Sustainable Development Goals and reduce violent conflict, offering immense benefits to humanity


Need for Global Governance

Supporting facts:

  • AI’s borderless nature makes it essential to establish global governance frameworks that are open, responsible, secure, and resilient.
  • The UK advocates for international collaboration and engagement across sectors, building on existing initiatives by UNESCO, OECD, the G20, and other organizations.


Report

The United Kingdom convened the first-ever Security Council discussion on artificial intelligence (AI), emphasising its potential to transform every aspect of human life. The UK representative highlighted AI’s capacity to drive advancements in medicine, boost economic productivity, aid climate change adaptation, and potentially reduce violent conflict.

However, the speech also addressed AI’s implications for international peace and security. It was noted that AI could disrupt global strategic stability, challenge assumptions about defence and deterrence, and raise moral questions about battlefield accountability. The spread of disinformation facilitated by AI was identified as a significant threat to democracy and stability.

The UK proposed a vision for AI governance based on four principles: openness, responsibility, security, and resilience. These principles aim to support freedom and democracy, uphold the rule of law and human rights, ensure safety and predictability, and foster public trust.

To address the urgent need for global governance of transformative technologies, the UK announced plans to host the first major global summit on AI safety in autumn. The summit aims to bring together world leaders to consider AI risks and decide on coordinated action.

The speech concluded by emphasising the importance of seizing the opportunities presented by AI while addressing its challenges, particularly those related to international peace and security, through decisive and unified global action.

Russian Federation

Speech speed

153 words per minute

Speech length

1273 words

Speech time

501 secs


Arguments

Russia questions the appropriateness of discussing AI in the Security Council at this stage

Supporting facts:

  • The impression is created that artificial intelligence as a technology is at its early stages of development
  • There are many unknowns in this area
  • Specific arguments in support of the premise of some kind of organic link between AI and issues of international peace and security are not present


Russia highlights the issue of unequal distribution of AI benefits

Supporting facts:

  • In Europe, 89% of residents have access to the Internet, while in low-income countries, only one-quarter of the population enjoys these benefits
  • The cost of a smartphone in South Asia and Africa represents more than 40% of the average monthly income
  • Payment for mobile data for African users is more than three times greater than the global average


Report

In this speech to the UN Security Council, Russia’s representative expresses scepticism about discussing artificial intelligence (AI) in this forum at this stage. They argue that AI’s impact on international peace and security is not yet fully understood, and that more specialised, scientific discussions are needed before addressing it at the Security Council level.

The speaker criticises Western nations, particularly the United States, for undermining trust in their technological solutions and IT industry. They accuse these countries of manipulating content moderation algorithms, tracking users, and allowing the spread of disinformation when it suits their political agenda.

A significant portion of the speech focuses on the unequal distribution of digital technology benefits globally. The speaker highlights the stark digital divide between developed and developing nations, citing statistics on internet access, smartphone affordability, and mobile data costs. They argue that this inequality extends to the potential benefits of AI, with developed countries likely to reap most of the advantages.

The speech concludes by emphasising the concentration of economic power in the hands of a small group of technology elites and companies, suggesting that digital technologies, including AI, are exacerbating global inequality rather than promoting general prosperity.

Secretary General – Antonio Guterres

Speech speed

156 words per minute

Speech length

2023 words

Speech time

776 secs


Arguments

AI has unprecedented speed and reach with potential for global impact

Supporting facts:

  • CHET-GPT reached 100 million users in just two months
  • AI could contribute between 10 and 15 trillion US dollars to the global economy by 2030


AI has potential to enhance UN peacekeeping, mediation, and humanitarian efforts

Supporting facts:

  • AI is being used to identify patterns of violence, monitor ceasefires, and more


AI poses significant risks to international peace and security

Supporting facts:

  • AI-enabled cyberattacks are targeting critical infrastructure
  • AI could be used for terrorist, criminal, or state purposes


AI governance should involve multiple stakeholders

Supporting facts:

  • Guterres calls for integration of private sector, civil society, independent scientists in AI governance
  • Proposes a multi-stakeholder, high-level advisory board for AI


Governments should play a leading role in AI governance

Supporting facts:

  • Historically, digital technologies were developed at the private level
  • Governments frequently lag behind in regulating them for the benefit of the general public


Universal agreements on AI should be based on equitable dialogue

Supporting facts:

  • The adoption of universal agreements in this area is acceptable only on the basis of equitable, mutually respectful dialogue among members of the international community


AI systems should not undermine individual autonomy

Supporting facts:

  • No systems with AI should cast doubt on the moral and intellectual autonomy of the individual


Regular risk assessments for AI use are necessary

Supporting facts:

  • Developers should regularly conduct risk assessments linked to the use of AI, and they should adopt measures to bring those to a minimum


Report

In a landmark UN Security Council debate on artificial intelligence (AI), the speaker addressed the unprecedented speed and potential impact of AI on global affairs. They highlighted AI’s capacity to contribute significantly to the global economy and its applications in peacekeeping, mediation, and humanitarian efforts.

However, the speaker also emphasised the urgent need for global AI governance, citing potential risks to international peace and security. These include AI-enabled cyberattacks, the spread of disinformation, and the possible misuse of AI for terrorist or criminal purposes. The speaker called for a prohibition on lethal autonomous weapons without human control and urged negotiations for a legally binding instrument by 2026.

To address these challenges, the speaker proposed creating a new UN entity to support collective efforts in AI governance. This body would aim to maximise AI’s benefits, mitigate risks, and establish international monitoring and governance mechanisms. The speaker also announced the formation of a high-level advisory board on AI to explore global governance options.

The speech stressed the importance of multi-stakeholder involvement in AI governance, including the private sector, civil society, and independent scientists. It emphasised the need for governments to play a leading role in AI regulation and development, reversing the trend of private sector dominance in technological innovation.

The speaker concluded by calling for AI that bridges social, digital, and economic divides, urging the development of reliable and safe AI to address global challenges and advance the Sustainable Development Goals.

Switzerland

Speech speed

148 words per minute

Speech length

824 words

Speech time

335 secs


Arguments

AI has potential to serve peace and security

Supporting facts:

  • Switzerland is developing an AI-assisted analysis tool for the UN Operations and Crisis Center
  • AI can be used to detect false narratives and hate speech


Need for a common framework for AI development and application

Supporting facts:

  • Switzerland calls for a framework shared by governments, states, companies, civil society, and research organizations
  • Existing international law applies to AI


Report

In this speech, Switzerland’s representative addresses the potential of artificial intelligence (AI) in promoting peace and security while acknowledging its associated risks. The speaker emphasises the need for responsible AI development and application, calling for a common framework shared by governments, companies, civil society, and research organisations.

Key points include:

1. AI’s potential to serve peace and security, exemplified by Switzerland’s development of an AI-assisted analysis tool for the UN Operations and Crisis Center.

2. The importance of countering AI-related risks, such as cyber operations and disinformation, while leveraging AI to detect false narratives and hate speech.

3. The applicability of existing international law to AI, including the UN Charter, international humanitarian law, and human rights.

4. The necessity for human-centred AI development guided by ethical considerations, with clear accountability for states, companies, and individuals.

5. The opportunity to ensure equality and inclusion in AI development, avoiding the reproduction of harmful societal biases.

6. A call for the Security Council to proactively monitor AI developments, anticipate risks and opportunities, and encourage responsible use by the Secretariat and peace missions.

The speech concludes by advocating for the integration of AI for good as an integral part of the new agenda for peace.

United Arab Emirates

Speech speed

153 words per minute

Speech length

788 words

Speech time

310 secs


Arguments

Establish rules and guardrails for AI governance

Supporting facts:

  • There is a brief window of opportunity available now where key stakeholders are willing to unite and consider the guardrails for this technology
  • Member states should pick up the mantle from the Secretary General and establish commonly agreed-upon rules to govern AI before it’s too late


Use AI as a tool for peace-building and conflict de-escalation

Supporting facts:

  • AI-driven tools have the potential to more effectively analyze vast amounts of data, trends and patterns
  • AI can increase ability to detect terrorist activity in real time and predict adverse effects of climate change on peace and security


Report

The speaker addressed the United Nations Security Council on the topic of artificial intelligence (AI) and its implications for international peace and security. Four key points were emphasised:

1. The urgent need to establish rules and guardrails for AI governance. With AI development outpacing Moore’s Law, there is a brief window of opportunity for stakeholders to unite and agree upon common rules before it’s too late.

2. The potential for AI to be used as a tool for peace-building and conflict de-escalation. AI-driven tools can analyse vast amounts of data to detect terrorist activity and predict climate change impacts on security. However, the speaker cautioned against potential misuse of AI in targeting infrastructure or spreading false narratives.

3. The importance of ensuring AI is inclusive and does not replicate real-world biases. The speaker warned that progress against discrimination could be undermined if AI is not designed and accessed based on principles of equality.

4. The need to avoid over-regulating AI to maintain innovation, particularly in emerging nations. The speaker advocated for smart, effective regulations that encourage responsible behaviour without hindering technological evolution.

The speech concluded by urging proactive action to shape AI in a way that preserves international peace and security, rather than waiting for a crisis to occur.

United States

Speech speed

128 words per minute

Speech length

676 words

Speech time

318 secs


Arguments

AI offers potential benefits for addressing global challenges

Supporting facts:

  • Automated systems are helping to grow food more efficiently
  • AI can predict storm paths
  • AI can identify diseases in patients


AI poses potential risks to international peace and security

Supporting facts:

  • AI can spread mis- and disinformation
  • AI can amplify bias and inequality
  • AI can enhance malicious cyber operations
  • AI can exacerbate human rights abuses


Report

The United States recognises both the potential benefits and risks of artificial intelligence (AI) in relation to global peace and security. The speaker highlighted AI’s capacity to address challenges like food security, education, and healthcare, potentially accelerating progress towards Sustainable Development Goals. However, they also acknowledged AI’s potential to exacerbate threats and conflicts through misinformation, bias amplification, and human rights abuses.

The US advocates for a balanced approach to AI governance, emphasising responsible innovation and ethical military use. They have developed domestic initiatives, including an AI risk management framework and a blueprint for an AI Bill of Rights, to guide the safe and trustworthy development of AI systems.

Internationally, the US has proposed a Political Declaration on Responsible Military Use of AI and Autonomy, emphasising human accountability and encouraging all member states to endorse it. The speaker expressed support for UN efforts to leverage AI for humanitarian assistance and early warning systems, and welcomed continued discussions in the Security Council on AI’s impact on international peace and security.

The US commitment to working collaboratively with various stakeholders, including member states, technology companies, and civil society, was underscored. The speaker emphasised the importance of ensuring AI serves as a tool for enhancing human dignity and achieving shared aspirations for a more secure and peaceful world, rather than as a weapon or tool of oppression.

Yi Zeng

Speech speed

143 words per minute

Speech length

1185 words

Speech time

498 secs


Arguments

AI should be used for peace and security, not to enhance risks

Supporting facts:

  • AI should be used to identify disinformation and misunderstanding among countries
  • AI should be used for network defense, not to attack


Current AI systems are not truly intelligent and cannot be trusted for decision-making

Supporting facts:

  • Current AIs are information processing tools without real understanding
  • AI should not be used to directly make death decisions for humans


AI should be used to solve problems, not create them

Supporting facts:

  • Example of using AI to potentially deflect asteroids rather than for nuclear weapons
  • Humans should maintain responsibility for final decision-making on nuclear weapons


Report

Dr Yi Zeng presented his views on artificial intelligence (AI) in relation to international peace and security. He emphasised that AI should be used to promote peace and solve global challenges, rather than exacerbate risks or create problems.

Key points included:

1. AI should be utilised to identify disinformation and foster understanding between nations, not for military aggression.

2. Current AI systems lack true intelligence and understanding, making them unsuitable for critical decision-making roles.

3. Human control must be maintained over AI-enabled weapon systems, with effective and responsible oversight.

4. The UN Security Council should consider establishing a working group on AI for peace and security to address near-term and long-term challenges.

5. AI should be used to solve problems, such as potentially deflecting asteroids, rather than enhancing nuclear weapons capabilities.

6. The United Nations must play a central role in developing a global framework for AI governance to ensure peace and security.

Dr Zeng concluded by stressing the importance of international cooperation in setting up an agenda and framework for AI development and governance, emphasising the need for a shared future that leaves no one behind.