Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action
25 Jun 2025 10:30h - 11:30h
Session at a glance
Summary
This discussion centered on the Freedom Online Coalition’s updated Joint Statement on Artificial Intelligence and Human Rights for 2025, presented during a session on shaping global AI governance through multi-stakeholder action. The panel featured representatives from Estonia, the Netherlands, Germany, Ghana, and Microsoft, highlighting the collaborative effort to address AI’s impact on human rights.
Ambassador Ernst Noorman from the Netherlands emphasized that human rights and security are interconnected, noting that when rights are eroded, societies become more unstable rather than safer. He stressed that AI risks are no longer theoretical, citing examples of AI being used to suppress dissent, distort public discourse, and facilitate gender-based violence. The Netherlands learned this lesson through their painful experience with biased automated welfare systems that deepened injustice for citizens.
The panelists identified surveillance, disinformation, and suppression of democratic participation as the most urgent human rights risks posed by AI, particularly when embedded in government structures without transparency or accountability. Dr. Erika Moret from Microsoft outlined the private sector’s responsibilities, emphasizing adherence to UN guiding principles on business and human rights, embedding human rights considerations from design through deployment, and ensuring fairness and inclusivity in AI systems.
Several frameworks were discussed as important next steps, including the legally binding EU AI Act and Council of Europe AI Framework Convention, as well as the Global Digital Compact. The panelists emphasized the importance of multi-stakeholder collaboration, procurement policies that prioritize human rights-compliant AI systems, and the need for transparency and accountability in high-impact AI applications. The statement, endorsed by 21 countries at the time of the session, remains open for additional endorsements and represents a principled vision for human-centric AI governance grounded in international human rights law.
Key points
## Major Discussion Points:
– **Launch of the Freedom Online Coalition’s Joint Statement on AI and Human Rights 2025**: The primary focus was presenting an updated statement led by the Netherlands and Germany that establishes human rights as the foundation for AI governance, with 21 countries endorsing it and more expected to follow.
– **Urgent Human Rights Risks from AI**: Key concerns identified include arbitrary surveillance and monitoring by governments, use of AI for disinformation and suppression of democratic participation, bias and discrimination against marginalized groups (especially women and girls), and concentration of power in both state and private sector hands without adequate oversight.
– **Multi-stakeholder Responsibilities and Collaboration**: Discussion emphasized that addressing AI governance requires coordinated action from governments (through regulation and procurement policies), private sector (through human rights due diligence and ethical AI principles), and civil society, with Microsoft presenting their comprehensive approach as an example.
– **Binding International Frameworks and Next Steps**: Panelists highlighted several key governance mechanisms including the EU AI Act, Council of Europe’s AI Framework Convention, UN Global Digital Compact, and various UN processes, emphasizing the need for broader adoption and implementation of these frameworks.
– **Practical Implementation Challenges**: The discussion addressed real-world complexities of protecting human rights in repressive regimes, balancing innovation with rights protection, and the need for transparency while protecting privacy, with examples from the Netherlands’ own failures in automated welfare systems.
## Overall Purpose:
The discussion aimed to launch and promote the Freedom Online Coalition’s updated joint statement on AI and human rights, while building consensus among governments, private sector, and civil society on the urgent need for human rights-centered AI governance and identifying concrete next steps for implementation.
## Overall Tone:
The tone was consistently collaborative and constructive throughout, with speakers demonstrating shared commitment to human rights principles. There was a sense of urgency about AI risks but also optimism about multilateral cooperation. The discussion maintained a diplomatic yet determined quality, with participants acknowledging challenges while emphasizing collective action and practical solutions. The tone became slightly more interactive and engaging during the Q&A portion, with audience questions adding practical perspectives from different regions and sectors.
Speakers
– **Zach Lampell** – Senior Legal Advisor and Coordinator for Digital Rights at the International Center for Not-for-Profit Law; Co-chair of the Freedom Online Coalition’s Task Force on AI and Human Rights
– **Rasmus Lumi** – Director General for International Organizations and Human Rights with the Government of Estonia; Chair of the Freedom Online Coalition in 2025
– **Ernst Noorman** – Cyber Ambassador for the Netherlands
– **Maria Adebahr** – Cyber Ambassador of Germany; Co-chair of the Task Force on AI and Human Rights
– **Devine Salese Agbeti** – Director General of the Cyber Security Authority of Ghana
– **Erika Moret** – Director with UN and international organizations at Microsoft
– **Audience** – Multiple audience members who asked questions during the Q&A session
**Additional speakers:**
– **Svetlana Zenz** – Works on the Asia region, focusing on bridging civil society and tech/telecom companies in Asia
– **Carlos Vera** – From IGF Ecuador
Full session report
# Freedom Online Coalition’s Joint Statement on AI and Human Rights 2025: A Multi-Stakeholder Discussion on Global AI Governance
## Executive Summary
This session focused on the launch of the Freedom Online Coalition’s updated Joint Statement on Artificial Intelligence and Human Rights for 2025, an update to the coalition’s original 2020 statement. The panel brought together representatives from Estonia, the Netherlands, Germany, Ghana, and Microsoft to discuss the statement and broader challenges in AI governance. At the time of the session, the statement had been endorsed by 21 countries, with expectations for additional endorsements from both FOC and non-FOC members.
The discussion centered on how AI systems pose risks to fundamental human rights including freedom of expression, right to privacy, freedom of association, and freedom of assembly. Speakers addressed the need for human rights to be embedded in AI development from the design phase, the importance of multi-stakeholder collaboration, and the role of international frameworks in governing AI systems.
## Key Participants and Their Contributions
**Rasmus Lumi**, Director General for International Organizations and Human Rights with the Government of Estonia and Chair of the Freedom Online Coalition in 2025, opened the session with remarks about AI-generated notes, noting humorously that the AI “did say all the right things” regarding human rights, which led him to wonder about AI’s understanding of these concepts.
**Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, shared his country’s experience with automated welfare systems, stating: “In the Netherlands, we have learned this the hard way. The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures.” He emphasized that “Innovation without trust is short-lived. Respect for rights is not a constraint, it’s a condition for sustainable, inclusive progress.”
**Maria Adebahr**, Cyber Ambassador of Germany and co-chair of the FOC Task Force on AI and Human Rights, highlighted transnational repression as a key concern, noting: “AI unfortunately is also a tool for transnational repression… in terms of digital and AI, we are reaching here new levels, unfortunately.” She announced that Germany had doubled its funding for the Freedom Online Coalition to support this work.
**Devine Salese Agbeti**, Director General of the Cyber Security Authority of Ghana, provided perspective on bidirectional AI misuse, observing: “I have seen how citizens have used AI to manipulate online content, to lie against government, to even create cryptocurrency pages in the name of the president, etc. So it works both ways.” He identified surveillance, disinformation, and suppression of democratic participation as key concerns.
**Dr. Erika Moret**, Director with UN and international organizations at Microsoft, outlined private sector responsibilities, emphasizing that companies must adhere to the UN Guiding Principles on Business and Human Rights and embed human rights considerations throughout the AI development lifecycle.
**Zach Lampell**, co-chair of the FOC Task Force on AI and Human Rights, facilitated the discussion and emphasized that the statement remains open for additional endorsements, representing a principled vision for human-centric AI governance.
## Key Themes and Areas of Discussion
### Human Rights as Foundation for AI Governance
All speakers agreed that human rights should serve as the foundation for AI governance rather than being treated as secondary considerations. The discussion emphasized the need to put humans at the center of AI development and ensure compliance with international human rights law.
### Multi-Stakeholder Collaboration
Participants stressed that effective AI governance requires collaboration between governments, civil society, private sector, academia, and affected communities. Speakers noted that no single sector can address AI challenges alone.
### International Frameworks and Standards
The discussion covered several key governance mechanisms:
– **EU AI Act**: Referenced as creating predictability and protecting citizens, with potential for global influence
– **Council of Europe AI Framework Convention**: Presented as providing a globally accessible binding framework
– **UN Global Digital Compact**: Noted as representing the first time all UN member states agreed on an AI governance path
– **Hamburg Declaration on Responsible AI for the SDGs**: Mentioned as another relevant framework
– **UNESCO AI Ethics Recommendation**: Referenced in the context of global standards
### Urgent Human Rights Risks
Speakers identified several critical areas where AI poses immediate threats:
– Arbitrary surveillance and monitoring by governments
– Use of AI for disinformation campaigns and suppression of democratic participation
– Transnational repression using AI tools
– Bias and discrimination against marginalized groups
– Concentration of power without adequate oversight
### Private Sector Responsibilities
Dr. Moret outlined comprehensive responsibilities for companies, including:
– Adhering to UN guiding principles on business and human rights
– Embedding human rights considerations from design through deployment
– Conducting ongoing human rights impact assessments
– Ensuring transparency and accountability in AI systems
– Engaging in multi-stakeholder collaboration
## Practical Implementation Approaches
### Government Procurement
Ernst Noorman highlighted procurement as a practical tool, stating: “We have to use procurement as a tool also to force companies to deliver products which are respecting human rights, which have human rights as a core in their design of their products.”
### Coalition Building
Speakers discussed using smaller coalitions to achieve broader global adoption, with Noorman referencing the “oil spill effect” where regional frameworks like the Budapest Convention eventually gain wider acceptance.
### Diplomatic Engagement
The discussion emphasized combining formal diplomatic engagement with informal discussions to promote human rights principles in AI governance globally.
## Audience Questions and Broader Participation
The Q&A session included questions from both in-person and online participants. Carlos Vera from IGF Ecuador asked about how civil society organizations and non-FOC members can provide comments and support for the declaration. Svetlana Zenz raised questions about how risks can be diminished when tech companies work with oppressive governments and what practical actions civil society can take.
Online participants contributed questions about improving transparency in AI decision-making while protecting sensitive data, and identifying global frameworks with binding obligations on states for responsible AI governance.
## The FOC Joint Statement: Content and Next Steps
The statement addresses threats to fundamental freedoms including freedom of expression, right to privacy, freedom of association, and freedom of assembly. It provides actionable recommendations for governments, civil society, and private sector actors, addressing commercial interests, environmental impact, and threats to fundamental freedoms.
Several concrete next steps emerged from the discussion:
– The Task Force on AI and Human Rights committed to meeting to discuss creating space for civil society comments and support
– FOC members agreed to continue diplomatic engagement with non-member governments
– Continued work on promoting the statement’s principles through both formal and informal channels
## Implementation Challenges
The discussion acknowledged several ongoing challenges:
– Balancing transparency in AI decision-making with protection of sensitive data and privacy rights
– Addressing both government and citizen misuse of AI systems
– Ensuring meaningful participation of Global South countries and marginalized communities
– Protecting human rights in AI systems operating under repressive regimes
## Conclusion
The session demonstrated broad agreement among diverse stakeholders on the need for human rights-based AI governance. The FOC Joint Statement on AI and Human Rights 2025 provides a framework for coordinated international action, with concrete recommendations for implementation across sectors. The discussion emphasized practical approaches including government procurement policies, multi-stakeholder engagement, and diplomatic outreach to promote these principles globally.
The collaborative approach demonstrated in the session, combined with specific commitments to follow-up actions and broader engagement, positions the Freedom Online Coalition’s initiative as a significant contribution to global AI governance discussions. The success of this approach will depend on sustained commitment to implementation and continued collaboration across sectors and borders.
Session transcript
Zach Lampell: Welcome to the session, Shaping Global AI Governance Through Multi-Stakeholder Action, where we are pleased to present the Freedom Online Coalition’s Joint Statement on Artificial Intelligence and Human Rights 2025. My name is Zach Lampell. I’m Senior Legal Advisor and Coordinator for Digital Rights at the International Center for Not-for-Profit Law. I’m also pleased to be a co-chair of the Freedom Online Coalition’s Task Force on AI and Human Rights with the Government of the Netherlands and the Government of Germany. I want to welcome Mr. Rasmus Lumi, Director General for International Organizations and Human Rights with the Government of Estonia and the Chair of the Freedom Online Coalition in 2025.
Rasmus Lumi: Thank you very much. Good morning, everybody. It is a great honor for me to be here today to welcome you all to this session on the issue of how we feel about, or are deceived by, the extremely smart artificial intelligence. When I read through the notes that it offered me, it did say all the right things, which is totally understandable. The question is, did it do it on purpose, maybe maliciously, trying to deceive us into thinking that AI also believes in human rights? So we'll have to take care of this. And this joint statement that we have developed under the leadership of the Netherlands is exactly one step on the way toward doing this, putting humans at the center of AI development. So I would like to take this opportunity to very much thank the Netherlands for leading this discussion and this preparation in the Freedom Online Coalition, and I hope that in the coalition, and also elsewhere, this work will continue, in order to make sure that humans and human rights remain the focus of all technological development. Thank you very much.
Zach Lampell: Thank you. I now turn the floor to Ambassador Ernst Noorman, the Cyber Ambassador for the Netherlands.
Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in The Hague to discuss defence and security, we are here to address a different but equally urgent task: protecting human rights in the age of AI. These are not separate. Human rights and security are, or should be, two sides of the same coin. When rights are eroded, when civic space shrinks, when surveillance escapes oversight, when information is manipulated, societies don't become safer. They become more unstable, more fragile. Since the original FOC statement on AI and human rights in 2020, a lot has happened. I only have to mention the introduction of ChatGPT, just referred to by Rasmus, in November 2022, and how different AI tools are evolving every single day. AI is shaping governance, policy, and daily life. Its benefits are real, but so are the risks. And those risks are no longer theoretical. We now see AI used to suppress dissent, distort public discourse, and facilitate gender-based violence. In some countries, these practices are becoming embedded in state systems, with few checks and little or no transparency. At the same time, only a handful of private actors shape what we see. They influence democratic debate and dominate key markets, without meaningful oversight. This double concentration of power threatens both public trust and democratic resilience. That's why the Netherlands, together with Germany and the International Centre for Not-for-Profit Law, has led the update of the Freedom Online Coalition's joint statement on artificial intelligence and human rights. I'm grateful to all of you, governments, civil society, private sector, experts, for the thoughtful contributions that shaped it. This updated statement, our joint response to the present reality of AI, sets out a principled and practical vision.
A vision for a human-centric AI, governed with care, grounded in human rights, and shaped through inclusive multi-stakeholder processes. It recognizes that risks arise across the AI lifecycle, not only through misuse, but from design to deployment. The statement calls for clear obligations for both states and the private sector, and strong safeguards for those most at risk, especially women and girls. It calls for transparency and accountability in high-impact systems, for cultural and linguistic inclusion, and for attention to the environmental and geopolitical dimensions of AI. Some claim that raising these issues could hinder innovation. We disagree. Innovation without trust is short-lived. Respect for rights is not a constraint, it's a condition for sustainable, inclusive progress. In the Netherlands, we have learned this the hard way. The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures. It showed how algorithms, if not designed and deployed correctly, can deepen injustice. And it has already taken years to try to correct the personal harm it caused. In response, we have strengthened our approach to avoid similar failures: by applying human rights impact assessments, by applying UNESCO's readiness assessment methodology for AI and human rights, and by launching a national algorithm registry, with now more than a thousand algorithms registered. But no country can solve this alone. AI transcends borders. So must our response. As of today, right now, 21 countries have endorsed this joint statement, and we expect more in the days ahead. The text will be published after this session and remain open for further endorsements, including from non-FOC countries. Let us not stand idly by while others define the rules. Let us lead, clearly, collectively, and with conviction. Human rights must not be an afterthought in AI governance.
They must be the foundation. Thank you very much.
Zach Lampell: Thank you, Ambassador Noorman. I think you're absolutely right. Human rights need to be the foundation of AI governance, and that is precisely what TFAIR, the Task Force on AI and Human Rights within the FOC, wanted to do with this joint statement. We wanted to build on previous statements and also make sure that there is a strong foundation of governance principles, now with actionable, clear recommendations for governments, civil society, and the private sector. We're really pleased with the joint statement and again want to thank the Netherlands and Germany for their co-leadership with ICNL, and all of the TFAIR members and Freedom Online Coalition members for their support and input to the statement. We have an amazing, excellent panel today. I want to briefly introduce them and then we'll get into questions. Joining us virtually is Maria Adebahr, Cyber Ambassador of Germany, again co-chair of the Task Force on AI and Human Rights. To my right, Mr. Devine Salese Agbeti, Director General of the Cyber Security Authority of Ghana. And next to him is Dr. Erika Moret, Director with UN and international organizations at Microsoft. So the first question, and this is directed at Devine: What are, in your view, the most urgent human rights risks posed by AI that this statement addresses?
Devine Salese Agbeti: Thank you very much. Firstly, I would like to thank the government of the Netherlands and also the FOC support unit for extending the invitation to Ghana to participate in such an important conversation. In my view, the most urgent human rights risks posed by AI are the arbitrary use of AI for monitoring or surveillance, and the use of AI for disinformation and for the suppression of democratic participation, particularly when it is embedded in government structures and within law enforcement systems without any transparency and accountability. Those two are very important. When we look at these concerns, I think the broader fear is that when they are unchecked, and when they are governed by commercial interests, they erode fundamental freedoms and fundamental human rights, such as the freedom of speech and privacy. These are the key human rights concerns when it comes to artificial intelligence.
Zach Lampell: Great, thank you. And several of those are specifically addressed within the statement, including commercial interests, the environmental impact of artificial intelligence, as well as the direct threats to fundamental human rights, such as the freedom of expression, the right to privacy, freedom of association, freedom of assembly, and others. Ambassador Adebahr, thank you so much for joining us. What convinced you and the government of Germany to support this statement, and what do you hope it will achieve?
Maria Adebahr: Hey, hello everybody over there. I hope you can hear me well. Thanks for having me today. It's a wonderful occasion to present and introduce our joint statement on artificial intelligence and human rights together. Thank you for the opening remarks, and thank you, Ernst, my fellow TFAIR co-chair. I really would like to thank you all for coming and joining us in this open forum session. Having you all here is an important sign of commitment to human rights and to broad multi-stakeholder participation, and that is only becoming more important; let me explain why. Sometimes it helps to go back and ask ourselves why we do this, and what led the government of Germany to support the statement. The essence is that AI stands out as one of the most, if not the most, transformative technologies and challenges we have to confront. It already changes, and will continue to change, the way we live, work, express and inform ourselves, and the way we form our opinions and exercise our democratic rights. And as Ernst already said, in times of global uncertainty it offers a lot of promise but also a lot of risks to people on the planet. That is why we, as countries joining TFAIR and other forums, have to answer the question of what kind of digital future we want to live in. And I have to say, a human-centered world with non-negotiable respect for human rights is what we have to strive for. And this is the essence that the statement gives us. Let me quote: it is a world firmly rooted in and in compliance with international law, including international human rights law, not shaped by authoritarian interests or solely by commercial priorities. And only with wise international governance can we harness the promise that technologies such as AI offer us and hold the harms at bay.
And therefore, it is essential for us to support the statement and its principles, because we must stand with a strong focus on human rights and a commitment to a human-centric, safe, secure and trustworthy approach to technology. And as we all know, this is not yet a given anywhere in the world. So we hope to convince countries, civil society and stakeholders in every part of the world to strive for our approach. This is crucial, and this is very much the essence of what we have to do. And I'm also, as Ernst just said, very happy to have 21 states on board now. This is, I think, a majority, and it gives our position even more weight. And let me also mention that women and girls, in all their diversity, belong to the groups exposed to greater vulnerability by AI. This is a strong commitment that we clearly wanted to see in there. So let me close by saying, I'm looking forward to questions and answers, obviously, but let me close by saying that I'm also very happy to announce that Germany is able to double its funding for the Freedom Online Coalition compared to last year. We worked through our budget negotiations here in Germany, and so we were able to double the amount. This is something that makes me and my colleagues very happy, and hopefully we will make good use of our support for the Freedom Online Coalition. And on that happy note, I hand back over to you.
Zach Lampell: Thank you. Thank you, Ambassador, and thank you for doubling the funding for the Freedom Online Coalition. I can speak to this as a member of the Freedom Online Coalition's advisory network: we all believe that the FOC is a true driving force and leading vehicle to promote and protect fundamental freedoms and digital rights, and that without the FOC, the governance structures we have today would be much weaker. So we look forward to continuing to work with you, Ambassador, as well as with the government of the Netherlands through TFAIR, and with all of the 42 other member states of the Freedom Online Coalition, to continue our important work. We welcome your leadership, and as civil society, we look forward to working with you to achieve these aims for everyone. Dr. Moret, if I could turn to you. You mentioned commercial interests, and that is indeed something noted in the joint statement. What responsibility do private tech companies have to prevent AI from undermining human rights?
Erika Moret: Thank you very much. Well, thank you, first of all, to the government of the Netherlands, to the FOC, Excellencies, Ambassadors. It's a real pleasure to be here today. I'm currently working at Microsoft, but in my past life I was an academic and worked at the UN, including on various issues relating to human rights and international humanitarian law, and it's a real honor to be here. So as the tech industry representative on the panel, I'd like to address how private sector companies like Microsoft view their responsibilities in ensuring AI respects and protects human rights. We are very lucky at Microsoft to have a series of teams and experts working on these areas across the company, which fall under the newly created Trusted Technology Group; this includes the Office of Responsible AI, the Technology for Fundamental Rights Group, and the Privacy, Safety, and Regulatory Affairs Group. So I'll try to represent some of our collective work in this area. In brief, we recognize that we must be proactive and diligent at every step of AI use, from design to deployment and beyond, to prevent AI from being used to violate human rights. The first is adherence to international standards. Microsoft and many peers explicitly commit to the UN Guiding Principles on Business and Human Rights as a baseline for their conduct across global operations. For us and other companies, this involves a policy commitment to respect human rights and an ongoing human rights due diligence process that enables us to identify and remedy related human rights harms. These principles make clear that while states must protect human rights, companies have an independent responsibility to respect fundamental rights. The next step is to embed human rights from the very beginning.
We have a responsibility to integrate human rights considerations into the design and development of AI systems, paying particular attention to the areas already highlighted, such as women and girls and other particularly vulnerable groups. Performing human rights assessments and monitoring enables us to identify risks and address them. The establishment and enforcement of ethical AI principles is a third very important area. At Microsoft, we have clearly defined responsible AI principles, encompassing fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability, which guide all of our AI development, and our teams must follow our responsible AI standard, a company-wide policy that translates these principles into specific requirements. Beyond the human rights due diligence that I've already mentioned, we also work on protecting privacy and data security. Safeguarding users' data is a non-negotiable responsibility for our company and for our peers. AI often involves big data, so companies must implement privacy by design: minimizing data collection, securing data storage, and ensuring compliance with privacy laws. The sixth area I'd like to highlight here is the vital importance of fairness and inclusivity. Tech firms have the responsibility to ensure that their AI does not perpetuate bias and discrimination, and through working with partners at the FOC and across civil society, we can put in place active safeguards and ongoing work to tackle challenges in this area. Again, I'd like to highlight the important note in the FOC statement that AI harms are especially pronounced for marginalized groups. Ensuring transparency and explainability would be my seventh point of what we should be taking into consideration here, so that people understand how decisions are made and can identify potential challenges, but then also mitigation approaches.
The final area I'd like to emphasize here is the need for collaboration. We are, of course, facing a fragile moment in terms of multilateralism and geopolitical tensions around the world, and collaboration across borders and across sectors has never been more important. Engaging through multistakeholderism in AI governance and AI regulation developments is as much our responsibility as anyone else's. The more the private sector works with civil society and with academia, the more it can improve its own work in these areas and contribute as well, through things like red teaming and other types of reporting. This is a really important next step, I believe, in our collective work. Thank you very much.
Zach Lampell: Thank you, Dr. Moret. I think the holistic approach from Microsoft is a fantastic model that I hope other companies will adopt or model their systems and internal standards after, and I really do appreciate the final point on collaboration and next steps. With that, Ambassador Noorman, what are the most important next steps to ensure AI systems promote human rights and development while avoiding harm?
Ernst Noorman: Well, thank you. There are a number of things we can do as governments and as an international community, but first and foremost, I think, is the importance of good, strong regulation. We know, of course, there’s an international discussion on that. There are different views across the ocean about that, but at the same time, you create a level playing field. It’s predictability that’s important. As long as companies also know what the rules are, what the guardrails are, you protect the citizens in Europe and create trust, also in the products. So I think the EU AI Act is a good example, and we put rights, transparency, and accountability at the core. It’s risk-based, not blocking innovation, and I think that’s a crucial step, and you see that a lot of countries outside the European Union look with great interest to the EU regulations and see how they can adapt them. At the same time, we have to continue to work at the multilateral level on the discussion of how to implement and organize oversight, and to ensure a safe implementation of AI. We have the Council of Europe’s AI Framework Convention, which is the next step, just like we had before on cybercrime, the Budapest Convention. And I think it’s very important to really support the UN’s work on human rights, especially the Office of the High Commissioner for Human Rights. It’s important to keep on supporting them. They have important initiatives, such as the B-Tech project, which also promote human rights in the private sector. So these are important steps we have to continue. And as Erika from Microsoft already mentioned, the UN Guiding Principles on Business and Human Rights are an important tool for the private sector to use. And finally, as an important step too, we as governments play an important role with procurement.
So we have to use procurement as a tool to push companies to deliver products which respect human rights, which have human rights at the core of their design, and to ensure that they provide safe products to governments, which are used broadly in society. And I think that’s a very concrete step we can take as governments. Thank you very much.
Zach Lampell: Thank you, Ambassador. I think procurement is a really great final point to mention. Ambassador Adebahr, same question to you. What are the most important next steps that we can take to ensure AI systems promote human rights and development while avoiding harm?
Maria Adebahr: Thank you. Yeah, thank you for the question. Ernst just answered it, I think, so we can fully agree with what Ernst just said. It is moving forward on a lot of fronts and in different fora. That starts with the EU AI Act and its implementation and the promotion of its principles. We know certain aspects are being discussed, especially by industry and worldwide, and this is totally okay and rightfully so. But we can start and have a discussion on things like risk management and so on and so forth. But what the EU AI Act really sparks is a discussion on how we want to manage and govern ourselves with AI, with a good and human-centric AI accessible for everybody. So this is one part. As EU member states, the UNESCO recommendations of 2021 are another one, and we are also looking forward to the results of the Third Global Forum on the Ethics of AI, held in Bangkok at this very moment. We would also invite all FOC members and interested actors to join the recently launched Hamburg Declaration on Responsible AI for the SDGs. And in implementation of the Global Digital Compact, we are currently discussing a modalities resolution and modalities for implementing working groups, a panel and a worldwide dialogue on AI. This is very important to us. And one final point: AI unfortunately is also a tool for transnational repression. And transnational repression is a phenomenon, let me put it like that, that we as the German government really want to focus more on. Because it’s not a new phenomenon, it’s very, very old. But in terms of digital and AI, we are unfortunately reaching new levels here. And so this is also a subject we want to discuss more and bring forward on the international human rights agenda. Thank you.
Zach Lampell: Thank you, Ambassador. And Director Agbeti, same question to you. What are the most important next steps to ensure AI systems promote human rights and development?
Devine Salese Agbeti: Thank you. Firstly, we have to align AI with international human rights standards. In that regard, for example, the Cyber Security Authority is currently working with the Data Protection Commission in Ghana to explore redress for this. Secondly, we can look at the UN Guiding Principles on Business and Human Rights. The ICCPR can underpin national, regional and international efforts to address global AI governance in that respect. And we can also look at multi-stakeholder participation, which is foundational to this. The ecosystem must include civil society to amplify marginalized voices, technical communities to bring transparency into algorithms, academia to provide evidence-based insights, the private sector to ensure responsible innovation, as well as youth and indigenous voices to reflect the world’s diversity. I think these are the ways to do so.
Zach Lampell: Fantastic. Thank you. Thank you so much. Thank you very much again for very comprehensive ideas from both ambassadors and Director Agbeti. Now we would like to open up the floor to all of you. We have some time for questions and answers. We have a microphone over to my left, your right. We welcome questions on the joint statement and how best to promote human rights in artificial intelligence.
Audience: Hello. It works. All right. Sorry, it’s hard to handle all these devices. Hello, everyone. Thank you so much for such a great panel. And I’m super happy that the Freedom Online Coalition, together with the private sector, is coming up with some, not all, I mean, I would not call them solutions, but at least some recommendations. My name is Svetlana Zenz, and I’m working on the Asia region. Basically, my work is to build the bridge between civil society and tech and telecoms in Asia. The B-Tech project was also mentioned today, which is a great project and was one of the starting points of many other initiatives in the sector. I really hope that all of those kinds of initiatives will come together. But my question is actually to the FOC members, representatives, and to the private sector. For a start, we know that Microsoft is known as a company which works closely with many governments, because you have products which provide the operability of those governments. And in some countries, especially those with oppressive laws and oppressive regimes, it’s hard to make sure that the human rights of users are protected. So do you have any vision of how to diminish those risks? Maybe there should be more civil society action on that side, being more practical on that side. On the FOC side, we’ve been working with FOC members for several years, starting from Myanmar, where we were among the first to engage FOC members in the statement on internet shutdowns. For the FOC: a statement is great, but in a practical way, how can we make sure that human rights are protected in the physical world? Thank you.
Zach Lampell: Thank you so much, I think we’ll take one more question and then we’ll answer, and then if there’s time and additional questions, we’ll go back. So please, sir.
Audience: Thank you, good day. My name is Carlos Vera from IGF Ecuador. I read the declaration on the FOC website. It would be nice if we could have some space to comment on the declaration from outside the FOC membership, and even to sign our support for the declaration. Some governments don’t even know that the FOC exists, so maybe we can also raise some awareness in the civil society space. Thank you very much for the great work. Thank you.
Zach Lampell: Thank you so much. I think that’s a great suggestion, and we actually have a meeting with the Task Force on AI and Human Rights tomorrow. I will raise this very point and see what we can do to achieve wider adoption, especially support from civil society. So back to the first question: it’s really a question of how we, whether FOC governments, the private sector or potentially civil society, can protect human rights regarding AI systems, especially in repressive regimes. I hope I got that question correct. Okay, thank you. So who would like to start us off?
Ernst Noorman: Maybe just to kick off, though I’m sure that others also want to contribute. This is an online and offline dilemma. The fact that there are tools, of course, also by big tech companies like Microsoft, gives them a responsibility to look at the guardrails of their tools. But at the same time, platforms are being used for human rights violations and to threaten people, and it happens offline as well. But the point is, as the FOC, wherever I go as a cyber ambassador, and I’m sure that Maria is doing the same, we discuss the role of the FOC. So also with governments who are not a natural member of the FOC, we always explain what the agenda of the FOC is. Why is the FOC there? What are we doing? And why is the topic on the agenda important? You mentioned internet shutdowns. I can assure you I’ve been discussing that with many countries who use this tool politically. And inside those rooms, in closed-door sessions, you can have a more open discussion than if you do it only online with statements, et cetera. So it’s also important to raise it behind closed doors: why are you using these tools? Can’t you avoid this? Because you also harm a lot of civil services. You harm the role of journalists, which is crucial. So it’s one of the topics, just as an example, which I often mention, while also bringing up why the FOC is there. And the FOC has existed since 2011, so it’s known by many governments. But for new colleagues or new people working in governments, we always stress its important role and explain why it’s there and why it’s as important to respect human rights online as it is offline.
Zach Lampell: Thank you, Ambassador. Ambassador Adebahr, do you want to come in as well?
Maria Adebahr: Thank you so much. I can only underline what Ernst just said. It’s important, and we do our work spreading the word and having those discussions formally, but also informally in a one-on-one setting, politely and diplomatically sometimes, and at other times really straightforward and forceful. That is the way to go. And the FOC is a very, very important forum for doing this and a reference point for all of us, I think. Thank you.
Zach Lampell: Thank you, Ambassador. And maybe Dr. Moret, if I, oh, sorry. Director Agbeti, please.
Devine Salese Agbeti: All right, thank you. As much as we are advocating for FOC members to ensure human rights online, especially when it comes to AI, I think the FOC should also look at the other side. At the Cyber Security Authority, I have seen how citizens have used AI to manipulate online content, to lie about the government, to even create cryptocurrency pages in the name of the president, et cetera. So it works both ways. I think the FOC should be advocating for responsible use by citizens, and at the same time, when this is being advocated, the FOC can engage with governments to ensure that citizens actually have the right to use these systems, and to use them freely, without the fear of arrest or intimidation. Thank you.
Zach Lampell: Thank you. Dr. Moret.
Erika Moret: Thank you. It’s quite hard to build on these excellent points, so just to add a few extra points to what has already been said. I would say that from the private sector perspective, and from Microsoft’s viewpoint, we take a principled and proactive approach to this particular question, including due diligence before entering high-risk markets, guided by the UN Guiding Principles. We limit or decline services like facial recognition where misuse is likely. We also resist over-broad government data requests and publish transparency reports to hold ourselves accountable, and we offer tools like AccountGuard to protect civil society, journalists and human rights defenders from cyber threats. And we also advocate globally for responsible AI and digital use, including through very important processes like the Global Digital Compact and also, of course, in fora such as the IGF and WSIS and so on. I’d also really like to highlight here the important developments that have been going on in terms of AI and data-driven tools to protect against human rights abuses under authoritarian regimes, and many tech companies are working proactively with the human rights community on this. We personally are very actively engaged with the Office of the High Commissioner for Human Rights across numerous projects on monitoring human rights abuses and helping to detect risks, and also in areas like capacity building and AI training, in order to properly harness these tools and come up with new solutions where needs are identified.
Zach Lampell: Thank you. Great answers to what is a very tough question and a really never-ending battle to prevent abuses or misuse of AI. We have a couple of questions from our colleagues joining us online. The first is: are there any global frameworks with binding obligations on states for the responsible use and governance of AI? And the second question is: how can transparency in AI decision-making be improved without exposing sensitive data, particularly to ensure that the right to privacy under international human rights law is indeed protected? So would any of the panellists like to jump in on either of those two questions? We have about five minutes left. Please, Dr. Moret.
Erika Moret: Okay, well, maybe I’ll just kick it off by talking about the importance of the GDC that I already mentioned, the Global Digital Compact, and I’m sure everybody in the room knows about it already. It was adopted at the Summit of the Future during the last UN General Assembly, the first time every UN member state in the world came together to agree a path on AI governance, and I think it’s incredibly important. There are two new bodies being developed right now, the Dialogue and the Panel, and Microsoft has been engaged at every step of the process, sitting at the table, and we’ve been very grateful to have a voice there. And I think engaging more of the private sector, but also, of course, civil society, particularly those without the usual access to these types of processes, is incredibly vital: not just to have a seat at the table, but to actually have a voice at the table. So the more we can find inclusive, fair, transparent, participatory ways for those, particularly in the global majority, to engage meaningfully in these very important developments through this multi-stakeholder model, the better, in my view. Thanks.
Zach Lampell: Thank you. Anybody else? Ambassador?
Ernst Noorman: I would first give the floor to Maria before I take it away.
Zach Lampell: Apologies.
Maria Adebahr: Oh, that’s kind, Ernst. I would use this opportunity to again draw your attention to the Council of Europe Framework Convention on AI and Human Rights, because this really is something that was hard-fought in negotiation. It’s global, and we would strive for more member states to join. And it’s open for all; it’s a globally open convention. You don’t have to be a member of the Council of Europe. So please have a look, or tell your state representatives to have a look and join. You can always approach, I think, any of the EU member states for more information. The second internationally binding, or hopefully really to-be-implemented, instrument is the Global Digital Compact, already mentioned. And it is important, I think, because by our count, we came to the conclusion that this is really the only truly global forum to discuss AI. If we didn’t do it there, then more than 100 states worldwide would not be present at any important table to discuss AI governance, because those states are probably not members of the G7, G20, UNESCO, or OSCE, or not able, in terms of resources, to join those discussions in depth. So this makes the Global Digital Compact even more important. The EU AI Act has, by its nature, aspects of AI governance and principles inherent in it. So this would be the third framework I would like to mention here. Thank you.
Ernst Noorman: And now I can just add a few points to what Maria said. What the EU AI Act can have is what we call the Brussels effect. Maria already mentioned the Council of Europe’s AI Framework Convention. If we were to strive for a binding framework within the whole UN, it would be very difficult. But look at these smaller coalitions: I mentioned the Budapest Convention on Cybercrime, and what we have seen, just during the negotiations on the UN Cybercrime Treaty, is that more and more members from other regions decided to join the Budapest Convention. From the Pacific, from Africa, from other regions, they decided: well, we want to be part of the Budapest Convention, which is very effective, very concrete cooperation on this topic. So I think that’s also a good example of how we can work in smaller coalitions, with an oil-spill effect, to spread good, strong legislation to other countries as well.
Devine Salese Agbeti: Thank you. Excellent points have been made by everyone here so far. I would just like to add the Palma process, which promotes the responsible use of these emerging technologies, including artificial intelligence, and places requirements on its members. So we encourage other states to sign up to it as well, and member states to implement it, so that we can all encourage responsible use of these technologies. Thank you.
Zach Lampell: Thank you. There are some very important, very significant binding governance mechanisms, like the Convention on AI from the Council of Europe, and that really can mimic the Budapest Convention, which has become the leading authority for combating and preventing cybercrimes. So this is a bit of a call to action from civil society. Let us, the FOC member states and FOC advisory network, help you inform your governments on these processes, and let us be able to help you advocate for them to adopt, sign and enact them. So it’s been a fantastic panel so far. Ambassador Noorman, please, some closing remarks from you.
Ernst Noorman: Thank you very much, Zach, for moderating this panel. First of all, the statement is online right now, so go to the Freedom Online Coalition website, copy the statement, put it on social media, and spread the word. That’s already important. And I would really like to thank everyone who was involved in drafting the statement, both the members of the Freedom Online Coalition and the advisory network, who played an extremely important and meaningful role in strengthening the statement. We had in-person meetings, one of the first ones at RightsCon in February this year, and we had a number of online consultations, so a lot of work has been put into drafting and strengthening the statement. I would also really like to thank the countries who have already decided to sign on to the statement, and I’m confident that many more will follow in the days and weeks to come. And finally, on behalf, I think, of Maria and her team, Zach and your team, and my team from the Netherlands, I really would like to thank all of you, first of all for being present, and those who were involved in drafting the statement, for your dedication, your work, and your shared purpose on this important topic. Thank you very much. Thank you. Thank you, everyone.
Rasmus Lumi
Speech speed
126 words per minute
Speech length
187 words
Speech time
88 seconds
Need for human-centric AI development that puts humans at the center of technological advancement
Explanation
Lumi argues that AI development must prioritize human welfare and rights rather than being driven purely by technological capabilities. He emphasizes the importance of ensuring humans remain central to AI development processes and outcomes.
Evidence
References the joint statement developed under Netherlands leadership and mentions concerns about AI potentially deceiving humans about believing in human rights
Major discussion point
AI Governance and Human Rights Framework
Topics
Human rights | Development
Agreed with
– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti
Agreed on
Human rights must be foundational to AI governance
Ernst Noorman
Speech speed
137 words per minute
Speech length
1722 words
Speech time
753 seconds
Human rights must be the foundation of AI governance, not an afterthought
Explanation
Noorman contends that human rights considerations should be built into the fundamental structure of AI governance systems from the beginning. He argues against treating human rights as secondary concerns that are addressed only after AI systems are developed and deployed.
Evidence
Cites Netherlands’ experience with biased automated welfare systems that led to human rights failures, requiring years to correct personal harm caused
Major discussion point
AI Governance and Human Rights Framework
Topics
Human rights | Legal and regulatory
Agreed with
– Rasmus Lumi
– Maria Adebahr
– Devine Salese Agbeti
Agreed on
Human rights must be foundational to AI governance
AI governance requires principled vision grounded in human rights and shaped through inclusive multi-stakeholder processes
Explanation
Noorman advocates for AI governance that is based on clear principles rooted in human rights law and developed through processes that include all relevant stakeholders. He emphasizes the need for inclusive participation in shaping AI governance frameworks.
Evidence
Points to the FOC joint statement as example of principled approach involving governments, civil society, private sector, and experts
Major discussion point
AI Governance and Human Rights Framework
Topics
Human rights | Legal and regulatory
Agreed with
– Devine Salese Agbeti
– Erika Moret
Agreed on
Multi-stakeholder approach is essential for AI governance
AI is used to repress dissent, distort public discourse, and facilitate gender-based violence
Explanation
Noorman identifies specific ways AI systems are being misused to harm democratic processes and individual rights. He highlights how AI tools are being weaponized against vulnerable populations and democratic institutions.
Evidence
Notes that these practices are becoming embedded in state systems with few checks and less transparency
Major discussion point
Urgent AI Human Rights Risks
Topics
Human rights | Cybersecurity
Agreed with
– Maria Adebahr
– Devine Salese Agbeti
Agreed on
AI poses urgent risks to democratic processes and human rights
Concentration of power in few private actors threatens democratic resilience and public trust
Explanation
Noorman warns that having AI development and deployment controlled by a small number of private companies creates risks for democratic governance. He argues this concentration undermines both public confidence and democratic stability.
Evidence
Points to how handful of private actors shape what people see, influence democratic debate, and dominate key markets without meaningful oversight
Major discussion point
Urgent AI Human Rights Risks
Topics
Human rights | Economic
Strong regulation like EU AI Act creates level playing field and predictability while protecting citizens
Explanation
Noorman argues that comprehensive regulation provides clear rules for companies while protecting citizens and building trust in AI systems. He contends that good regulation enables rather than hinders innovation by providing certainty.
Evidence
Cites EU AI Act as example that puts rights, transparency, and accountability at core while being risk-based and not blocking innovation
Major discussion point
Implementation and Next Steps
Topics
Legal and regulatory | Human rights
Agreed with
– Maria Adebahr
– Devine Salese Agbeti
– Erika Moret
Agreed on
International frameworks and standards are crucial for AI governance
Government procurement should be used as tool to force companies to deliver human rights-respecting products
Explanation
Noorman proposes that governments can leverage their purchasing power to incentivize companies to develop AI systems that respect human rights. He sees procurement as a concrete mechanism for promoting responsible AI development.
Evidence
Suggests governments should use procurement to ensure companies provide safe products that have human rights as core design principle
Major discussion point
Implementation and Next Steps
Topics
Legal and regulatory | Economic
FOC members should engage governments through formal and informal diplomatic discussions
Explanation
Noorman advocates for direct diplomatic engagement with both FOC and non-FOC countries to promote human rights in AI governance. He emphasizes the importance of both public statements and private diplomatic conversations.
Evidence
Mentions discussing internet shutdowns with countries that use this tool politically, explaining FOC’s role and importance of respecting human rights online
Major discussion point
Implementation and Next Steps
Topics
Human rights | Legal and regulatory
21 countries have endorsed the joint statement with more expected
Explanation
Noorman reports on the current level of support for the FOC joint statement on AI and human rights, indicating growing international consensus. He expresses confidence that additional countries will join the initiative.
Evidence
States that 21 countries have endorsed as of the session date, with text to be published and remain open for further endorsements including from non-FOC countries
Major discussion point
FOC Joint Statement Impact
Topics
Human rights | Legal and regulatory
EU AI Act can have Brussels effect influencing global standards
Explanation
Noorman suggests that the EU’s comprehensive AI regulation can influence global AI governance standards beyond Europe’s borders. He draws parallels to how EU regulations often become de facto global standards.
Evidence
Notes that many countries outside the European Union look with great interest to EU regulations and see how they can adapt them
Major discussion point
Global Frameworks and Binding Obligations
Topics
Legal and regulatory | Human rights
Smaller coalitions like Budapest Convention can achieve oil spill effect for broader adoption
Explanation
Noorman argues that focused coalitions of committed countries can create momentum that eventually leads to broader global adoption of standards. He uses the cybercrime convention as a model for how this approach can work.
Evidence
Cites Budapest Convention on Cybercrime where during UN Cybercrime Treaty negotiations, more members from Pacific, Africa and other regions decided to join the effective convention
Major discussion point
Global Frameworks and Binding Obligations
Topics
Legal and regulatory | Cybersecurity
Maria Adebahr
Speech speed
131 words per minute
Speech length
1242 words
Speech time
566 seconds
Germany supports human-centered world with non-negotiable respect for human rights over authoritarian or commercial interests
Explanation
Adebahr articulates Germany’s position that AI governance must prioritize human rights above both authoritarian control and purely commercial considerations. She emphasizes that respect for human rights should be non-negotiable in AI development.
Evidence
Quotes the statement describing a world ‘firmly rooted in and in compliance with international law, including international human rights law, not shaped by authoritarian interests or solely by commercial priorities’
Major discussion point
AI Governance and Human Rights Framework
Topics
Human rights | Legal and regulatory
Agreed with
– Rasmus Lumi
– Ernst Noorman
– Devine Salese Agbeti
Agreed on
Human rights must be foundational to AI governance
AI can be a tool for transnational repression reaching new levels through digital means
Explanation
Adebahr warns that AI technologies are being used to extend repressive practices across borders in unprecedented ways. She identifies transnational repression as an emerging threat that requires international attention and response.
Evidence
Notes that while transnational repression is not new, digital and AI tools are enabling it to reach new levels
Major discussion point
Urgent AI Human Rights Risks
Topics
Human rights | Cybersecurity
Agreed with
– Ernst Noorman
– Devine Salese Agbeti
Agreed on
AI poses urgent risks to democratic processes and human rights
Need to promote EU AI Act principles and UNESCO recommendations globally
Explanation
Adebahr advocates for spreading European and international AI governance standards to other regions and countries. She sees these frameworks as models that should be adopted more widely to ensure consistent human rights protections.
Evidence
References EU AI Act implementation, UNESCO recommendations of 2021, Hamburg Declaration on Responsible AI for SDGs, and Global Digital Compact implementation
Major discussion point
Implementation and Next Steps
Topics
Legal and regulatory | Human rights
Agreed with
– Ernst Noorman
– Devine Salese Agbeti
– Erika Moret
Agreed on
International frameworks and standards are crucial for AI governance
Germany doubled funding for Freedom Online Coalition to support this important work
Explanation
Adebahr announces Germany’s increased financial commitment to the FOC, demonstrating concrete support for digital rights advocacy. This represents a significant increase in resources for promoting human rights in digital spaces.
Evidence
States Germany was able to double funding amount compared to previous year through budget negotiations
Major discussion point
FOC Joint Statement Impact
Topics
Human rights | Development
Council of Europe Framework Convention on AI provides globally open binding framework
Explanation
Adebahr promotes the Council of Europe’s AI convention as an important binding international agreement that is open to countries beyond Europe. She emphasizes its global accessibility and legal force.
Evidence
Notes the convention is globally open and countries don’t have to be Council of Europe members to join, encouraging EU member states to provide information to interested countries
Major discussion point
Global Frameworks and Binding Obligations
Topics
Legal and regulatory | Human rights
Devine Salese Agbeti
Speech speed
100 words per minute
Speech length
512 words
Speech time
304 seconds
Most urgent risks are arbitrary AI surveillance and use for disinformation to suppress democratic participation
Explanation
Agbeti identifies surveillance and disinformation as the most critical threats posed by AI to human rights and democratic processes. He emphasizes particular concern when these tools are embedded in government structures without transparency or accountability.
Evidence
Specifically mentions concerns about AI embedded in government structure and law enforcement systems without transparency and accountability, and fears about erosion of fundamental freedoms like speech and privacy
Major discussion point
Urgent AI Human Rights Risks
Topics
Human rights | Cybersecurity
Agreed with
– Ernst Noorman
– Maria Adebahr
Agreed on
AI poses urgent risks to democratic processes and human rights
AI systems must align with international human rights standards including ICCPR and UN guiding principles
Explanation
Agbeti argues for grounding AI governance in established international human rights law and frameworks. He sees existing international legal instruments as the foundation for AI governance approaches.
Evidence
References UN guiding principles on business and human rights and ICCPR as frameworks that can underpin national, regional and international AI governance efforts
Major discussion point
AI Governance and Human Rights Framework
Topics
Human rights | Legal and regulatory
Agreed with
– Ernst Noorman
– Maria Adebahr
– Erika Moret
Agreed on
International frameworks and standards are crucial for AI governance
Multi-stakeholder participation must include civil society, technical communities, academia, private sector and marginalized voices
Explanation
Agbeti advocates for inclusive governance processes that bring together diverse perspectives and expertise. He emphasizes the importance of including marginalized communities and various professional sectors in AI governance discussions.
Evidence
Specifically mentions need for civil society to amplify marginal voices, technical communities for algorithm transparency, academia for evidence-based insights, private sector for responsible innovation, and youth and indigenous voices for diversity
Major discussion point
Implementation and Next Steps
Topics
Human rights | Sociocultural
Agreed with
– Ernst Noorman
– Erika Moret
Agreed on
Multi-stakeholder approach is essential for AI governance
Citizens misuse AI to manipulate content and create fraudulent materials, requiring responsible use advocacy
Explanation
Agbeti points out that AI misuse is not limited to governments and corporations, but also includes individual citizens creating harmful content. He argues for education and advocacy around responsible AI use by all users.
Evidence
Cites examples from Ghana where citizens used AI to manipulate online content, spread false claims about the government, and create cryptocurrency pages in the president’s name
Major discussion point
Urgent AI Human Rights Risks
Topics
Human rights | Cybersecurity
Palma process promotes responsible use of emerging technologies including AI
Explanation
Agbeti promotes the Palma process as an important framework for encouraging responsible development and deployment of AI and other emerging technologies. He encourages broader participation in this initiative.
Evidence
Notes that the process requires member implementation and encourages other states to sign up
Major discussion point
Global Frameworks and Binding Obligations
Topics
Legal and regulatory | Human rights
Erika Moret
Speech speed
142 words per minute
Speech length
1165 words
Speech time
489 seconds
Companies must be proactive at every step from design to deployment to prevent human rights violations
Explanation
Moret argues that private sector companies have a responsibility to actively prevent AI systems from being used to violate human rights throughout the entire AI lifecycle. She emphasizes that this requires continuous vigilance and proactive measures rather than reactive responses.
Evidence
References Microsoft’s Trusted Technology Group including Office of Responsible AI, Technology for Fundamental Rights Group, and Privacy, Safety, and Regulatory Affairs Group
Major discussion point
Private Sector Responsibilities
Topics
Human rights | Economic
Tech firms should adhere to UN guiding principles on business and human rights as baseline conduct
Explanation
Moret advocates for using established international frameworks as the minimum standard for corporate behavior in AI development. She argues that companies should explicitly commit to these principles as foundational to their operations.
Evidence
Notes Microsoft and many peers explicitly commit to UN guiding principles, involving policy commitment to respect human rights and ongoing due diligence processes
Major discussion point
Private Sector Responsibilities
Topics
Human rights | Economic
Companies need to embed human rights considerations from the beginning and perform ongoing assessments
Explanation
Moret argues for integrating human rights analysis into the fundamental design and development processes of AI systems rather than treating it as an add-on. She emphasizes the need for continuous monitoring and assessment throughout the AI lifecycle.
Evidence
Mentions performing human rights assessments and monitoring to identify risks, with particular attention to vulnerable groups like women and girls
Major discussion point
Private Sector Responsibilities
Topics
Human rights | Economic
Private sector must ensure fairness, transparency, accountability and protect privacy through design
Explanation
Moret outlines specific technical and procedural requirements that companies should implement to protect human rights. She argues for building these protections into the fundamental architecture of AI systems rather than adding them later.
Evidence
References Microsoft’s responsible AI principles including fairness, reliability, safety, privacy, security, inclusiveness, transparency and accountability, plus company-wide responsible AI standards
Major discussion point
Private Sector Responsibilities
Topics
Human rights | Privacy and data protection
Collaboration across sectors through multistakeholder engagement is an essential responsibility
Explanation
Moret argues that private companies have a duty to engage with other sectors including government, civil society, and academia to improve AI governance. She sees this collaboration as crucial for addressing the complex challenges posed by AI systems.
Evidence
Mentions importance of private sector working with civil society and academia, contributing through red teaming and reporting, especially given fragile multilateralism and geopolitical tensions
Major discussion point
Private Sector Responsibilities
Topics
Human rights | Economic
Agreed with
– Ernst Noorman
– Devine Salese Agbeti
Agreed on
Multi-stakeholder approach is essential for AI governance
Global Digital Compact represents first time all UN member states agreed on AI governance path
Explanation
Moret highlights the historic significance of the Global Digital Compact as the first universal agreement on AI governance among all UN member states. She emphasizes its importance for creating inclusive global AI governance.
Evidence
Notes it was launched at UN General Assembly with new Dialogue and Panel bodies being developed, and Microsoft’s engagement throughout the process
Major discussion point
Global Frameworks and Binding Obligations
Topics
Legal and regulatory | Human rights
Agreed with
– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti
Agreed on
International frameworks and standards are crucial for AI governance
Zach Lampell
Speech speed
136 words per minute
Speech length
1194 words
Speech time
523 seconds
Statement provides actionable recommendations for governments, civil society, and private sector
Explanation
Lampell emphasizes that the FOC joint statement goes beyond general principles to provide specific, implementable guidance for different stakeholder groups. He highlights the practical nature of the recommendations as a key strength of the document.
Evidence
Notes the statement builds on previous statements and provides clear recommendations with strong foundation for governance principles
Major discussion point
FOC Joint Statement Impact
Topics
Human rights | Legal and regulatory
Statement addresses commercial interests, environmental impact, and threats to fundamental freedoms
Explanation
Lampell outlines the comprehensive scope of issues covered in the joint statement, showing how it addresses multiple dimensions of AI’s impact on society. He emphasizes that the statement takes a holistic approach to AI governance challenges.
Evidence
Specifically mentions threats to freedom of expression, right to privacy, freedom of association, and freedom of assembly
Major discussion point
FOC Joint Statement Impact
Topics
Human rights | Development
Audience
Speech speed
126 words per minute
Speech length
384 words
Speech time
182 seconds
Need for wider adoption and civil society support beyond FOC members
Explanation
An audience member suggests that the FOC joint statement should be opened for broader endorsement and support, including from civil society organizations and non-FOC countries. They argue for creating mechanisms to allow wider participation in supporting the statement’s principles.
Evidence
Notes that some governments don’t know FOC exists and suggests creating awareness in civil society space, plus allowing comments and signatures of support for the declaration
Major discussion point
FOC Joint Statement Impact
Topics
Human rights | Sociocultural
Agreements
Agreement points
Human rights must be foundational to AI governance
Speakers
– Rasmus Lumi
– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti
Arguments
Need for human-centric AI development that puts humans at the center of technological advancement
Human rights must be the foundation of AI governance, not an afterthought
Germany supports human-centered world with non-negotiable respect for human rights over authoritarian or commercial interests
AI systems must align with international human rights standards including ICCPR and UN guiding principles
Summary
All government representatives agree that human rights should be the central organizing principle for AI governance, not treated as secondary considerations. They emphasize putting humans at the center of AI development and ensuring compliance with international human rights law.
Topics
Human rights | Legal and regulatory
Multi-stakeholder approach is essential for AI governance
Speakers
– Ernst Noorman
– Devine Salese Agbeti
– Erika Moret
Arguments
AI governance requires principled vision grounded in human rights and shaped through inclusive multi-stakeholder processes
Multi-stakeholder participation must include civil society, technical communities, academia, private sector and marginalized voices
Collaboration across sectors through multistakeholder engagement is essential responsibility
Summary
Speakers agree that effective AI governance requires inclusive participation from governments, civil society, private sector, academia, and marginalized communities. They emphasize that no single sector can address AI challenges alone.
Topics
Human rights | Sociocultural
International frameworks and standards are crucial for AI governance
Speakers
– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti
– Erika Moret
Arguments
Strong regulation like EU AI Act creates level playing field and predictability while protecting citizens
Need to promote EU AI Act principles and UNESCO recommendations globally
AI systems must align with international human rights standards including ICCPR and UN guiding principles
Global Digital Compact represents first time all UN member states agreed on AI governance path
Summary
All speakers support the development and implementation of international frameworks for AI governance, including the EU AI Act, Council of Europe Convention, UN frameworks, and other binding international agreements.
Topics
Legal and regulatory | Human rights
AI poses urgent risks to democratic processes and human rights
Speakers
– Ernst Noorman
– Maria Adebahr
– Devine Salese Agbeti
Arguments
AI is used to repress dissent, distort public discourse, and facilitate gender-based violence
AI can be a tool for transnational repression reaching new levels through digital means
Most urgent risks are arbitrary AI surveillance and use for disinformation to suppress democratic participation
Summary
Government representatives agree that AI systems are being actively misused to undermine democratic institutions, suppress dissent, and violate human rights, particularly through surveillance and disinformation campaigns.
Topics
Human rights | Cybersecurity
Similar viewpoints
Both European representatives see EU AI regulation as a model that should influence global AI governance standards and be promoted internationally.
Speakers
– Ernst Noorman
– Maria Adebahr
Arguments
EU AI Act can have Brussels effect influencing global standards
Need to promote EU AI Act principles and UNESCO recommendations globally
Topics
Legal and regulatory | Human rights
Both emphasize the importance of embedding human rights considerations into AI systems from the design phase and using economic incentives to promote responsible AI development.
Speakers
– Ernst Noorman
– Erika Moret
Arguments
Government procurement should be used as a tool to force companies to deliver human rights-respecting products
Companies must be proactive at every step from design to deployment to prevent human rights violations
Topics
Human rights | Economic
Both promote specific international frameworks that are open to global participation and provide binding commitments for responsible AI governance.
Speakers
– Maria Adebahr
– Devine Salese Agbeti
Arguments
Council of Europe Framework Convention on AI provides globally open binding framework
Palma process promotes responsible use of emerging technologies including AI
Topics
Legal and regulatory | Human rights
Unexpected consensus
Private sector proactive responsibility for human rights
Speakers
– Ernst Noorman
– Erika Moret
Arguments
Government procurement should be used as a tool to force companies to deliver human rights-respecting products
Companies must be proactive at every step from design to deployment to prevent human rights violations
Explanation
There is unexpected alignment between government and private sector perspectives on corporate responsibility, with both agreeing that companies should proactively embed human rights protections rather than waiting for government mandates.
Topics
Human rights | Economic
Citizens’ role in AI misuse
Speakers
– Devine Salese Agbeti
– Ernst Noorman
Arguments
Citizens misuse AI to manipulate content and create fraudulent materials, requiring responsible use advocacy
FOC members should engage governments through formal and informal diplomatic discussions
Explanation
There is consensus that AI governance challenges come not only from governments and corporations but also from individual citizens, requiring education and advocacy for responsible use by all stakeholders.
Topics
Human rights | Cybersecurity
Overall assessment
Summary
There is strong consensus among all speakers on core principles: human rights as foundation of AI governance, need for multi-stakeholder approaches, importance of international frameworks, and recognition of urgent AI-related threats to democracy and human rights.
Consensus level
High level of consensus with remarkable alignment across government, private sector, and civil society representatives. This suggests strong potential for coordinated international action on AI governance, with the FOC joint statement serving as a foundation for broader cooperation. The consensus spans both principles and practical implementation approaches, indicating mature understanding of AI governance challenges.
Differences
Different viewpoints
Unexpected differences
Scope of AI misuse concerns
Speakers
– Devine Salese Agbeti
– Other speakers
Arguments
Citizens misuse AI to manipulate content and create fraudulent materials, requiring responsible use advocacy
Explanation
While most speakers focused on government and corporate misuse of AI, Agbeti uniquely highlighted citizen misuse as a significant concern, arguing that the FOC should advocate for responsible use by individuals rather than focusing only on state and corporate actors. This represents an unexpected broadening of the discussion beyond the typical focus on institutional actors.
Topics
Human rights | Cybersecurity
Overall assessment
Summary
The discussion showed remarkably high consensus among speakers on fundamental principles and goals for AI governance, with only minor differences in emphasis and approach rather than substantive disagreements
Disagreement level
Very low disagreement level. The speakers demonstrated strong alignment on core issues including the need for human rights-centered AI governance, multi-stakeholder approaches, and international cooperation. The few differences that emerged were primarily about tactical approaches (domestic vs. global focus, government vs. citizen responsibility) rather than fundamental disagreements about principles or goals. This high level of consensus likely reflects the collaborative nature of developing the FOC joint statement and suggests strong potential for coordinated action on AI governance among these stakeholders.
Partial agreements
Takeaways
Key takeaways
Human rights must be the foundation of AI governance, not an afterthought, requiring a human-centric approach that puts people at the center of technological development
The most urgent AI human rights risks include arbitrary surveillance, disinformation campaigns, suppression of democratic participation, and the concentration of power in few private actors
Private sector companies have independent responsibility to respect human rights through proactive measures from design to deployment, including adherence to UN guiding principles on business and human rights
Multi-stakeholder collaboration involving governments, civil society, private sector, academia, and marginalized communities is essential for effective AI governance
The FOC Joint Statement on AI and Human Rights 2025 has been endorsed by 21 countries and provides actionable recommendations for all stakeholders
Strong regulatory frameworks like the EU AI Act, Council of Europe AI Framework Convention, and Global Digital Compact provide important binding and non-binding governance mechanisms
Government procurement can be used as a powerful tool to ensure AI systems respect human rights by requiring companies to deliver compliant products
Resolutions and action items
The FOC Joint Statement on AI and Human Rights 2025 is now published online and remains open for additional endorsements from both FOC and non-FOC countries
Germany announced doubling its funding for the Freedom Online Coalition to support AI and human rights work
Task Force on AI and Human Rights will meet to discuss creating space for civil society comments and support for the declaration beyond FOC members
FOC members committed to continue diplomatic engagement with non-member governments through formal and informal discussions to promote human rights in AI
Participants agreed to spread awareness of the statement through social media and other channels
Promotion of existing binding frameworks like the Council of Europe AI Framework Convention and Global Digital Compact implementation
Unresolved issues
How to effectively protect human rights in AI systems operating under repressive regimes while companies maintain government contracts
Balancing transparency in AI decision-making with protection of sensitive data and privacy rights
Addressing the dual challenge of preventing both government misuse of AI and citizen misuse of AI for fraudulent purposes
Ensuring meaningful participation of Global South countries and marginalized communities in AI governance processes
Creating truly global binding frameworks for AI governance given the difficulty of achieving consensus among all UN member states
Determining optimal mechanisms for civil society engagement and support for AI governance initiatives beyond government-led processes
Suggested compromises
Using smaller coalitions and regional frameworks (like Council of Europe Convention) to achieve ‘Brussels effect’ or ‘oil spill effect’ for broader global adoption rather than waiting for universal UN consensus
Leveraging government procurement policies as a practical middle-ground approach to enforce human rights standards without requiring new legislation
Balancing innovation promotion with rights protection through risk-based regulatory approaches that don’t block technological advancement
Combining formal diplomatic engagement with informal discussions to address AI human rights concerns with non-compliant governments
Using existing frameworks like UN guiding principles on business and human rights as baseline standards while developing more specific AI governance mechanisms
Thought provoking comments
When I read through the notes that it offered me, it did say all the right things, which is totally understandable. The question is, did it do it on purpose, maybe, maliciously, trying to deceive us into thinking that AI also believes in human rights?
Speaker
Rasmus Lumi
Reason
This opening comment immediately established a critical and philosophical tone by questioning AI’s apparent alignment with human values. It introduced the concept of AI deception and whether AI systems might manipulate human perception of their intentions, which is a sophisticated concern beyond basic functionality issues.
Impact
This comment set the stage for deeper philosophical discussions throughout the session about AI’s relationship with human values and the need for human-centered governance. It moved the conversation beyond technical implementation to fundamental questions about AI’s nature and trustworthiness.
Innovation without trust is short-lived. Respect for rights is not a constraint, it’s a condition for sustainable, inclusive progress.
Speaker
Ernst Noorman
Reason
This reframes the common narrative that human rights protections hinder technological innovation. Instead, it positions human rights as essential infrastructure for sustainable technological development, challenging the false dichotomy between innovation and rights protection.
Impact
This comment shifted the discussion from defensive justifications of human rights to a proactive business case for rights-based AI development. It influenced subsequent speakers to discuss practical implementation rather than theoretical benefits.
In the Netherlands, we have learned this the hard way. The use of strongly biased automated systems in welfare administration, designed to combat fraud, has led to one of our most painful domestic human rights failures.
Speaker
Ernst Noorman
Reason
This vulnerable admission of failure from a leading democratic nation provided concrete evidence of how AI systems can cause real harm even with good intentions. It demonstrated intellectual honesty and showed that human rights violations through AI are not just theoretical concerns or problems of authoritarian regimes.
Impact
This personal example grounded the entire discussion in reality and gave credibility to the urgency of the human rights framework. It influenced other speakers to focus on practical safeguards and accountability mechanisms rather than abstract principles.
AI unfortunately is also a tool for transnational repression. And transnational repression is a phenomenon… that we as German governments really want to focus more on. Because it’s a fairly not a new phenomenon, it’s very, very old. But in terms of digital and AI, we are reaching here new levels, unfortunately.
Speaker
Maria Adebahr
Reason
This comment introduced a geopolitical dimension that expanded the scope beyond domestic AI governance to international security concerns. It highlighted how AI amplifies existing authoritarian tactics across borders, making it a global security issue requiring international coordination.
Impact
This broadened the conversation from individual rights protection to collective security, influencing the discussion toward international cooperation mechanisms and the need for coordinated responses to AI-enabled authoritarianism.
I have seen how citizens have used AI to manipulate online content, to lie against government, to even create cryptocurrency pages in the name of the president, etc. So it works both ways. I think FOC should be advocating for responsible use of citizens, and at the same time when this is being advocated, then FOC can engage with government also to ensure that the citizens actually have the right to use these systems and use it freely without the fear of arrest or the fear of intimidation.
Speaker
Devine Salese Agbeti
Reason
This comment introduced crucial nuance by acknowledging that AI misuse is not unidirectional – citizens can also misuse AI against governments. It challenged the implicit assumption that only governments and corporations pose AI-related threats, while maintaining the importance of protecting legitimate citizen rights.
Impact
This balanced perspective shifted the conversation toward more sophisticated governance approaches that address multiple threat vectors while preserving democratic freedoms. It influenced the discussion to consider comprehensive frameworks rather than one-sided protections.
We have to use procurement as a tool also to force companies to deliver products which are respecting human rights, which have human rights as a core in their design of their products, and ensure that they provide safe products to the governments which are used broadly in the society.
Speaker
Ernst Noorman
Reason
This identified government procurement as a powerful but underutilized lever for enforcing human rights standards in AI development. It provided a concrete, actionable mechanism that governments can implement immediately without waiting for comprehensive international agreements.
Impact
This practical suggestion energized the discussion around immediate actionable steps, moving from abstract principles to concrete implementation strategies that other speakers could build upon.
Overall assessment
These key comments fundamentally shaped the discussion by establishing it as a sophisticated, multi-dimensional conversation rather than a simple advocacy session. Lumi’s opening philosophical challenge set an intellectually rigorous tone, while Noorman’s admission of Dutch failures provided credibility and urgency. The comments collectively moved the discussion through several important transitions: from theoretical to practical (through concrete examples), from defensive to proactive (reframing rights as enabling innovation), from domestic to international (through transnational repression), and from one-sided to nuanced (acknowledging citizen misuse). These interventions prevented the session from becoming a simple endorsement of the joint statement and instead created a substantive dialogue about the complex realities of AI governance, ultimately strengthening the case for the human rights framework by acknowledging and addressing its challenges.
Follow-up questions
How can transparency in AI decision-making be improved without exposing sensitive data, particularly to ensure that the right to privacy is protected under international human rights law?
Speaker
Online participant
Explanation
This question addresses the critical balance between AI transparency requirements and privacy protection, which is fundamental to human rights compliance in AI systems
How can civil society organizations and non-FOC members provide comments and support for the FOC AI and Human Rights declaration?
Speaker
Carlos Vera from IGF Ecuador
Explanation
This highlights the need for broader participation and engagement mechanisms beyond FOC members, including creating awareness about FOC’s existence among governments and civil society
How can risks be diminished when tech companies like Microsoft work with oppressive governments, and what practical actions can civil society take?
Speaker
Svetlana Zenz
Explanation
This addresses the practical challenges of protecting human rights when technology companies operate in countries with repressive regimes and the role of civil society in mitigation
How can the FOC engage with governments to ensure citizens have the right to use AI systems freely while also advocating for responsible citizen use of AI?
Speaker
Devine Salese Agbeti
Explanation
This explores the dual challenge of preventing both government overreach and citizen misuse of AI technologies, requiring balanced advocacy approaches
What are the global frameworks with binding obligations on states for the responsible use and governance of AI?
Speaker
Online participant
Explanation
This seeks to identify existing legally binding international mechanisms for AI governance, which is crucial for understanding the current regulatory landscape
How can the ‘Brussels effect’ of the EU AI Act be leveraged to influence global AI governance standards?
Speaker
Ernst Noorman (implied)
Explanation
This explores how regional regulations like the EU AI Act can create spillover effects to influence global standards, similar to the Budapest Convention’s expansion
How can transnational repression through AI tools be better addressed on the international human rights agenda?
Speaker
Maria Adebahr
Explanation
This identifies the need for focused attention on how AI is being used as a tool for transnational repression, which represents a new dimension of human rights violations
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.