AI That Empowers Safety, Growth and Social Inclusion in Action

20 Feb 2026 10:00h - 11:00h

AI That Empowers Safety, Growth and Social Inclusion in Action

Session at a glance
Summary, keypoints, and speakers overview

Summary

The session opened with Peggy emphasizing that responsible AI must address day-to-day challenges and deliver practical safeguards for all people, not only those in advanced economies, through global standards, public-private collaboration and rights-based approaches [1-9]. She underscored corporate duties to respect human rights and the need for human-rights due diligence, while urging governments to create a level playing field and reward firms that act responsibly [10-13].


Tim introduced UNESCO’s stance that trust is earned through design, safeguards and accountability, citing the UNESCO ethics recommendation and the RAMS readiness assessments now used in more than 80 countries, including a recent India report [32-34]. He announced a UNESCO-LG AI Research MOOC on “ethics by design” that will provide practical tools for developers worldwide [37-44].


Rein Tammsaar explained the UN Global Dialogue on AI Governance, mandated by a General Assembly resolution, and outlined its four member-state priorities: trustworthy AI, closing capacity gaps, cross-border governance, and anchoring AI in human rights and international law [60-74]. He noted that standards translate principles into actionable risk-management tools for companies and regulators [78-79].


Ankit Bose described NASSCOM’s four-decade mission to build capacity, create open assets and guide startups, SMEs and large firms toward responsible AI, pointing out that startups often deprioritise governance amid resource constraints [94-124]. Alex Walden detailed Google’s internal framework, from corporate values and UN guiding principles to AI principles, model-level requirements, application-level guardrails, executive review and post-launch monitoring [130-162]. Hector Duroir outlined Microsoft’s Responsible AI office, its Sensitive Use Case program, board-level oversight and reliance on OECD and UNESCO guidelines, while highlighting recent Indian voluntary commitments on multilingual safety [170-199]. Yuchil Kim explained LG’s contribution to the UNESCO MOOC, its annual AI ethics accountability report and a proprietary AI-powered data-compliance system aimed at transparent, inclusive AI [209-213].


Namit Agarwal presented the World Benchmarking Alliance’s assessment of 2,000 tech firms, revealing that only about 10 % meet global governance expectations and none disclose human-rights impact assessments, and called for board-level AI oversight, product-level implementation and robust impact assessments [224-240]. Panelists agreed that siloed frameworks hinder implementation and advocated programmatic stakeholder engagement, trusted-tester programmes and open-source initiatives such as Google’s Amplify to bring civil-society and academic input into product development [302-310].


In closing, Peggy summarized that while incentives, standards and multi-stakeholder collaboration are emerging, concrete action is required to turn good intentions into trustworthy AI that respects human dignity worldwide [350-363].


Keypoints

Major discussion points


Global norms and multilateral governance are essential for responsible AI.


The opening remarks stress the need for “global standards, collaborative public-private solutions, and rights-based approaches” and for “responsible and effective AI governance and clarity of rules” to make AI work for all people [2-5][9-12]. The UN-led Global Dialogue on AI Governance further outlines member-state priorities – trustworthy AI, capacity-building, cross-border governance, and anchoring AI in human rights and international law [67-74].


Capacity-building and education are being operationalised through assessments and a UNESCO MOOC.


UNESCO’s RAMS (Readiness Assessment Methodology) reports are being rolled out in over 80 countries to translate the global ethics recommendation into local practice [33-35]. A new massive open online course on AI ethics, co-developed with LG AI Research, will teach “ethics by design” and provide practical tools for fairness, transparency, safety, accountability and inclusion [36-44].


Large tech companies are embedding responsible-AI principles into internal structures and product lifecycles.


Google cites its corporate policy on the UN Guiding Principles, AI principles, and a layered process of model-level requirements, application-level guardrails, executive review and post-launch monitoring [130-138][149-162]. Microsoft describes its Office of Responsible AI, the Sensitive Use-Case program, and board-level oversight, drawing on OECD and UNESCO principles [168-184].


Investors and benchmarking organisations can drive accountability and incentivise good governance.


The World Benchmarking Alliance provides comparable, credible data on companies’ AI disclosures, finding that only ~10 % meet global governance expectations and none publish human-rights impact assessments [224-230]. It recommends that investors demand board-level AI risk responsibility, alignment of executive incentives, and robust AI-specific human-rights impact assessments [236-241].


Inclusion, language diversity, and civil-society engagement are critical yet under-addressed.


Examples include voluntary commitments on multilingual safety tools in India [190-199] and Microsoft’s partnership with NGOs to build community-led benchmarks that reflect local cultural contexts [276-285]. Google’s “Amplify Initiative” and trusted-tester programs illustrate how companies can involve external stakeholders to improve language inclusion and overall safety [300-310].


Overall purpose / goal


The session aims to bring together UN bodies, governments, industry leaders, civil-society representatives, and investors to share concrete practices, identify gaps, and forge collaborative, rights-based mechanisms that translate high-level AI ethics standards into actionable safeguards, capacity-building programmes, and market incentives, ultimately ensuring that AI development and deployment are trustworthy, inclusive, and beneficial for all societies.


Overall tone and its evolution


Opening (0-15 min): Formal, optimistic, and forward-looking, emphasizing shared responsibility and the promise of global standards.


Mid-session (15-35 min): Becomes more explanatory and technical, highlighting concrete tools (MOOC, assessments) and the practical challenges companies face.


Later (35-50 min): Shifts to a candid acknowledgment of obstacles (fragmented frameworks, capacity gaps, and the need for stronger incentives) while still maintaining a collaborative spirit.


Closing (50-53 min): Moves to a reflective, call-to-action tone, urging participants to translate “good intentions” into concrete actions and sustain the multi-stakeholder momentum.


Overall, the discussion maintains a constructive and solution-oriented tone, but it deepens in nuance as participants move from high-level framing to detailed examples of implementation hurdles and the necessity of broader engagement.


Speakers

Peggy Hicks – Director, Office of the United Nations High Commissioner for Human Rights (OHCHR); moderator; expertise in human rights, AI governance, and responsible business conduct [S18][S19].


Tim Curtis – Regional Director for UNESCO South Asia; co-chair of the UN AI Dialogue; expertise in AI policy, ethics, and multistakeholder collaboration [S2].


Ankit Bose – Representative of NASSCOM (National Association of Software and Service Companies), India; focuses on responsible AI, industry capacity building, and tech ecosystem coordination.


Rein Tammsaar – Ambassador, Permanent Representative of Estonia to the United Nations; co-facilitator and co-chair of the UN Global Dialogue on AI Governance; expertise in AI governance and diplomatic engagement.


Namit Agarwal – Representative, World Benchmarking Alliance (non-profit); works on AI accountability, benchmarking of tech companies, and aligning capital-market incentives with responsible AI.


Yuchil Kim – Vice President, LG AI Research; leads LG’s AI ethics, transparency, and responsible AI initiatives, including development of an AI ethics MOOC.


Parvati Adani – Partner, Sero Amarchan Mangaldas (law firm); expertise in AI law, ethics, and the intersection of technology with human rights [S12].


Alex Walden – Global Head of Human Rights, Google; leads Google’s responsible AI policies, human-rights impact assessments, and stakeholder engagement [S14].


Hector Duroir – Director, Responsible AI Public Policy, Microsoft; oversees Microsoft’s AI principles, internal governance frameworks, and external collaborations on AI safety and inclusion.


Additional speakers:


Ambassador Reintesma – Ambassador of Estonia (mentioned by Tim Curtis as co-chair of the UN AI Dialogue); diplomatic role in AI governance; likely a mis-transcription of Rein Tammsaar, listed above.


Praveen – Mentioned by Peggy Hicks in closing remarks; affiliation not specified in the transcript.


Dhani – Mentioned alongside Praveen; affiliation not specified in the transcript.


Allie – Referred to by Peggy Hicks near the end; likely a mis-identification of an existing speaker (e.g., Alex Walden) but listed as a distinct name in the transcript.


Full session report
Comprehensive analysis and detailed insights

Peggy Hicks opened the session by reminding participants that the challenges posed by artificial intelligence are “consequential … that have impacts in people’s lives on a day-to-day basis” and that any response must be grounded in “global standards, collaborative public-private solutions, and rights-based approaches” [1-2]. She stressed that responsible AI does not emerge spontaneously; it requires “deliberation, thought and engagement” to avoid pitfalls and to ensure that products “work for people, not only in advanced economies or for the dominant platforms” [3-8]. Hicks linked responsible governance to “clarity of rules for both companies and government” and called for “responsible and effective AI governance” that aligns with “global norms” [9-10]. She underlined the corporate duty to “respect human rights and address the risk to people stemming from their products” and presented human-rights due diligence as a pragmatic way to embed these obligations into operations [11-12]. Hicks also noted the complementary role of governments in creating a “level playing field” and rewarding firms that act responsibly, framing this as part of the BTEC project’s aim to “make this conversation happen” through convenings and the use of UN guidelines [13-16]. The BTEC project is hosted by the Office of the High Commissioner for Human Rights (OHCHR), and Tim Curtis later thanked this office for inviting the panel [13-16]. Peggy added that the UN Global Dialogue on AI Governance will be launched in July, with an inaugural convening in Geneva [13-16].


Tim Curtis, Director of UNESCO’s AI Ethics Programme, articulated UNESCO’s perspective. He argued that “trust is not something technology earns through ambition alone but … through design choices, safeguards and accountability” [32]. To operationalise the UNESCO Recommendation on the Ethics of AI, UNESCO has produced the Readiness Assessment Methodology (RAMS) reports, which have now been launched in “over 80 countries” and include a recent assessment for India [33-34]. Curtis announced the development of a joint UNESCO-LG AI Research massive open online course (MOOC) on “ethics by design”, to be delivered on Coursera, with the explicit goal of making AI-ethics learning “accessible to a wide global audience” and providing “practical … tools for day-to-day work” [37-44]. He positioned the MOOC as a bridge between high-level ethical recommendations and the concrete decisions developers face [37-44], and outlined four concrete learner benefits: recognising common risks early, asking better questions during development, documenting decisions responsibly, and assessing impact on different groups [37-44]. The MOOC focuses on ethics-by-design, embedding ethical questions from the start [37-44].


Rein Tammsaar, co-chair of the United Nations Global Dialogue on AI Governance, contextualised the discussion within the UN system. He explained that the Dialogue was “mandated by all member states through a General Assembly resolution” and is therefore a “member-states-driven process” belonging to every country [60-62]. Tammsaar noted that the Dialogue has two co-chairs – one from El Salvador and the other from Estonia [60-62]. He presented the four priorities identified by member states: (i) safe, secure and trustworthy AI; (ii) closing capacity gaps, especially for developing nations; (iii) interoperable, cross-border governance; and (iv) anchoring AI in human rights and international law [67-74]. He argued that standards “turn principles into action”, shaping risk management, accountability and human oversight, and that the Dialogue will seek “common ground” rather than imposing a single model [78-79].


Ankit Bose, Senior Vice-President, NASSCOM, described the association’s four-decade mission to “build capacity, develop open assets and guide the ecosystem” from government to startups and SMEs [98-102]. He traced NASSCOM’s responsible-AI focus to a 2021 launch that identified a gap between rapid AI development and the missing “human element” of trust [95-98]. Bose highlighted that startups often place governance on a “second, or probably the side, burner” because they must simultaneously build a product, a team and secure funding, a situation he warned is a “complete no-no” [120-124]. When asked how NASSCOM differentiates its engagement across company sizes, he explained that “big tech … are playing at the front foot”, services firms “follow their contracts”, mid-tier firms “try to understand how they grow … while building governance”, and startups need “much bigger support” because they struggle to prioritise governance amid day-to-day pressures [110-124].


Alex Walden, Senior Director of Responsible AI at Google, presented Google’s internal governance framework. He began by linking corporate values (freedom of expression, privacy and universal benefit) to the company’s AI responsibilities [130-131]. Google’s policy “commits to respect the UN Guiding Principles on Business and Human Rights” and is reinforced by its own AI Principles, which translate high-level values into operational guidance for teams across Google Cloud, YouTube and Search [135-137]. Walden listed the standards that inform Google’s work: the UN Guiding Principles, OECD AI Principles, UNESCO recommendations, the BTEC project and other peer-industry initiatives [138-141]. He described a layered process: “model-level requirements” that mandate data validation and testing; “application-level guardrails” that add further evaluations and mitigations; “executive review” where senior leaders assess risks before launch; and “post-launch monitoring” to capture novel or residual risks [154-162]. He framed this as a “multilayered approach” that embeds responsibility throughout the product lifecycle [149-162]. When pressed about the pressures of championing human-rights considerations within Google, Walden noted that market incentives already push the company to deliver “safe and trusted” products, given that Google’s consumer-facing services such as Search and Gmail shape public perception [149-152]. He explained that the internal processes (model requirements, application guardrails, executive sign-off and continuous monitoring) are the mechanisms that turn those market pressures into concrete safeguards [153-161].


Hector Duroir, Director of Responsible AI, Microsoft, outlined Microsoft’s evolution in responsible AI. He recounted that the Office of Responsible AI was created in 2019, building on “AI principles” established in 2018 around privacy, reliability, inclusion, fairness, safety and security [175-176][170-174]. Microsoft’s Sensitive Use-Case programme “triages … high-risk applications” and escalates them to the ITER ethics committee, which includes board-level representation [179-182]. The programme draws on the OECD AI Principles and UNESCO’s recommendation [184-185]. Duroir also highlighted recent Indian voluntary commitments that “encourage companies to forge multilingual capabilities” and to evaluate safety risks beyond English-centric norms, linking this to Microsoft’s principle of inclusion [188-199]. He described the Samishka project in India, a collaboration with NGOs that creates “community-led benchmarks” to develop safety tools grounded in local cultural contexts, warning that simply translating English tools would lose essential nuance [276-285].


The importance of linguistic and cultural inclusion was reinforced by several speakers. Alex added that Google’s “Amplify Initiative”, an open-source app, allows members of the public to fine-tune language models, thereby promoting language inclusion [308-310]. Parvati Adani later echoed this sentiment, arguing that any framework that ignores language, gender and cultural contexts is “incomplete by design” [336-338].


Yuchil Kim, Head of AI Ethics, LG, spoke about LG’s contribution to the UNESCO MOOC and its broader responsible-AI activities. He positioned the MOOC as a “bridge in the gap” for practitioners who struggle to apply ethical concepts in daily work, noting that LG also provides an “AI-powered data-compliance system” and publishes an “annual accountability report on AI” (now in its third edition) to share best practices and challenges [209-213]. Kim’s remarks underscored the need for transparent, inclusive reporting to support the global learning effort.


Namit Agarwal, Executive Director, World Benchmarking Alliance (WBA), presented the results of the latest assessment of 2,000 tech firms. He reported that “close to 40 % of the companies have disclosures on AI principles, but just above 10 % meet the expectations on the governance aspect” and that “none of the 200 companies … disclose their reports on human-rights impact assessment” [227-228]. From this evidence, the WBA calls for three investor-driven actions: (i) board-level AI risk responsibility and alignment of executive incentives; (ii) product-level translation of ethical principles, including identification of high-risk use cases; and (iii) robust, AI-specific human-rights impact assessments with meaningful public summaries [236-241]. He framed investors as “catalytic” actors who can make “consequences for weak governance … consequential for companies to move in that direction” [231-232].


A tension emerged between Ankit’s concern about “framework fatigue” and Tim’s confidence that UNESCO’s RAMS assessments and the developing MOOC provide actionable guidance [258-266][33-44].


Across the panel, there was a strong consensus on the necessity of multi-stakeholder engagement. Peggy called for “continuous, programme-level engagement” with civil society, academia and affected communities [144-149]. Alex described Google’s “programme-level approach” that includes trusted-tester programmes, the Impact Lab’s community research and the open-source Amplify Initiative [302-307][308-310]. Hector highlighted Microsoft’s inclusion of NGOs and academia in the Samishka benchmarks [276-285]. Yuchil reinforced this by noting LG’s practice of publishing annual reports to share both successes and struggles, invoking the African proverb “If you want to go fast, go alone; if you want to go far, go together” [290-296]. These remarks illustrate a shared belief that siloed internal structures must give way to collaborative, cross-functional processes.


Parvati Adani delivered the closing reflections, using a provocative experiment in which she asked an AI tool whether it had “ethical limits”. The tool replied “I don’t know”, prompting her to note that AI “has no continuous thread of existence” and “cannot bear consequences” [327-332]. She argued that because AI lacks conscience, “human rights are not optional” and that frameworks must explicitly address language, gender and cultural inclusion or remain “incomplete by design” [334-338]. Adani warned that voluntary commitments, while “fantastic”, must be turned into concrete actions to avoid “good intentions and good ideas” without impact [340-345].


Finally, Peggy concluded by reiterating the session’s key messages. She acknowledged the “complex … dynamics within companies and externally and then globally” and stressed that all participants have a responsibility to “engage on these … each of us have different roles” [350-352]. She highlighted the need to move beyond “good practices” that are not universally applied, to create incentives that reward responsible behaviour, and to continue the dialogue so that AI innovation can be trusted and uphold human dignity [353-363]. The panel closed with a collective pledge to translate standards, capacity-building programmes and market incentives into concrete, accountable actions that benefit all societies [350-363].


Session transcript
Complete transcript of the session
Peggy Hicks

These are consequential challenges that have impacts in people’s lives on a day-to-day basis. And our session is going to address how global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact. And we know that these things don’t just happen on their own. It takes deliberation. It takes thought. It takes engagement to make sure that the products and approaches that we’re using in the AI field avoid some of the pitfalls that may be associated with them. And the companies are going to share some of the good practices that they’re engaging in about how that works in the real world. And we know if they don’t engage in that way, that the risks are there and very much present.

And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for. Responsible and effective AI governance, clarity of rules for both companies and government, and alignment around the global norms will help us to get to that point. Companies, of course, have a responsibility to respect human rights and address the risk to people stemming from their products. And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations. But, of course, governments are the ones that also have a responsibility here, too, to create a level playing field, and we talk a lot about that.

We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. Our BTEC project at OHCHR is aimed at how do we make this conversation happen. So through convenings like this one, through engaging with companies, pulling out their good practices and letting all of you hear about them and encouraging others to do the same is what that project is really about. And we are really looking at and working with, of course, how to use tools like the UN Guidelines and UNESCO’s AI recommendations on ethics, and figuring out how we weave those into the decisions and work that’s being done now.

And as I said, bringing this conversation to this summit, where there is truly a global and multi-stakeholder effort happening to really look at AI innovation and deployment, has been incredibly important. So without further ado on that front, I want to hand over to my colleague and co-sponsor here, Tim, over to you.

Tim Curtis

Thanks, Peggy. And good morning, everyone. Ambassador Reintesma from Estonia and co-chair of the AI Dialogue that the United Nations is holding. Of course, Peggy and dear panelists, it’s really wonderful to be here with you today to be part of this conversation on responsible practices and industry standards. And as we all know now where AI is moving, you know, from something we discuss in theory to really something that is shaping the decisions in real time and real institutions, and of course for real people. I’d like to thank particularly the Office of the High Commissioner for inviting us to join in on this and for working with us on organising this event. It’s been a pleasure.

At UNESCO we often return to a simple idea: that trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability. And that’s why the recommendation on the ethics of AI we believe is so important, because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, that promote fairness and support inclusion. So we’ve been translating this global agreement and framework into local realities through what we call the RAMS, the Readiness Assessment Methodology Reports, which we’ve now launched in over 80 countries, and just two days ago we did India’s readiness assessment report.

And these assessments provide a kind of clear-eyed look at how regional landscapes can evolve, inviting us to move beyond theory and towards this responsible human-centred deployment of AI we hear about. And so by grounding innovation in these evidence-based diagnostics, we hope to ensure that progress remains aligned with those shared values. But, of course, a recommendation only matters if it can be applied by people who are actually creating and using AI. And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or a MOOC, as more commonly known, on the ethics of artificial intelligence.

And the course will be delivered on Coursera with a very clear goal: to make AI ethics learning accessible to a wide global audience, and to make it practical… for day-to-day work. And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design. And so in simple terms, that we don’t wait until something goes wrong to ask these ethical questions. We should build these questions into the process from the beginning. And the course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made, rather than after systems have already been deployed. The course, of course, is really going to be focusing on practical tools, so that we can offer clear ways of thinking and working that can be used in everyday settings.

So it’ll help learners, for example, recognise common risks early, ask better questions during development, document the decisions made responsibly and think through the impact of AI systems on different groups of people. We’re moving beyond a one-size-fits-all approach, and we’ve done this by collaborating with experts from over 10 countries and 5 continents, with some of the leading minds from the University of Oxford and the Alan Turing Institute. And this global group, this global coalition, is really vital because AI of course doesn’t operate in a vacuum. It’s shaped by languages, it’s shaped by cultural norms and institutional capacities of where it is developed and deployed. So by integrating these diverse perspectives we’re trying to move from the theory again to the live reality. So ultimately this MOOC is a capacity-building effort with a simple purpose: to help more people around the world build and use AI in ways that are responsible, inclusive and worthy of public trust. We look forward of course to this continued collaboration with governments, with industry, with academia and civil society as we take it forward, and we hope many of you will engage with the course when it launches, not only as learners, but also as partners in building a stronger culture of ethical innovation across the world.

Thank you very much.

Peggy Hicks

Thanks, Tim Curtis, UNESCO. We’re all looking forward to it. Now we have anticipation. We’re very fortunate to have an addition to our program today with Ambassador Tammsaar, the permanent representative of Estonia, who’s one of the co-facilitators for the Global Dialogue on AI that will be launched in July, and a big responsibility. And he’s here to tell us a little bit about where it’s heading and how you all can contribute. Please, Ambassador.

Rein Tammsaar

Thank you very much. Good morning. I don’t know, is it morning? Yeah, maybe. So after three days here in India, I think that I’ve lost track of time. Is it morning or evening? But thank you, UNESCO and the Office of the High Commissioner for Human Rights for convening this really important discussion, and of course to all our hosts here in India. And I also thank partners who contributed to this work. Today I’ll speak on behalf of the two co-chairs of the United Nations Global Dialogue on AI Governance, and the two co-chairs are from El Salvador and Estonia. The first Global Dialogue on AI Governance was mandated by all member states through a General Assembly resolution adopted in August 2025.

So this is a member-states-driven process. It belongs to every country, to all member states. And its task is very practical, while its scope is multilateral. So this… The aim is, you know, to come together. It is a platform where governments and stakeholders exchange best practices and experiences, and this, we believe, can strengthen international cooperation on AI governance and ensure human-centric AI supports sustainable development and reduces, indeed, digital divides that are already there. So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities. So first, they want safe, secure, and trustworthy AI systems, and trust here, of course, is an absolute key word.

Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to participate fully in the AI economy, and inclusivity and equal access are essential here. Third, they want governance approaches that can work across borders and be practical. So fragmentation raises the cost and weakens trust. So interoperability is absolutely key. And fourth, and that is, I think, quite topical here, they want AI anchored in human rights and international law. And this includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability. Now, we know human rights are not optional. They are part of a mandate agreed by member states. And today’s focus on responsible practices and industry standards responds directly to these priorities.

And standards turn principles into action. They shape risk management, they clarify accountability, they guide human oversight, and they give companies and regulators tools they can apply in real systems. So let me say that the global dialogue will not, and I guess it cannot, impose one single model. We will listen, we will identify common ground, we will build on existing initiatives. The ethics of AI was mentioned here, and it’s of course one of them. We will avoid, or try to avoid, duplication, and we will focus on practical value. So I encourage you to bring your experience into this process. Share what works, share what doesn’t work, help us identify approaches that can scale across regions and levels of capacity. And in the best case, if we succeed, and failure is not an option, safety and trust will be visible in how systems are designed, deployed and governed. They will be reflected in real safeguards and in benefits that reach more people, and this is very important for us. So I thank you and wish you a productive day and practical exchanges that move our common work forward.

And with this, I give it over to the real experts and panel. Thank you very much.

Peggy Hicks

Thank you, Ambassador. Wonderful to have you with us, and we're all looking forward to having all of you join us in Geneva in July. So with that introduction by the three of us, we're really, as the Ambassador said, going to turn it over to those who can really inform us about how this work is happening and, I hope, inspire us to give support, emphasis, and amplification to the work that you're doing, and bring more into the fold around responsible business conduct. With that, I'd really like to start with you, Ankit Bose, from NASSCOM. We had a great conversation yesterday. I'd love for you to inform our audience; NASSCOM represents the leading Indian tech companies, and we want to hear more about your work and what you're doing to encourage companies and help them to ensure a responsible work environment.

Thank you.

Ankit Bose

Thank you so much for having me here, and it's my pleasure to address the audience. So NASSCOM has been around for more than four decades, right? We have been helping the tech industry in the country shape and change the whole agenda for the country. Specifically on responsible AI, the mission for NASSCOM started in 2021. We started from a gap: we were seeing a lot of AI getting developed, but we found there was a missing element, and that was the responsibility, the trust, the human element. That is how the mission started. From that point in time, our core objective has been to develop open assets, right?

Build capacity, build adoption, right? And help all the different components in the ecosystem, right from the government to the startups and the SMEs, all of them. So we have been trying to help them go up the ladder, and really become aware not only of the gloomy side of AI but also of the bright side: if they adopt responsible AI governance practices right at the early stage, they can have a big upside. I think that's what we have been doing.

Peggy Hicks

Can I ask, Ankit, how does this work? You mentioned that full range of companies that are involved, and one of the topics we spoke a bit about yesterday is the difficulties sometimes when you have big companies, we have some of them represented here, but also startups and small and medium enterprises. How do you differentiate? How do you make sure that we’re engaging across that very differentiated group of industry?

Ankit Bose

Yeah, so if I take it, there are the big techs, right? Then there are the services companies. Then there are the middle-sized companies, the small companies, and the startups. I think all five of them have different sorts of engagement, right? The big techs, I think, are playing on the front foot. The services companies have to follow their contracts, right? The bigger services companies. The medium-tier companies are really trying to understand how to grow their AI base and, at the same time, build their services or products using the right governance principles. But the bigger support is needed by the smaller startups, right? Because they are really, really fighting day to day.

And believe me, a startup founder has to first build a business, a tech, a team, right? And also get funding, and then somehow focus on a lot of things around governance. In that whole journey, what we have seen is that they put governance second, or probably on the back burner, which is something we see as a complete no-no. If you do that when you're building a product, you might miss things when you're scaling. I think that's what we are addressing.

Peggy Hicks

Great. Thanks very much. I think we're going to turn to the scale side of it now; Alex, you're next in line. So, Alex Walden, you've been working on these issues within Google, and one of the insights I've learned from you over the time we've known each other is how complex it is to bring some of these issues of responsible business conduct and human rights to product teams and those on the technologist side. Give us the benefit of your wisdom about how that works and how we can do it better.

Alex Walden

Thanks for the question, and I love that you said that, because I do see a very important part of my role as making sure that the stakeholders we work with understand how things are working within companies, because that helps us be better, and helps you be better advocates for helping us improve. But to your question, because I know I need to be fast: I think where it really starts for us is the values perspective. Obviously, we're a company founded on values around freedom of expression, privacy, and bringing the benefit of our technology to everyone, and so that's where it begins. But ultimately, it's the governance inside the company that permeates throughout the 180,000 people who work at Google to ensure that we are being responsible in the way we're developing AI.

So as a baseline for us, responsibility, and thinking about what responsibility means, has to start with human rights, and then we can build from there. We have a corporate policy that commits us to respect the UN Guiding Principles on Business and Human Rights. And we've built on that with things like our AI Principles, which reinforce a more operational way in which we can manifest those values in all of the teams working to develop the various models, or applications of the models in, say, Google Cloud or YouTube or Search. Just to hone in a little on the types of standards we're using, because I think that's important given how much work is being done in our ecosystem.

We use the UN Guiding Principles. We use the work happening at the OECD, the work at UNESCO, and engagement with our peers in industry through the B-Tech project and the Global Network Initiative, and this is just a few. All of the guidance that comes out of those places, and the dialogue that happens there, helps inform how things work inside the company. And then just one layer down, and then I'll stop: I think having programs and processes like training and dedicated teams is ultimately how you operationalize this through getting a product to market. I can say more, but those are the big-picture structures for what's required for a company to do this at scale.

Peggy Hicks

So, you know, I'm not going to let you off the hook quite that easily. We know this isn't always easy, right? There are obstacles to really convincing people it's worth the time. I've been in the room where concern about safety is dismissed as hand-wringing: no more hand-wringing about safety, we need to just move forward. And I'm sure there are pressures that you face as the lead for human rights within the company trying to get your message heard. Tell me a bit about how you've been able to surmount some of those challenges, from that different perspective on whether these are hurdles or supports for the company to do its mission more effectively.

Alex Walden

Well, I mean, I think in general corporations are incentivized to put products on the market that are safe and that are trusted by consumers. People know Google best through Google Search or Gmail, the variety of consumer-facing ways they engage with our products. So we do have an inherent market and business reason to put out products that people trust and that deliver good outcomes. And we have to have processes inside that make that real. So what we do is we have model requirements at the most granular level. Before any product goes to market, there are model requirements, and those teams are focused on ensuring they're validating the data, doing testing, and doing evaluations.

And that's at the model level. Then at the application layer, we have requirements for teams to be, again, doing testing and additional evaluations, setting additional guardrails, and focusing on what mitigations are going to be put in place for things like Gemini before it gets launched. And then we have executives review these things: before anything goes to market, leadership needs to understand what the risks are and how we're mitigating them, and have a plan in place to address that. So that is an important part of the process for us. And last, we have post-launch monitoring, because obviously we can do all the testing in the world, but once you've launched a product, there may be novel, new, or residual risks that arise. And so we have to have a process for continuing to monitor that, understanding it, getting feedback, and improving.

Peggy Hicks

Great. That's super helpful, Alex, to understand that multilayered approach that needs to happen within companies, including, I think, that executive level you mentioned. I mean, the signals from the top will actually inspire all of those other levels to do what we're hoping they'll do. And we have another example with us of some of these practices. I want to turn to Hector Duroir, who is the director of responsible AI public policy at Microsoft. We want to hear more about what you're doing to embed responsible policy practices within Microsoft's approach.

Hector Duroir

Thank you very much, Peggy, and thanks for having Microsoft here. So, yes, I want to start with the inception of our responsible AI approach, which was in 2018. At that stage, you didn't have codes, directives, regulations, or frameworks guiding our approach; we were nearly starting from a blank page. And we didn't talk about foundation models or frontier models at that stage; it was all about specific AI systems and applications, such as facial recognition, for instance, which was very popular. So we forged our AI principles around priorities such as privacy, reliability, inclusion, fairness, safety, and security. And the whole challenge was then to translate these high-level principles into practice. It's really on this basis that we created the Office of Responsible AI in 2019, around these principles, which then became our RAI Standard, guiding all our actions across our different programs.

One of the programs I want to reference here is our Sensitive Use Case program. It's a team within the Office of Responsible AI that is in charge of triage, basically challenging sensitive use cases coming from our different markets on AI systems and models that could violate the principles I was referencing. This team analyzes these use cases and, when necessary, brings them to our Aether committee, which is our AI ethics committee, and it involves the Microsoft board, both at the CTO level and the president level. I think board inclusion is very important in this kind of internal risk-management framework. And this work has been informed over the past years by many interesting developments.

The OECD AI Principles, obviously, but also the UNESCO Recommendation on the Ethics of AI. And I think all these principled approaches, which evolve and are refined and nuanced as AI capabilities advance, are so important and are very useful signals for us in refining our own AI governance program within Microsoft.

Peggy Hicks

Hector, you've talked a little bit about how you look at it from an internal perspective. But we wanted to hear a bit about how you look externally: what are the drivers behind how you engage across the sector and with the government side as well?

Hector Duroir

Yeah, and I think we always navigate this very interesting interplay between best practices, international norms, and regulatory standards. A very good example here is the line of voluntary commitments that have been signed across the AI summits. If you look at Bletchley Park in the UK, or the South Korea summit that happened afterwards, it really helped us, as Alex was referencing, to ground our model-testing approach, especially against public safety and national security risks. So when we talk about cybersecurity, for instance, or loss of control, or CBRN risks, that grounded a very solid testing approach, with concrete operational triggers and concrete high-risk domains that we're monitoring at the model level.

So that was one. The OECD HAIP reporting framework, which came out of the Hiroshima AI Process, is another very good tool that I was involved in and want to reference here. It was launched alongside the Paris AI Summit, and it's actually a very good way to understand how risk-management transparency works in practice, and how real-world deployment and transparency experience can guide upstream development. It's this kind of feedback loop that it creates that is very interesting. And because we're in Delhi, just to reference the voluntary commitments that were signed yesterday: I think that's another very good and positive approach the Indian government has been taking, especially one of the commitments, which encourages companies to build multilingual capabilities.

So basically, build better evaluations against safety risks, not only in English norms but beyond English norms. And I think that speaks to our principle of inclusion. That's so important, and I'm very happy they initiated this work.

Peggy Hicks

I have to say, one of the contrasts I've been making when I look at what's been talked about here in Delhi, as opposed to prior summits, is that issue of inclusion. And the language issue, I think, is so underrepresented in some of the conversations we have, so it's wonderful that you've given that a shout-out. We're very fortunate, Yuchil, to have you with us as well: Yuchil Kim, who is a vice president at LG AI Research. We'd really like to hear more about how you're engaging with these global technical and policy standards. We talked about the UN Guiding Principles on Business and Human Rights, the UNESCO Recommendation on AI Ethics, and, of course, the MOOC that's being worked on.

So give us a sense of how these frameworks are being engaged with by LG.

Yuchil Kim

So the essence of the MOOC is for practitioners. Practitioners usually struggle with the same question: how do I actually apply this in my day-to-day work? So we are focusing on bridging that gap. We provide the standard risks, a lot of the risks that Timothy mentioned, and we also contribute our own experience. I previously mentioned our process, and we have also built our own AI-powered data compliance system. And, as I will mention soon, we publish an annual report on our AI ethics activities. So I hope the MOOC can be a good practice for everyone. It will launch in this half of the year. The last thing I want to talk about is transparency. We have a lot of activities around responsible AI and inclusive AI, and we publish an annual accountability report on AI. Yesterday we released the third edition.

So here is some of the track record of that. I will hand copies out after our session, so please refer to my documents.

Peggy Hicks

Wonderful. I think it's super interesting to understand both how you've been looking at that learning process within the company, and also how the more global approach of working with UNESCO is going to be very helpful. I think it's one of those areas where we all know so much more needs to happen. But we've heard the company perspective here, and we're very fortunate to have with us, from the World Benchmarking Alliance, Namit Agarwal. And Namit, one of the things we've talked about is how we incentivize the race to the top amongst all of the actors in this space. And you're going to, I hope, give us some insights, based on the work the World Benchmarking Alliance is doing, about how capital and investment can be used to make sure that innovation is being approached in a responsible way.

Over to you, Namit.

Namit Agarwal

Thanks for having me here. I'm not representing investors, but we do work with several stakeholders, including investors, civil society, governments, and companies. We are a nonprofit, and we try to strengthen the accountability of the world's most influential companies so that their impact on people and planet can be sustainable. We also assess the world's most influential tech companies on whether they are advancing a trustworthy, rights-respecting, and inclusive digital future, using standards such as the UN Guiding Principles, but also others that were mentioned by my fellow panelists here. Our role is to provide comparable, credible, and standardized data that our stakeholders can use, because it's an ecosystem approach: how can they work together in doing that?

So capital can definitely incentivize innovation and responsibility, but capital alone cannot do it. We published our latest assessments of 2,000 companies at Davos last month, and particularly on the tech side, what we found is that close to 40% of the companies have disclosures on AI principles, but just above 10% meet global expectations on the governance aspect of it, and none of the 200 companies that we assess disclose their reports on human rights impact assessment. I think that clearly shows that while there is a lot of intent, and some work is happening, governance and accountability are not really there, so a lot of work needs to happen. And we believe responsible innovation requires incentives for long-term risk management and clear expectations that are tied to capital.

It also requires consequences for weak governance, because it has to be consequential for companies to move in that direction.

And I think that is where investors have a very catalytic role to play. We convene a coalition of investors and civil society organizations.

Peggy Hicks

I mean, I think it's so interesting that we work in a sector that is incredibly based on data, and yet we don't necessarily bring data into this conversation in the ways that we need to. There's that idea of incentivizing the right practices and leverage within companies; but also, too many conversations focus on the tech industry as a whole and group everybody together as if they're all engaging in the same way. So the work you're doing really helps us understand those nuances. Could you go a bit deeper into some of the examples and concrete suggestions coming out of your work as to how to push that discussion forward?

Namit Agarwal

Absolutely. So I think the first thing is engagement and dialogue, and that is a very important way. We have been fortunate to have good engagement with both Google and Microsoft on this panel. But again, it's important to build on engagement, because it's a continuous process. It's important for investors to engage with some of the leaders, but also to engage with companies who are fence-sitters, to bring them along faster; the laggards will eventually catch up and come on board. But for investors, for capital and finance, to incentivize responsible innovation and responsible AI, there are three things that we believe investors should definitely do. First is AI governance and board oversight: investors should ask whether there is clear board-level responsibility for AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain. Second is implementation at the product and business-model level, and we heard some examples just now: investors need to move beyond policy statements and ask companies how ethical principles are translated into product-level strategy, how high-risk use cases are identified, and whether there are internal mechanisms and controls to identify harms as they emerge.

And third is robust human rights impact assessments: asking whether companies conduct AI-specific impact assessments, whether they are publishing meaningful summaries, and whether mitigation measures are integrated into product cycles. I think this is an area where we have seen a lot of gaps.

Peggy Hicks

Great. Thanks, Namit. I wonder if we could actually take that one step further and get some input from the other members of the panel on what that looks like in practice. Because, of course, this is a panel focused on the company perspective, and I think we have some of our real partners here on the civil society side. As much as they understand that that conversation needs to happen, I think they sometimes find it difficult to make sure that the way those risks are assessed really brings in the voices and experiences of people, particularly people in the different contexts and environments in which companies' products are being rolled out.

So those issues of stakeholder engagement, access, dialogue with the civil society side, it would be great to hear a little bit more about some of the lessons that you’ve all learned there. And I see you shaking your head. Please tell us from the NASSCOM perspective how you look at it.

Ankit Bose

Well, I think from an enterprise lens, when they are trying to implement responsible AI or trustworthy AI, the biggest issue is that there are different groups internally: the tech group, the business group, the legal and risk group, the finance group. And all of them are working in silos, is what we feel, because the business wants the best for the business, the tech wants to put in the best technology, the risk group is very conservative, and finance always has an upper limit on what they want to spend. So that's the issue. I think what helps is if all of them build a collaboration that can be taken use case by use case.

I mean, the high-impact use cases can have more investment, more focus, versus the low-risk ones, right? I think that's the first thing. The second thing, from what we at NASSCOM are seeing, is that there are a lot of frameworks getting developed. Every country, every place you go, there's a new framework. But the move from framework-heavy, concept-heavy work to action is not happening. That's a big gap. If a technologist is trying to implement responsible governance, if a developer is trying to implement it, he will be lost in the frameworks. He doesn't know what's actionable, what he should do. So I think that's one big need.

I think that's what we are also driving. We are trying to drive a multi-organization-led approach, where we have organizations of all different sizes, and where we come together and start discussing, collaborating, and implementing. I think that's the second nugget. So those are two points. I know time is up.

Peggy Hicks

No, that's great. I mean, I think it shows that that collaborative effort is going to be super important, rather than a siloed approach, for so many practical reasons as well: companies can only respond to so many different frameworks, and what they need is the simple guidance and support to actually implement at this stage. Hector, do you want to share quick comments from the company side about how you're facing those challenges?

Hector Duroir

Yeah, two very quick examples of how we involve civil society and academia in this process. Our work really sits at the intersection of policy, research, and engineering groups. To inform product development with our responsible AI principles, we regularly publish internal policies, and it's an iterative process with our research teams and our product teams. As part of this process, we actually include academics who have specific domain expertise, or think tanks and civil society organizations which have been thinking very deeply about the deployment of one AI system or one AI model in certain contexts. And so that really informs the products we build from the inception. The second example that was raised, and the big governance challenge that we face, is the importance of refining AI evaluations.

That's the constant thing. And in India, for instance, we've been working with some NGOs on a project named Samishka to build community-led benchmarks, which is basically a safety tool that we then include in the system construction, to really get datasets that are grounded in a community, with specific cultural aspects and specific contextual aspects. Because if you just translate safety tools from English to another language, you lose all the context for which the safety tool was built. So that's another example of an area where we need more cooperation between civil society, governments, and companies: how do we build these safety tools beyond English norms, such as in India?

Peggy Hicks

That's great, and it takes work to do that. And the more we can spread it, you've done some of the work, you know how to do some of it, and diffuse it amongst other companies that could learn from it. That's part of what we're trying to do with B-Tech, but I think there's a lot more to be done. Yuchil, do you want to come in?

Yuchil Kim

Yes, I agree with his comment. On safety, we should work together. That's the reason we make our annual report: sharing our best practices and also sharing our struggles, what we struggled with, because we think that is a very important thing. As my colleague mentioned, there's an African proverb that says: if you want to go fast, go alone; if you want to go far, go together. Building a trustworthy and safe ecosystem is not a sprint. It's a long journey, so we can go together.

Peggy Hicks

It's a long journey with a lot of sprints happening day to day, as far as I can tell, some of them here at the summit. But over to you, Alex.

Alex Walden

So much sprinting. I think maybe just to pick up specifically on the stakeholder engagement piece. A few things. One, I think it's important for companies to have a programmatic approach to stakeholder engagement: we need ways in which we're regularly engaging with stakeholders in general, not just on a specific product question. So first a programmatic approach, and then second, something that is more ad hoc: when we need to consult specifically on a product, we need a process and a way to do that. The other thing is we have programs internally like trusted-tester programs, where we work with third-party organizations to make sure they have early, pre-launch access to models or to a product in order to test it, so that we can identify potential risks or errors ahead of time and address them before we launch.

And then last, just to highlight something we do that is similar to others: our research team called Impact Lab, which is part of the overall human rights programmatic work at the company, engages directly with communities in doing research to inform how we are improving our products and what we're developing. So that work is also happening through the research team specifically. And they recently launched something called the Amplify Initiative, which is an open-source app, specifically on language inclusion, that allows members of the public and communities to engage in the fine-tuning work around our language models. There is a wealth of information and expertise out there that we can all benefit from, and it's open source, so we can also share it with others in industry.

Peggy Hicks

That's great to hear, and I'm sure more needs to be done on that front, but the amplification effect is so crucial. Look, we could probably go on talking all day, but I see the clock is ticking down. Fortunately, rather than us trying to draw the conclusions from this, we've welcomed in another speaker to give us some concluding remarks and pull some of these pieces together. I'm very happy to invite Parvati Adani from Sero Amarchan Mangaldas to help us think through some of these issues. Please.

Parvati Adani

Thank you for that. I think it's partly easy and partly tough, because there was a lot to take in, but I think we can take a lot back from this conversation. Firstly, thank you. You've held the conversation beautifully. And thank you to UNESCO, the United Nations, NASSCOM, and everybody in this room who brought their knowledge and their conscience to this conversation. Actually, I just want to talk a little bit about a conversation with a machine. As we were thinking about this topic and engaging on this issue, I wanted to share something that I feel resonates with a lot of what you've talked about. In preparation, I decided to ask the tools we're talking about here a question that we avoid asking ourselves.

Do you, and I'm talking to the tool, have ethical limits? Do you understand the difference between what you can do and what you should do? And I'm going to quote verbatim: at a conscious level, the answer is, I don't know, and neither does anybody else. The gap is a philosophical, uncomfortable position; think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me. I don't have any consequences to bear. Now, what came back, though unexpectedly thoughtful, showed us something about restraint, values, and what it appears to have internalized. It acknowledged the difference between instruction and conscience, a lot of what we've talked about today.

And so, when we talk about this, we said human rights are not optional. We cannot ignore the impact on people and planet. We have to create incentives for good governance. So when a tool cannot understand this for itself, I think we have to do that job. What we have chosen in India, and the fact that we are having this conversation in this location, is not ceremonial; it is very deliberate. We have chosen innovation over restraint, and we have to think about whether that is the right choice: we allow innovation a safe place without feeling the weight of regulation. And I think we have a lot to learn from all of you who have been doing this for so long: the privacy, the safety, the impact on children and vulnerable groups. The question is whether the people we are talking about are going to be the subjects of this transformation, or just its audience, or just its object. An AI system that cannot understand the language of a Hindi-speaking woman asking legal questions is serving a narrow slice of what it calls a universal solution.

So any framework for safe and trusted AI that does not address informality, language, and gender is not incomplete by accident. It’s incomplete by design. I think the idea of an interoperable, flexible system is a forward-looking and an inclusive one. A lot of what you mentioned, Alex, about governance inside the company is wonderful. And I think the voluntary commitments that have been reflected in this summit are also fantastic. So now we come to the harder work. The ambition is real. The infrastructure exists. But we must ensure that we don’t leave with just good intentions and good ideas, but with action. Thank you.

Peggy Hicks

Thank you, Parvati Adani. It’s wonderful to hear those perspectives. We’re coming to the close of this session. Just a few parting words to all of you. I think we’ve done enough in this short conversation to really give a sense of how complex some of these issues are: the dynamics within companies, externally, and then globally across different geographies, and the challenges that are faced. But the reality is that all of us have a responsibility to engage on these issues, and we each have different roles. We’ve heard a bit about what some of the companies are doing. We’ve heard a little bit about how we can challenge them and incentivize the actions that they take in this space.

There are good practices, but they’re not universally applied, and they’re not available to some companies. There are companies that may want to engage in this, and we can help them to do it. NASSCOM and I have been discussing how we can simplify things and bring more companies into the fold of this conversation. And, of course, we’re here in an environment where governments are asking what they need to do to create responsible business practices and incentivize them as well. So I hope everybody walks out of the room thinking: what can I do to continue this conversation? How can I differentiate between companies that are thinking about these issues in a way that will deliver for myself, for my children, for my future, the things that we want to see?

AI innovation will work if there’s trust, and if the companies delivering it actually invest in products that embody those values, that will inform us and uphold human dignity going forward. So thank you all so much for joining us. Thank you for fitting this into your schedule today, and enjoy the rest of the summit. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Responsible AI must work for people, not only in advanced economies or for dominant platforms”

The knowledge base notes that practical safeguards should ensure AI works for people beyond advanced economies and highlights human-rights due diligence as a key process [S3] and [S21].

Confirmed (high)

“Corporations have a duty to respect human rights and address risks from their AI products; human‑rights due diligence is a pragmatic way to embed these obligations”

Peggy Hicks’ emphasis on corporate human-rights duties and due diligence is corroborated by multiple sources that describe integrating human-rights due diligence into standards and operations [S21] and [S119] and [S120].

Confirmed (high)

“Effective AI governance requires clear rules for companies and governments and should operate across development, validation and deployment stages rather than as an after‑thought”

The need for governance mechanisms that span the whole AI lifecycle is explicitly stated in the knowledge base [S73].

Confirmed (high)

“Governments should create a level playing field and reward firms that act responsibly”

The importance of a level playing field for fair competition and responsible corporate behaviour is highlighted in the knowledge base [S122].

Correction (medium)

“The UN Global Dialogue on AI Governance will be launched in July with an inaugural convening in Geneva”

The knowledge base indicates the Dialogue will be launched later in the year but does not specify July or Geneva as the inaugural venue [S128]; the reported timing is not confirmed.

Confirmed (high)

“Rein Tammsaar is co‑chair of the United Nations Global Dialogue on AI Governance”

The opening address of the AI Governance Dialogue lists the co-chairs, confirming Rein Tammsaar’s role [S77].

Additional Context (medium)

“AI challenges are consequential and require global standards, collaborative public‑private solutions and rights‑based approaches”

Other speakers in the knowledge base stress the need for inclusive, rights-based AI systems and proactive risk management, providing additional nuance to this claim [S115] and [S3].

External Sources (130)
S1
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So it gives me great pleasure to just present today’s panellists and moderator. We’ve had Dr. Tawfiq Jilasi, who’s Assis…
S2
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfiq Jilasi- Assistant Director General for Communication and Information (mentioned by Tim Curtis in introductio…
S3
AI That Empowers Safety Growth and Social Inclusion in Action — – Ankit Bose- Tim Curtis- Rein Tammsaar
S4
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S7
AI That Empowers Safety Growth and Social Inclusion in Action — – Peggy Hicks- Alex Walden- Rein Tammsaar
S8
TRUST AND ATTRIBUTION IN CYBERSPACE: — A former Ambassador of Switzerland, is a founder and President of the ICT4Peace Foundation, which since 2003 explore…
S10
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — I mean, again, I won’t spend too much time, but there’s a lot of this information available online. All the startups tha…
S12
AI That Empowers Safety Growth and Social Inclusion in Action — Parvati Adani from Sero Amarchan Mangaldas provided a powerful concluding perspective that reframed the technical and po…
S13
Keynote-Jeet Adani — -Moderator: Role involves introducing speakers and facilitating the discussion. Areas of expertise, specific role detail…
S14
Open Forum #34 How Do Technical Standards Shape Connectivity and Inclusion — – **Alex Walden** – Global Head of Human Rights, Google Alex Walden, Global Head of Human Rights at Google, articulated…
S15
WS #42 Combating misinformation with Election Coalitions — – Alex Walden – Global Head of Human Rights for Google 5. Government pressure: Alex Walden, Global Head of Human Rights…
S16
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Alexandria Walden: Global Head of Human Rights, Google – Nikki Muscati: Audience member who asked questions (role/aff…
S18
Internet Human Rights: Mapping the UDHR to Cyberspace | IGF 2023 WS #85 — Peggy Hicks, Director of the Office of the UN High Commissioner for Refugees, participated in the session as a discussan…
S19
New Technologies and the Impact on Human Rights — – **Peggy Hicks** – Director of the UN High Commission for Human Rights, human rights expertise Anita Gurumurthy, Rodri…
S20
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — In a recent discussion on Internet Governance, Peggy Hicks emphasized the importance of diverse participation in confere…
S21
Embedding Human Rights in AI Standards: From Principles to Practice — – **Peggy Hicks** – Director of Thematic Engagement at the Office of the UN High Commissioner for Human Rights Ernst No…
S22
https://dig.watch/event/india-ai-impact-summit-2026/ai-that-empowers-safety-growth-and-social-inclusion-in-action-2 — Hector, you’ve talked a little bit about how you look at it from an internal perspective. But we wanted to hear a bit of…
S23
What Proliferation of Artificial Intelligence Means for Information Integrity? — – **Peggy Hicks** – Director of the Thematic Engagement, Special Procedures and Rights to Development Division at the UN…
S24
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Peggy Hicks- UN High Commissioner for Human Rights
S25
Press Conference: Closing the AI Access Gap — The goal is to move from a narrative to action, where concrete steps are taken in both the policy side and the private s…
S26
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — This readiness is crucial for fostering peace, establishing justice, and ensuring the development of robust global insti…
S27
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Legal and regulatory | Development | Human rights Multi-stakeholder Collaboration and Policy Harmonization
S28
AI Governance Dialogue: Presidential address — – H.E. Mr. Alar Karis Human rights | Legal and regulatory | Development Importance of global cooperation and coordinat…
S29
Global AI Governance: Reimagining IGF’s Role & Impact — Human rights principles | Capacity development | Interdisciplinary approaches
S30
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier techn…
S31
Advancing digital inclusion and human-rights:ROAM-X approach | IGF 2023 — Alain Kiyindou:Thank you, Patrick. I am going to share my views based on the current out in the Benin, Niger, Ivory Coas…
S32
CLOSING CEREMONY | IGF 2023 — Cedric Thomas Frolick:Program Director, Excellencies, Honorable Members of Parliament, the large number of youth, women,…
S33
From principles to practice: Governing advanced AI in action — ## Industry Implementation Challenges ## Key Recommendations ## Ongoing Challenges – Ensuring inclusive governance th…
S34
AI Governance Dialogue: Steering the future of AI — The discussion aims to advocate for comprehensive, inclusive AI governance that ensures the benefits of AI are shared gl…
S35
AI for Good – food and agriculture — Dongyu Qu advocates for responsible and ethical AI development that respects human dignity and serves both humanity and …
S36
How to make AI governance fit for purpose? — – Jennifer Bachus- Anne Bouverot- Shan Zhongde- Chuen Hong Lew Given that AI technologies are inherently global, effect…
S37
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S38
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S39
DC-SIG & DC-IUI: Schools of IG and the Internet Universality Indicators — The speaker mentioned that UNESCO accompanies the research team and the country at every step of the assessment process….
S40
Futuring Peace in Northeast Asia in the Digital Era | IGF 2023 Open Forum #169 — In today’s globalised world, no single country can manufacture a product independently. Academic programs promoting coop…
S41
Responsible AI in India Leadership Ethics & Global Impact — And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. E…
S42
Responsible AI in India Leadership Ethics & Global Impact part1_2 — But. How it is actualized? and let me say how it’s translated into our products. And by the way, it’s in our products, i…
S43
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S44
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Investors can directly engage companies to improve their policies, practices, and governance. Investors have the potent…
S45
Harnessing Digitalisation for Greener Supply Chains in LDCs — Lastly, good governance is emphasised as a crucial element in policy implementation. In the Pentagon strategy, good gove…
S46
Enhancing CSO participation in global digital policy processes: Roles, structures, and accountability — Accessibility and inclusivity are recognized as areas with room for improvement The analysis highlights deep-seated cha…
S47
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It hi…
S48
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — The diversity of civil society and the global majority, including different languages and cultural norms, should be cons…
S49
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S50
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S51
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The conversation highlighted the need for changing incentive structures to support assurance adoption, including explori…
S52
Safe and Responsible AI at Scale Practical Pathways — “right which is can i share the data so i’ll focus on the i the incentive there has to be an incentive for someone to br…
S53
AI Governance Dialogue: Steering the future of AI — This metaphor became a central organizing principle for the discussion, leading directly into the introduction of the th…
S54
Global AI Governance: Reimagining IGF’s Role & Impact — A young researcher from Hong Kong, representing both PNAI and Asia Pacific Policy Observatory, sought advice on navigati…
S55
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S56
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S57
Inclusive AI_ Why Linguistic Diversity Matters — “Democratizing use of AI and ultimately making AI work for all”[6]. “It’s hackable, it’s privacy preserving, it’s multil…
S58
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S59
Safeguarding Children with Responsible AI — Cultural, contextual, and inclusion considerations
S60
Main Session on Artificial Intelligence | IGF 2023 — Reference to work on voluntary commitments The US government has made voluntary commitments in key areas like transpare…
S61
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, the discussions on AI regulation are centered around the need to regulate AI in consideration of its appl…
S62
New Technologies and the Impact on Human Rights — However, this corporate perspective faced significant challenge from civil society representatives who argued that volun…
S63
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — Effective governance requires different layers from core regulatory frameworks to voluntary commitments, as some aspects…
S64
Part 5: Rethinking legal governance in the metaverse — As negotiations progressed, however, it became clear that member states varied in their readiness to commit to such ambi…
S65
NATIONAL CYBER SECURITY FRAMEWORK MANUAL — Commitments may appear to be legal or political, voluntary or mandatory, but they usually have effects that extend outsi…
S66
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S67
A Digital Future for All (afternoon sessions) — AI governance requires a multi-stakeholder approach due to the diverse nature of opportunities, risks, and inclusivity c…
S68
WS #362 Incorporating Human Rights in AI Risk Management — Multi-stakeholder engagement between companies, civil society, academia, and governments is essential for cross-learning…
S69
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Different governments and countries are adopting varied approaches to AI governance. The transition from policy to pract…
S70
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The discussion maintained a collaborative and constructive tone throughout, with panelists generally agreeing on core pr…
S71
Agentic AI in Focus Opportunities Risks and Governance — The discussion maintained a professional, collaborative tone throughout, with industry representatives positioning thems…
S72
Laying the foundations for AI governance — This disagreement is unexpected because it reveals fundamentally different views of industry motivation. Papandreou pres…
S73
WS #123 Responsible AI in Security Governance Risks and Innovation — Jingjie He: So I think the inclusive engagement across stakeholders is essential for the effective global governance of …
S74
How to make AI governance fit for purpose? — – Jennifer Bachus- Anne Bouverot- Shan Zhongde- Chuen Hong Lew Given that AI technologies are inherently global, effect…
S75
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically high…
S76
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S77
Opening address of the co-chairs of the AI Governance Dialogue — 3. Establishing international technical standards that allow policy and regulation to remain flexible and agile Tomas L…
S78
Empowering Civil Servants for Digital Transformation | IGF 2023 Open Forum #60 — They have an agreement with UNESCO focusing on capacity building on the topic of artificial intelligence. UNESCO has be…
S79
WS #110 AI Innovation Responsible Development Ethical Imperatives — Capacity development | Development Godoi states that capacity building is the first demand UNESCO receives from member …
S80
AI That Empowers Safety Growth and Social Inclusion in Action — “And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in par…
S81
DC-SIG & DC-IUI: Schools of IG and the Internet Universality Indicators — Major Discussion Point 2: Challenges and Opportunities in Implementing IUIs The speaker mentioned that UNESCO accompani…
S82
IGF Parliamentary track – Session 2 — 6. Capacity Building and Education Shuaib Afolabi Salisu: Thank you so much. Let me start on a note of appreciation to…
S83
Responsible AI in India Leadership Ethics & Global Impact — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S84
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S85
Industry leaders partner to promote responsible AI development — Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies,have joined to establishtheFrontier …
S86
Leading tech companies commit to responsible development of AI at Seoul AI Summit — At an AI Seoul Summit 2024 meeting on Tuesday, sixteen companies leading the charge in artificial intelligence (AI) deve…
S87
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — The speakers refer to tech companies breaking laws as long as the gains outweigh the sanctions. The argument is made tha…
S88
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Investors can directly engage companies to improve their policies, practices, and governance. Investors have the potent…
S89
Evolving Threat of Poor Governance / DAVOS 2025 — Incentivizing Good Governance Tuggar shared an anecdote about losing his passport to illustrate how incentivizing good …
S90
Building Trust through Transparency — Digital tools can be utilized to disclose public purchases, tenders, and the entire decision-making process within gover…
S91
Multistakeholder digital governance beyond 2025 — Language barriers and cultural diversity must be addressed for inclusive participation
S92
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It hi…
S93
Internet standards and human rights | IGF 2023 WS #460 — In conclusion, standards have a significant impact on our lives and require an inclusive and diverse approach. Addressin…
S94
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S95
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S96
The role of standards in shaping an AI-driven future — The tone is consistently formal, authoritative, and optimistic throughout. The speaker maintains a confident and promoti…
S97
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S98
Thinking through Augmentation — In conclusion, the analysis highlights common concerns raised by Lacqua and Azhar. These include the potential for techn…
S99
IGF Daily Brief 4 — Currently more than100 ethical AI frameworksexist, but they remain voluntary and are not sanctioned. So what concrete me…
S100
Agenda item 5 : Day 4 Afternoon session — Japan:Thank you, Mr. Chair. Japan believes that capacity building is essential for maintaining peace and stability and p…
S101
Harmonizing High-Tech: The role of AI standards as an implementation tool — Sezio Onoe:Thank you, Philippe. Good afternoon, everyone. I can talk within two minutes. Actually, my belief that standa…
S102
Main Topic 3 –  Identification of AI generated content — Aldan Creo:Great. Hello. How are you, everyone? Well, it’s a pleasure to be able to have this session. I hope we’ll make…
S103
Open Forum #48 Implementation of the Global Digital Compact — The discussion maintained a constructive and collaborative tone throughout, with speakers demonstrating both urgency abo…
S104
Open Forum #47 Demystifying WSis+20 — Success will depend on balancing celebration of concrete achievements with honest acknowledgment of persistent gaps, par…
S105
High-Level Track Facilitators Summary and Certificates — These key comments transformed what could have been a routine closing ceremony into a substantive reflection on the fund…
S106
Multigenerational Collaboration: Rethinking Work, Learning and Inclusion in the Digital Age — The discussion maintained a professional yet urgent tone throughout, with speakers expressing both optimism about collab…
S107
Closing Ceremony — Olaf Kolkman: Thank you. It’s a little bit closer to my mouth. Excellencies, distinguished delegates, my name is Olof …
S108
Closing Session  — Wrottesley emphasized that the momentum generated at the summit must continue beyond the event itself, requiring long-te…
S109
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S110
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S111
Closing Ceremony — Multiple speakers addressed the transformative challenges posed by artificial intelligence and the need for new approach…
S112
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — Avendis Consulting: I thank my esteemed co-chair. We will continue with our list of speakers. I now give the floor to …
S113
Technology and Human Rights Due Diligence at the UN | IGF 2023 Open Forum #163 — Peggy Hicks:Great. Scott will stay online for, for interpretation of, of all of that, which some who are maybe not as de…
S114
Opening of the session — South Africa:Our comments will focus on sections A and B. The overview section of the report is well written and succinc…
S115
Intelligent Society Governance Based on Experimentalism | IGF 2023 Open Forum #30 — She highlighted the need for AI systems to be inclusive of diverse voices and ensure that they respond to the needs and …
S116
AI for food systems — Pieternel Boogaard references Stephen Hawking’s perspective on AI to emphasize the dual potential of artificial intellig…
S117
AI governance debated at IGF 2025: Global cooperation meets local needs — At theInternet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S118
Open Forum #26 High-level review of AI governance from Inter-governmental P — 1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks. Andy…
S119
WS #133 Better products and policies through stakeholder engagement — Richard Wingfield: you you you you you you you and rights and lead our work with technology companies on how t…
S120
Child online safety: Industry engagement and regulation | IGF 2023 Open Forum #58 — Dunstan Allison-Hope:for the invitation to speak. Much appreciated. I’d love an invitation to Ghana as well, if that’s f…
S121
Scramble for Internet: you snooze, you lose | IGF 2023 WS #496 — The private sector must invest in the appropriate technological capabilities to prevent infrastructure compromise. Poorl…
S122
Trade Doublespeak: Could Digital Trade Non-Discrimination Rules Undermine Competition Policy and Other Forms of Digital Governance? ( Rethink Trade) — Advocating for a level playing field is crucial. It is believed that a fair and competitive environment will foster inno…
S123
WS #395 Applying International Law Principles in the Digital Space — Francisco Brito Cruz: Thank you. I hope you are all listening to me. Hello from Sao Paulo. I’m wanting to be with all of…
S124
The potential of technical standards to either strengthen or undermine human rights and fundamental freedoms in case of artificial intelligence systems and other emerging technologies — Niki Masghati:Gurshabad, thank you so much. You know, hearing you highlight the three sort of areas that you’re looking …
S125
Unlocking Multistakeholder Cooperation within the UN System: Global Partnerships for Open Internet — Isabel Ebert:Many thanks, Raquel, and many thanks to the organizers for the invitation. I think we have already heard pl…
S126
What is it about AI that we need to regulate? — Key principles are emerging for the Global Dialogue’s implementation. InOpen Forum #30, Juha Heikkila emphasized that”An…
S127
Zero draft resolution for Scientific Panel on AI and Global Dialogue on AI Governance published — As part of the intergovernmental process dedicated to defining terms of reference and modalities for the Independent Int…
S128
From summer disillusionment to autumn clarity: Ten lessons for AI — The Global Dialogue will bring governments and other stakeholders together to share experiences, best practices, and ide…
S129
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — Rosanna Fanni: Thank you. Thank you very much. and also thanks for all my fellow panelists. I think a lot of things …
S130
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — ## Foundational Context: UNESCO’s Mission and Approach – **Dafna Feinholz** – Acting Director of the Division of Resear…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
P
Peggy Hicks
3 arguments · 169 words per minute · 2469 words · 876 seconds
Argument 1
Emphasized the need for global standards, collaborative public‑private solutions, and rights‑based approaches to ensure AI works for all people, not only advanced economies (Peggy Hicks)
EXPLANATION
Peggy stresses that addressing AI challenges requires worldwide standards, joint public‑private efforts, and a human‑rights framework so that AI benefits reach everyone, not just dominant platforms or wealthy nations. She links responsible governance, clear rules, and incentives to achieving this inclusive impact.
EVIDENCE
She introduces the session as focusing on global standards, collaborative public-private solutions, and rights-based approaches to enable responsible AI with real-world impact [2]. She highlights the need for practical safeguards that benefit all people, not only advanced economies [8], and calls for responsible and effective AI governance with clear rules for companies and governments [9]. She notes companies’ responsibility to respect human rights and the role of human-rights due diligence in corporate operations [10-11], and stresses that governments must create a level playing field and reward responsible companies [12-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Peggy’s call for worldwide standards and rights-based AI is documented in her briefing on embedding human rights in AI standards [S21] and reinforced in the AI Impact Summit where she highlighted inclusive AI development [S24]; press-conference remarks also stress public-private partnerships and global cooperation [S25][S26].
MAJOR DISCUSSION POINT
Need for global standards and inclusive AI governance
Argument 2
Stressed the importance of creating market incentives and a level playing field so that companies that act responsibly are rewarded, referencing the BTEC project’s work on incentives (Peggy Hicks)
EXPLANATION
Peggy argues that incentives and a fair competitive environment are essential to encourage companies to adopt responsible AI practices, and that rewarding responsible behavior will drive wider adoption. She points to the BTEC project as a mechanism to facilitate this conversation and promote good practices.
EVIDENCE
She states that incentives for companies should be in place so that those engaging responsibly are rewarded [13] and mentions the BTEC project at OHCHR aimed at making this conversation happen and sharing good practices through convenings like this one [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The BTEC project’s role in shaping market incentives is described in the AI That Empowers Safety Growth and Social Inclusion report featuring Peggy Hicks [S3]; the “Closing the AI Access Gap” press conference also emphasizes the need for incentives and a level playing field for responsible firms [S25].
MAJOR DISCUSSION POINT
Market incentives for responsible AI
Argument 3
Called for continuous, programmatic engagement with civil society, academia, and affected communities to ensure inclusive AI development (Peggy Hicks)
EXPLANATION
Peggy emphasizes that ongoing, structured engagement with a broad range of stakeholders is crucial to make AI development inclusive and to incorporate diverse perspectives, especially from civil society and vulnerable groups. She highlights the challenges of hand‑wringing and the need to overcome obstacles to embed human‑rights considerations.
EVIDENCE
She describes the difficulty of convincing people of the value of safety and human-rights work, noting pressures faced by human-rights leads and the need to surmount challenges [144-148], and calls for continuous programmatic engagement with civil society, academia, and communities to ensure inclusive AI development [144-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multi-stakeholder collaboration is highlighted in the Africa AI Readiness workshop which stresses ongoing engagement with civil society and academia [S27]; the AI Governance Dialogue presidential address underlines the importance of coordinated global cooperation and continuous stakeholder dialogue [S28].
MAJOR DISCUSSION POINT
Multi‑stakeholder engagement for inclusive AI
AGREED WITH
Alex Walden, Hector Duroir, Yuchil Kim, Parvati Adani, Namit Agarwal
Tim Curtis
1 argument · 158 words per minute · 740 words · 280 seconds
Argument 1
Highlighted UNESCO’s trust‑by‑design principle, RAMS readiness assessments, and the launch of a massive open online course (MOOC) on AI ethics to translate global agreements into local practice (Tim Curtis)
EXPLANATION
Tim explains that UNESCO’s recommendation on AI ethics promotes trust through design choices, safeguards, and accountability. He notes that UNESCO is operationalising this via RAMS readiness assessments in many countries and a new MOOC on Coursera to make AI ethics education widely accessible.
EVIDENCE
He states that trust is earned through design choices, safeguards and accountability, which is why the UNESCO recommendation on AI ethics is important [32]. He describes the RAMS (Readiness Assessment Methodology Reports) launched in over 80 countries, including a recent India assessment [33]. He announces a global MOOC on AI ethics to be delivered on Coursera, aiming to make ethics learning accessible and practical for day-to-day work [37-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tim’s description of the UNESCO MOOC and ethics-by-design approach is detailed in the AI That Empowers Safety Growth and Social Inclusion report on the Coursera course [S3]; UNESCO’s AI Ethics Recommendation and RAMS methodology are referenced in the IGF 2023 session on generative AI systems [S30].
MAJOR DISCUSSION POINT
UNESCO tools for operationalising AI ethics
AGREED WITH
Peggy Hicks, Rein Tammsaar, Alex Walden, Hector Duroir, Namit Agarwal
Rein Tammsaar
1 argument · 126 words per minute · 576 words · 273 seconds
Argument 1
Outlined the UN Global Dialogue on AI Governance priorities: safe and trustworthy AI, closing capacity gaps, cross‑border interoperable governance, and anchoring AI in human rights and international law (Rein Tammsaar)
EXPLANATION
Rein presents the four core priorities of the UN‑mandated Global Dialogue on AI Governance: ensuring AI systems are safe and trustworthy; addressing capacity gaps in developing countries; creating interoperable, cross‑border governance; and grounding AI in human‑rights law. These priorities guide the multilateral platform for sharing best practices.
EVIDENCE
He lists the four priorities: safe, secure, trustworthy AI systems [67]; closing capacity gaps for developing countries [68-69]; governance approaches that work across borders and are practical, emphasizing interoperability [70-73]; and anchoring AI in human rights and international law, protecting vulnerable groups and ensuring accountability [74-75].
MAJOR DISCUSSION POINT
Key priorities of the UN Global AI Dialogue
Yuchil Kim
2 arguments · 146 words per minute · 272 words · 111 seconds
Argument 1
Described LG’s contribution to the UNESCO MOOC, its AI‑powered data‑compliance system, and the publication of an annual AI accountability report to promote transparency (Yuchil Kim)
EXPLANATION
Yuchil explains that LG is helping bridge the gap between AI ethics theory and daily practice by contributing to the UNESCO MOOC, deploying an AI‑powered data‑compliance platform, and issuing an annual accountability report that documents its responsible‑AI activities.
EVIDENCE
He notes that the MOOC targets practitioners struggling to apply ethics in daily work and that LG provides risk standards and its own AI-powered data-compliance system, while also publishing an annual AI accountability report, the third edition released recently [209-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
LG’s involvement in the UNESCO MOOC and its annual AI accountability reporting are mentioned in the AI That Empowers Safety Growth and Social Inclusion briefing on LG’s best-practice sharing [S3]; a follow-up comment confirms the importance of the annual report for sharing successes and challenges [S22].
MAJOR DISCUSSION POINT
LG’s practical tools and reporting for AI ethics
Argument 2
Emphasized LG’s practice of sharing best practices and struggles through its annual report, invoking the proverb “If you want to go fast, go alone; if you want to go far, go together” (Yuchil Kim)
EXPLANATION
Yuchil stresses that LG believes collaboration is essential for building a trustworthy AI ecosystem, and that publishing annual reports helps disseminate both successes and challenges, fostering collective progress rather than isolated efforts.
EVIDENCE
He affirms the importance of sharing best practices and struggles via the annual report [290-293], cites the African proverb about collaboration [294-296], and describes building a trustworthy ecosystem as a long journey requiring joint effort [297].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same AI That Empowers Safety Growth and Social Inclusion source highlights LG’s commitment to collective progress through its annual report and the proverb about collaboration [S3][S22].
MAJOR DISCUSSION POINT
Collaboration and transparency in AI governance
Parvati Adani
2 arguments · 133 words per minute · 564 words · 253 seconds
Argument 1
Reflected on the philosophical limits of AI tools and argued that frameworks must explicitly address language, gender, and cultural inclusion to avoid being “incomplete by design” (Parvati Adani)
EXPLANATION
Parvati points out that AI systems lack consciousness and cannot self‑regulate ethical limits, so governance frameworks must deliberately incorporate considerations of language, gender, and cultural context to prevent systemic exclusion and ensure truly inclusive AI.
EVIDENCE
She describes asking an AI tool about its ethical limits and receiving a non-committal answer, highlighting the philosophical gap [322-329]; she then argues that frameworks must address language, gender, and cultural inclusion, otherwise they are “incomplete by design” [336-338].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The quote about frameworks being “incomplete by design” appears in the AI That Empowers Safety Growth and Social Inclusion document [S3]; discussions on gender, language, and cultural inclusion in digital rights are further elaborated in the advancing digital inclusion briefing [S31]; inclusive governance recommendations are echoed in the principles-to-practice report [S33].
MAJOR DISCUSSION POINT
Need for inclusive AI frameworks addressing cultural and gender dimensions
Argument 2
Concluded that without concrete actions—beyond good intentions—AI governance will remain ineffective, urging all stakeholders to move from ideas to implementation (Parvati Adani)
EXPLANATION
Parvati calls for translating the ambition and existing infrastructure into real actions, warning that merely having good intentions and voluntary commitments is insufficient. She stresses that effective AI governance requires tangible steps and accountability.
EVIDENCE
She praises voluntary commitments and the existing infrastructure but stresses the need to ensure action rather than just ideas [340-345], emphasizing that the ambition must be turned into concrete implementation.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Governance Dialogue summary calls for coordinated, concrete governance mechanisms rather than mere declarations [S34]; the “Closing the AI Access Gap” press conference stresses moving from narrative to actionable steps [S25].
MAJOR DISCUSSION POINT
From commitments to concrete AI governance actions
Alex Walden
2 arguments · 184 words per minute · 1023 words · 332 seconds
Argument 1
Explained Google’s values‑driven governance model, use of UN Guiding Principles, AI principles, dedicated training teams, model‑level requirements, executive review, and post‑launch monitoring (Alex Walden)
EXPLANATION
Alex outlines that Google’s AI governance starts with corporate values and a commitment to UN Guiding Principles, reinforced by internal AI principles. He describes concrete mechanisms such as model‑level safety requirements, executive risk reviews, and ongoing post‑launch monitoring to operationalise responsible AI.
EVIDENCE
He notes Google’s founding values of freedom of expression and privacy, and a corporate policy committing to UN Guiding Principles on business and human rights [130-136]; he lists the use of UN principles, OECD, UNESCO guidance, and BTEC engagement to inform internal processes [138-141]; he mentions training programs and dedicated teams to operationalise these policies [142-143]; he details model-level requirements, application-layer testing, executive review before launch, and post-launch monitoring [154-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s multi-level governance, model-level safety checks, and executive review processes are described in the AI That Empowers Safety Growth and Social Inclusion overview of Google’s governance structure [S3]; the company’s commitment to the UN Guiding Principles on Business and Human Rights is documented in the New Technologies and Human Rights briefing [S19].
MAJOR DISCUSSION POINT
Google’s internal AI governance structure
Argument 2
Described Google’s trusted‑tester programs, Impact Lab research, and the open‑source Amplify Initiative that lets communities fine‑tune language models (Alex Walden)
EXPLANATION
Alex highlights Google’s programmatic stakeholder engagement, including trusted‑tester programs that give external partners early access to test models, the Impact Lab that conducts research with communities, and the Amplify Initiative, an open‑source app enabling public participation in language model fine‑tuning.
EVIDENCE
He states that Google has a programmatic approach to stakeholder engagement and ad-hoc processes for product-specific consultation [302-304]; he describes trusted-tester programs that provide pre-launch access to third-party testers [305-306]; he mentions the Impact Lab’s community research and the Amplify Initiative, an open-source app for language inclusion [307-310].
MAJOR DISCUSSION POINT
Google’s tools for external stakeholder participation
Hector Duroir
2 arguments · 150 words per minute · 891 words · 356 seconds
Argument 1
Detailed Microsoft’s Office of Responsible AI, the Sensitive Use Case program, the ITER ethics committee, and alignment with OECD and UNESCO principles to operationalise high‑level ethics (Hector Duroir)
EXPLANATION
Hector explains that Microsoft created an Office of Responsible AI in 2019, built around high‑level principles such as privacy and fairness. He describes the Sensitive Use Case program that triages risky applications and escalates them to the ITER ethics committee, while aligning with OECD AI principles and UNESCO recommendations.
EVIDENCE
He recounts that Microsoft forged AI principles around privacy, reliability, inclusion, fairness, safety, and security in 2018 [175-176]; the Office of Responsible AI was created in 2019 to translate these principles into practice [177-178]; the Sensitive Use Case program analyses risky use cases and brings them to the ITER committee, which includes senior leadership [179-182]; he notes that the work is informed by OECD AI principles and UNESCO recommendations [184-185].
MAJOR DISCUSSION POINT
Microsoft’s internal responsible‑AI framework
Argument 2
Cited Microsoft’s collaboration with NGOs on community‑led benchmarks (e.g., the Samishka project) to build safety tools that respect local languages and cultural contexts (Hector Duroir)
EXPLANATION
Hector describes how Microsoft works with NGOs in India on the Samishka project to develop community‑led benchmarks, creating safety tools that incorporate local language and cultural nuances, thereby extending AI safety beyond English‑centric models.
EVIDENCE
He mentions involving NGOs in the Samishka project to build community-led benchmarks that feed safety tools with culturally specific data, addressing the limitation of translating safety tools from English to other languages [276-285].
MAJOR DISCUSSION POINT
NGO partnership for multilingual AI safety
Ankit Bose
2 arguments · 179 words per minute · 758 words · 253 seconds
Argument 1
Described NASSCOM’s mission to build capacity, develop open assets, and support companies of all sizes—highlighting the particular challenges startups face in balancing growth with governance (Ankit Bose)
EXPLANATION
Ankit outlines NASSCOM’s four‑decade history of shaping India’s tech agenda, focusing since 2021 on responsible AI. He emphasizes capacity‑building, open‑source assets, and assistance to governments, SMEs, and startups, noting that startups often deprioritise governance due to resource constraints.
EVIDENCE
He notes NASSCOM’s long history and its 2021 mission to address a gap in responsible, trustworthy AI [92-98]; he describes its core objectives of developing open assets, building capacity, and supporting the whole ecosystem from government to startups [99-102]; he highlights that startups must juggle building a business, team, and funding, often placing governance on the “side-burner” [119-124].
MAJOR DISCUSSION POINT
NASSCOM’s capacity‑building role and startup challenges
Argument 2
Highlighted NASSCOM’s observation that internal silos hinder responsible AI and advocated for cross‑functional collaboration and actionable guidance across frameworks (Ankit Bose)
EXPLANATION
Ankit points out that different internal groups (tech, business, risk, finance) often work in silos, impeding responsible AI implementation. He calls for collaborative, use‑case‑driven approaches and clearer, actionable guidance to move beyond the proliferation of frameworks.
EVIDENCE
He describes internal silos among tech, business, legal-risk, and finance groups, each with differing priorities [250-254]; he suggests building cross-functional collaboration on a use-case basis, prioritising high-impact cases [255-257]; he notes the proliferation of frameworks that are concept-heavy but lack actionable steps, leaving developers confused [258-266]; he advocates a multi-organisation approach to discuss and implement solutions [267-270].
MAJOR DISCUSSION POINT
Breaking internal silos for responsible AI
Namit Agarwal
1 argument · 175 words per minute · 681 words · 233 seconds
Argument 1
Presented the World Benchmarking Alliance’s assessment of 2,000 tech firms, revealing low compliance with AI governance and human‑rights impact assessments, and called for investor‑driven board oversight, incentive alignment, and robust impact assessments (Namit Agarwal)
EXPLANATION
Namit reports that the WBA’s latest assessment of 2,000 companies shows that while a sizeable share disclose AI principles, far fewer meet governance expectations and none disclose human-rights impact assessments. He argues that investors must demand board-level AI risk responsibility, align executive incentives with long-term risk mitigation, and require concrete product-level implementation and impact assessments.
EVIDENCE
He states that about 40 % of assessed companies disclose AI principles but only just over 10 % meet global governance expectations, and none disclose human-rights impact assessments [227-228]; he calls for investors to ask about board-level AI risk responsibility, executive incentive alignment, and full-value-chain governance [236-238]; he stresses the need for product-level translation of ethical principles, identification of high-risk use cases, and internal controls [238-240]; he highlights gaps in robust human-rights impact assessments and mitigation integration [241-242].
MAJOR DISCUSSION POINT
Investor role in driving AI governance
Agreements
Agreement Points
Global standards and international frameworks are essential for responsible AI governance.
Speakers: Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir, Namit Agarwal
Emphasized the need for global standards, collaborative public-private solutions, and rights-based approaches to ensure AI works for all people (Peggy Hicks)
Highlighted UNESCO’s trust-by-design principle, RAMS readiness assessments, and the launch of a massive open online course (MOOC) on AI ethics to translate global agreements into local practice (Tim Curtis)
Outlined the UN Global Dialogue on AI Governance priorities, anchoring AI in human rights and international law (Rein Tammsaar)
Described Google’s use of UN Guiding Principles, OECD and UNESCO guidance to inform internal AI governance (Alex Walden)
Cited alignment with OECD AI principles and UNESCO recommendations in Microsoft’s responsible AI program (Hector Duroir)
Referred to the World Benchmarking Alliance’s assessment framework that uses UN Guiding Principles on business and human rights (Namit Agarwal)
All speakers underscored that coherent, globally agreed standards, such as the UN Guiding Principles, the UNESCO recommendation and the OECD principles, are the foundation for trustworthy, rights-respecting AI and must be operationalised across sectors and regions [2][8][9][10-13][32][33][34][37-44][67-74][138-141][184-185][224-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Emphasizes that global standards are a cornerstone of AI governance, reflected in calls for coordinated international frameworks such as those discussed at the IGF and UNESCO/EU initiatives [S53][S54][S55].
Multi‑stakeholder engagement (civil society, academia, NGOs, governments) is crucial for inclusive AI development.
Speakers: Peggy Hicks, Alex Walden, Hector Duroir, Yuchil Kim, Parvati Adani, Namit Agarwal
Called for continuous, programmatic engagement with civil society, academia, and affected communities to ensure inclusive AI development (Peggy Hicks)
Described Google’s programmatic stakeholder engagement, trusted-tester programmes and the Impact Lab’s community research (Alex Walden)
Explained Microsoft’s inclusion of NGOs and academia in risk-management processes and community-led benchmarks (Hector Duroir)
Emphasised sharing best practices and struggles through LG’s annual AI accountability report to foster collaboration (Yuchil Kim)
Stressed that frameworks must explicitly address language, gender and cultural inclusion, requiring broad stakeholder input (Parvati Adani)
Highlighted the importance of ongoing dialogue and engagement with a wide range of actors as a core WBA practice (Namit Agarwal)
A broad consensus emerged that structured, ongoing engagement with diverse stakeholders, including NGOs, academia, civil society and affected communities, is essential to translate standards into practice and to avoid siloed approaches [144-148][242-249][302-306][307-310][276-282][283-285][290-296][322-329][336-338][236-238].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder engagement is repeatedly highlighted as essential for trust and inclusive AI, e.g., in IGF discussions and policy guides stressing inclusion of civil society, academia, NGOs and governments [S50][S55][S67][S68][S69].
Market incentives and financial mechanisms are needed to reward responsible AI practices.
Speakers: Peggy Hicks, Namit Agarwal
Stressed the importance of creating incentives and a level playing field so responsible companies are rewarded (Peggy Hicks)
Argued that investors can provide catalytic incentives, board oversight and alignment of executive incentives to drive responsible innovation (Namit Agarwal)
Both speakers agreed that without clear financial incentives and investor-driven governance, responsible AI adoption will lag; rewarding good practice is key to scaling impact [13][14-16][226-232].
POLICY CONTEXT (KNOWLEDGE BASE)
Market-level incentives and financial mechanisms are advocated by investors and policy papers, calling for board-level AI risk responsibility and insurance-based assurance to reward responsible practices [S49][S51][S52].
Capacity building and closing capacity gaps, especially for developing countries, are essential.
Speakers: Tim Curtis, Rein Tammsaar, Yuchil Kim, Namit Agarwal
Described UNESCO’s RAMS assessments in over 80 countries to provide evidence-based diagnostics (Tim Curtis)
Highlighted the need to close capacity gaps for developing nations to participate fully in the AI economy (Rein Tammsaar)
Noted the MOOC and annual report as tools to make AI ethics accessible and build capacity (Yuchil Kim)
Mentioned that capacity gaps are a priority in the UN Global Dialogue and WBA work (Namit Agarwal)
All four speakers emphasized that building technical and policy capacity, through assessments, training courses and targeted support, is a prerequisite for equitable AI deployment [33-35][68-69][209-213][226-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity building, particularly for developing nations, is identified as a pillar in AI governance dialogues, with emphasis on context-based analysis and leveraging existing toolkits to close gaps [S53][S56][S54].
Robust internal governance structures (model‑level requirements, executive oversight, post‑launch monitoring) are needed to operationalise responsible AI.
Speakers: Alex Walden, Hector Duroir
Outlined Google’s model-level safety requirements, executive risk review and post-launch monitoring (Alex Walden)
Described Microsoft’s Office of Responsible AI, Sensitive Use Case program and ITER ethics committee for internal risk management (Hector Duroir)
Both corporate representatives concurred that responsible AI must be embedded in concrete internal processes, from technical checks to senior-level governance and ongoing monitoring [154-162][177-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Robust internal governance, including model-level requirements, executive oversight and post-launch monitoring, is supported by recommendations for board responsibility and operationalising policies into practice [S49][S56][S51].
Similar Viewpoints
Both recognise that senior‑level leadership and executive accountability are critical levers to embed human‑rights considerations within AI product development [148-149][158-160].
Speakers: Peggy Hicks, Alex Walden
Peggy highlighted the pressure on human-rights leads and the need for executive support (Peggy Hicks)
Alex described executive review of AI risks before launch (Alex Walden)
Both see the translation of global standards into accessible, practitioner‑focused learning tools as essential for widespread adoption [37-44][209-213].
Speakers: Tim Curtis, Yuchil Kim
Tim announced a UNESCO-backed MOOC to make AI ethics learning practical for day-to-day work (Tim Curtis)
Yuchil described LG’s contribution to the MOOC and its annual report to bridge theory-practice gaps (Yuchil Kim)
Both stress that human‑rights considerations are non‑negotiable foundations for AI policy and must be concretely embedded in governance frameworks [75-77][334-345].
Speakers: Rein Tammsaar, Parvati Adani
Rein asserted that human rights are not optional and must anchor AI governance (Rein Tammsaar)
Parvati emphasized that frameworks must explicitly address human-rights dimensions such as language, gender and cultural inclusion (Parvati Adani)
Both companies employ structured programmes that bring external experts and civil‑society actors into the AI development lifecycle to improve safety and inclusivity [276-282][302-306][307-310].
Speakers: Hector Duroir, Alex Walden
Hector described Microsoft’s collaboration with NGOs and academia for risk assessment (Hector Duroir)
Alex detailed Google’s trusted-tester programmes, Impact Lab and open-source Amplify Initiative for external stakeholder participation (Alex Walden)
Unexpected Consensus
Inclusion of language and cultural diversity in AI safety tools.
Speakers: Hector Duroir, Alex Walden, Parvati Adani
Hector highlighted the Samishka project building community-led benchmarks for multilingual safety tools (Hector Duroir)
Alex mentioned the Amplify Initiative enabling public participation in fine-tuning language models (Alex Walden)
Parvati argued that frameworks that ignore language, gender and cultural context are “incomplete by design” (Parvati Adani)
While corporate speakers often focus on technical safeguards, both Microsoft and Google explicitly referenced programmes addressing multilingual and cultural nuances, aligning with the civil-society concerns raised by Parvati, an unexpected convergence on the importance of linguistic and cultural inclusion [198-200][307-310][336-338].
POLICY CONTEXT (KNOWLEDGE BASE)
Linguistic and cultural diversity in AI safety tools is underscored as vital for democratizing AI and ensuring inclusion across different contexts [S57][S59].
Overall Assessment

The panel displayed a strong consensus on four pillars: (1) the necessity of global, rights‑based standards; (2) the central role of multi‑stakeholder, programmatic engagement; (3) the need for financial incentives and investor oversight; and (4) the requirement for concrete internal governance mechanisms. Capacity building and attention to language/cultural inclusion were also widely endorsed, though implementation gaps remain.

High consensus – the convergence across government, UN agencies, civil‑society, investors and leading tech firms indicates a shared understanding that coordinated standards, incentives and inclusive processes are essential. This creates a solid basis for joint actions, but the discussion also highlighted practical challenges (e.g., fragmented frameworks, siloed internal structures) that must be addressed to translate agreement into effective governance.

Differences
Different Viewpoints
Effectiveness of existing AI governance frameworks versus the need for concrete, actionable tools
Speakers: Ankit Bose, Tim Curtis
There are a lot of frameworks … but from the framework heavy or the concept heavy to action is not happening. He notes that developers are lost in the framework and do not know what is actionable. (Ankit Bose) [258-266]
We’re translating this global agreement and framework into local realities … RAMS … launched in over 80 countries … and we announced a global MOOC on AI ethics to make ethics learning accessible and practical for day-to-day work. (Tim Curtis) [33-44]
Ankit argues that the proliferation of AI governance frameworks leaves practitioners confused and fails to provide actionable guidance, whereas Tim contends that UNESCO’s RAMS assessments and the new MOOC translate those frameworks into concrete tools for implementation. [258-266][33-44]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates highlight that existing AI governance frameworks are often too abstract, prompting calls for concrete, actionable tools and better implementation pathways [S53][S54][S56].
How to create incentives for responsible AI – market‑level incentives versus investor‑driven governance mechanisms
Speakers: Peggy Hicks, Namit Agarwal
We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. (Peggy Hicks) [13]
Capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. Investors must ask whether there is clear board-level responsibility on AI risk, whether executive incentives are aligned with long-term human-rights risk mitigation, and whether governance applies across the full AI value chain. (Namit Agarwal) [226-229][236-242]
Peggy emphasizes creating market incentives and a level playing field, referencing the BTEC project to reward responsible firms, while Namit stresses that investors need to enforce board-level oversight and align incentives, noting that capital alone is insufficient. [13][14-16][226-229][236-242]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between market-level incentives and investor-driven mechanisms is reflected in discussions on aligning investor incentives with long-term risk mitigation and exploring insurance models to promote responsible AI [S49][S51][S52].
Reliance on voluntary commitments versus the need for enforceable actions
Speakers: Hector Duroir, Parvati Adani
Voluntary commitments … helped us … to ground our model testing approach, especially against public safety and national security risks. (Hector Duroir) [188-191]
Voluntary commitments are fantastic, but we must ensure that we don’t leave with just good intentions and good ideas – we need concrete actions and accountability. (Parvati Adani) [340-345]
Hector views voluntary commitments as effective tools that have already informed Microsoft’s internal risk-management processes, whereas Parvati warns that without concrete implementation these commitments remain merely aspirational. [188-191][340-345]
POLICY CONTEXT (KNOWLEDGE BASE)
Voluntary commitments are contested; some stakeholders view them as insufficient and call for enforceable measures, while others cite them as useful interim steps, as seen in IGF sessions and civil-society critiques [S60][S61][S62][S63][S64][S65].
Unexpected Differences
Critique of AI governance frameworks by an industry body versus optimism from an intergovernmental organization
Speakers: Ankit Bose, Tim Curtis
Ankit says developers are lost in the proliferation of concept-heavy frameworks and need actionable guidance. (Ankit Bose) [258-266]
Tim says UNESCO is translating global agreements into practical tools like RAMS and a MOOC to make ethics actionable. (Tim Curtis) [33-44]
It is unexpected that a leading industry association (NASSCOM) would openly criticize the very frameworks that UNESCO promotes as the basis for its operational tools, revealing a tension between industry perception of framework overload and UN optimism about their practical translation. [258-266][33-44]
POLICY CONTEXT (KNOWLEDGE BASE)
Industry bodies often critique existing frameworks as burdensome, whereas intergovernmental organizations express optimism about collaborative regulation, illustrating divergent perspectives on AI governance [S70][S71][S72][S62].
Different views on the sufficiency of voluntary commitments as a governance tool
Speakers: Hector Duroir, Parvati Adani
Hector highlights voluntary commitments as concrete inputs that have already shaped Microsoft’s testing approach. (Hector Duroir) [188-191]
Parvati cautions that voluntary commitments are insufficient without concrete implementation and accountability. (Parvati Adani) [340-345]
While voluntary commitments are generally seen as a positive step, the contrast between Hector’s confidence in their practical impact and Parvati’s warning about their limited enforceability was not anticipated, indicating a split between internal corporate confidence and external civil-society expectations. [188-191][340-345]
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent views on the adequacy of voluntary commitments are evident, with some actors praising them as pragmatic and others arguing they lack binding power, reflected in multiple IGF discussions [S60][S61][S62].
Overall Assessment

The panel shows strong consensus on the overarching goals of inclusive, trustworthy AI and the need for multi‑stakeholder engagement. Disagreements are confined to implementation pathways – specifically the usefulness of existing frameworks, the design of incentive mechanisms, and the reliance on voluntary commitments versus enforceable actions. These divergences are moderate and revolve around practical translation rather than fundamental values.

Moderate disagreement: while all speakers share the same high‑level objectives, they differ on the most effective means to achieve them. This suggests that future work should focus on harmonising standards with clear, actionable tools, aligning market incentives with investor governance, and establishing mechanisms to move voluntary commitments into binding actions.

Partial Agreements
All speakers agree that ongoing multi‑stakeholder engagement is essential for responsible AI, but they differ on the mechanisms: Peggy stresses broad, continuous dialogue; Alex focuses on structured programs and ad‑hoc consultations; Hector highlights NGO‑led benchmark projects; Yuchil relies on annual reporting and collective learning. [144-149][302-307][308-310][276-285][290-296]
Speakers: Peggy Hicks, Alex Walden, Hector Duroir, Yuchil Kim
Peggy calls for continuous, programmatic engagement with civil society, academia and affected communities. (Peggy Hicks) [144-149] Alex describes a programmatic approach to stakeholder engagement, trusted-tester programs, and the Impact Lab’s community research. (Alex Walden) [302-307][308-310] Hector mentions involving NGOs in the Samishka project to build community-led benchmarks for safety tools. (Hector Duroir) [276-285] Yuchil talks about publishing an annual AI accountability report to share best practices and struggles, emphasizing collaboration. (Yuchil Kim) [290-296]
All three stress the importance of linguistic and cultural inclusion in AI governance, but they propose different pathways: Parvati calls for explicit framework design, Yuchil points to internal risk standards and reporting, while Hector emphasizes community‑led benchmarks with NGOs. [336-338][209-213][276-285]
Speakers: Parvati Adani, Yuchil Kim, Hector Duroir
Parvati argues that frameworks must explicitly address language, gender and cultural inclusion or they are ‘incomplete by design’. (Parvati Adani) [336-338] Yuchil notes that LG provides risk standards, an AI-powered data-compliance system and an annual accountability report, and mentions multilingual considerations in the MOOC. (Yuchil Kim) [209-213] Hector describes the Samishka project with NGOs to create safety tools that respect local languages and cultural contexts. (Hector Duroir) [276-285]
Takeaways
Key takeaways
Global, rights‑based standards and collaborative public‑private mechanisms are essential for AI to benefit all people, not just advanced economies.
UN bodies (UNESCO, OHCHR, UN Global Dialogue) are driving frameworks such as the AI Recommendations, RAMS readiness assessments, and a massive open online course (MOOC) to translate high‑level ethics into practical guidance.
The UN Global Dialogue on AI Governance prioritises safe and trustworthy AI, closing capacity gaps, interoperable cross‑border governance, and anchoring AI in human rights and international law.
Major tech firms are embedding responsible AI through values‑driven internal governance, model‑level requirements, executive oversight, post‑launch monitoring, and dedicated programs (Google’s AI Principles, Microsoft’s Office of Responsible AI and Sensitive Use Case program).
Industry associations (NASSCOM) focus on capacity‑building, open assets and supporting companies of all sizes, while highlighting the particular challenges faced by startups.
LG contributes by developing AI‑powered compliance tools, publishing annual accountability reports, and co‑creating the UNESCO MOOC.
Investors and benchmarking organisations (World Benchmarking Alliance) see a gap between stated principles and actual governance; they call for board‑level AI oversight, alignment of incentives, and robust human‑rights impact assessments.
Multi‑stakeholder engagement, including civil society, academia, NGOs, and affected communities, is critical for inclusive, culturally aware AI (e.g., community‑led benchmarks, trusted‑tester programs, open‑source initiatives).
Inclusion of language, gender and cultural contexts must be built into frameworks; otherwise AI systems remain “incomplete by design.”
Good intentions must be turned into concrete actions; voluntary commitments, continuous dialogue and shared best‑practice reporting are steps toward that goal.
Resolutions and action items
Launch the UNESCO‑LG MOOC on AI ethics via Coursera and promote global participation.
Proceed with the UN Global Dialogue on AI Governance scheduled for July in Geneva, inviting broad stakeholder input.
Continue the B-Tech project’s work on incentives and benchmarking to reward responsible AI practices.
Encourage companies to adopt board‑level AI risk oversight, align executive incentives with long‑term human‑rights risk mitigation, and publish AI‑specific impact assessments.
Support the development of community‑led safety benchmarks (e.g., Microsoft’s Samishka project) and integrate them into product development cycles.
Facilitate cross‑functional collaboration within firms (tech, business, legal, finance) to move from siloed frameworks to actionable governance.
Promote the use of trusted‑tester programs and open‑source tools (e.g., Google’s Amplify Initiative) for early external testing and language inclusion.
Invite investors, civil‑society groups and academia to engage continuously with companies, not only on ad‑hoc issues.
Publish and share annual AI accountability reports (as LG does) to disseminate best practices and challenges.
Unresolved issues
How to achieve practical harmonisation of the many emerging national and sectoral AI frameworks into a single, actionable set of guidelines for companies, especially SMEs and startups.
Concrete mechanisms for financing and delivering the capacity‑building needed in developing countries to close AI infrastructure and skills gaps.
Metrics and verification methods to assess the real‑world impact of the UNESCO MOOC and other capacity‑building initiatives.
Enforcement mechanisms or regulatory levers to ensure that voluntary commitments translate into binding obligations.
Standardised processes for systematic, ongoing engagement with civil society and affected communities across diverse linguistic and cultural contexts.
Clear pathways for investors to translate benchmarking data into effective market incentives without stifling innovation.
Suggested compromises
Adopt a flexible, non‑prescriptive approach in the UN Global Dialogue that seeks common ground rather than imposing a single governance model.
Combine voluntary industry commitments with public‑private incentive structures to reward responsible behavior while allowing innovation to continue.
Leverage existing standards (UN Guiding Principles, OECD AI Principles, UNESCO Recommendations) as building blocks rather than creating entirely new frameworks.
Balance regulatory oversight with industry‑led self‑assessment tools (e.g., B-Tech benchmarks, internal AI risk dashboards) to reduce fragmentation and lower compliance costs.
Encourage collaborative development of safety tools that respect local languages and cultural norms, sharing outcomes across companies to avoid duplicated effort.
Thought Provoking Comments
Trust is not something technology earns through ambition alone; rather, it is earned through design choices, safeguards and accountability.
Frames trust as a product of concrete design and governance rather than a by‑product of innovation, setting a clear ethical baseline for AI development.
Shifted the conversation from abstract principles to actionable design practices; prompted the introduction of the UNESCO MOOC as a tool to teach ‘ethics by design’, influencing subsequent speakers to discuss concrete training and capacity‑building measures.
Speaker: Tim Curtis
We have four member‑state priorities: safe, secure and trustworthy AI; closing capacity gaps; cross‑border governance and interoperability; and anchoring AI in human rights and international law.
Synthesises the diverse concerns of governments into a concise, actionable framework, highlighting both technical and normative dimensions of AI governance.
Provided a roadmap that guided later remarks about standards, capacity‑building, and the need for scalable solutions; it also prompted participants to align their corporate practices with these four pillars.
Speaker: Rein Tammsaar
We have model‑level requirements, application‑level guardrails, executive review before launch, and post‑launch monitoring to continuously assess risk.
Offers a concrete, multi‑layered governance architecture that demonstrates how a large tech company operationalises responsible AI, moving the discussion from theory to practice.
Inspired other panelists (e.g., Hector and Alex later) to describe their own internal processes and sparked a deeper dive into how companies translate principles into day‑to‑day product development.
Speaker: Alex Walden
Our Sensitive Use Case program triages high‑risk applications, escalates them to the Aether ethics committee that includes board‑level representation, and is informed by OECD and UNESCO principles.
Shows how Microsoft embeds external normative frameworks into an internal risk‑management pipeline, linking policy, research, and engineering.
Highlighted the importance of board‑level oversight and external standards, leading to further discussion on stakeholder engagement and the role of voluntary commitments in shaping corporate safeguards.
Speaker: Hector Duroir
Only about 10% of the 2,000 assessed companies meet global governance expectations and none disclose human‑rights impact assessments, revealing a huge gap between intent and accountability.
Provides hard data that challenges the narrative of widespread responsible AI practice, emphasizing the need for measurable accountability and investor‑driven incentives.
Shifted the tone toward a more critical assessment of current corporate performance, prompting calls for concrete investor actions and deeper engagement with laggard firms.
Speaker: Namit Agarwal
When I asked an AI tool whether it has ethical limits, it replied ‘I don’t know’, highlighting that AI lacks a conscience and cannot self‑regulate its ethical boundaries.
Uses a striking, experiential demonstration to underscore the philosophical limits of AI autonomy and the necessity of human governance.
Served as a turning point that refocused the panel on the fundamental need for human oversight, reinforcing earlier points about standards and prompting participants to stress the role of civil society and policy.
Speaker: Parvati Adani
We run a programmatic stakeholder‑engagement approach, including trusted‑tester programs and an open‑source Amplify Initiative that lets communities fine‑tune language models for inclusion.
Illustrates innovative, inclusive mechanisms for external input, moving beyond internal compliance to collaborative model improvement.
Expanded the conversation on how companies can involve civil society and under‑represented groups, linking back to earlier themes of multilingual safety and inclusion raised by Hector and others.
Speaker: Alex Walden
Our annual AI ethics report and community‑led benchmarks (e.g., Samishka in India) aim to create safety tools that respect local cultural contexts rather than just translating English‑centric standards.
Highlights the necessity of culturally aware safety evaluations, addressing the critique that many frameworks are overly generic.
Reinforced the earlier point about language and inclusion, prompting acknowledgment from other speakers (e.g., Yuchil Kim) about the importance of collaborative, context‑specific standards.
Speaker: Hector Duroir
Frameworks are proliferating everywhere, but developers get lost because they don’t know what is actionable; we need a multi‑organisation, use‑case‑driven approach to turn concepts into practice.
Identifies a practical bottleneck—framework fatigue—and proposes a collaborative, use‑case focus as a solution, bridging the gap between policy and implementation.
Prompted the moderator to stress the need for simplified guidance and influenced later remarks about consolidating best practices and avoiding siloed approaches.
Speaker: Ankit Bose
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the dialogue from high‑level ideals to concrete mechanisms. Tim Curtis’s framing of trust as a design issue introduced the need for practical education, which was taken up by UNESCO’s MOOC and echoed throughout. Rein Tammsaar’s four‑point agenda gave the panel a shared policy lens, while Alex Walden and Hector Duroir supplied detailed corporate governance models that operationalised those points. Namit Agarwal’s data‑driven critique exposed the gap between rhetoric and reality, prompting calls for investor‑led accountability. Parvati Adani’s experiential query to an AI system starkly illustrated the philosophical limits of self‑governance, reinforcing the necessity of human oversight. Subsequent comments on stakeholder engagement, multilingual safety, and framework overload built on these turning points, steering the conversation toward collaborative, context‑sensitive solutions. Collectively, these insights reshaped the tone from abstract aspiration to actionable, multi‑stakeholder pathways for responsible AI.

Follow-up Questions
How does NASSCOM differentiate engagement across big companies, services companies, SMEs, and startups to ensure effective responsible AI practices?
Peggy asks for clarification on NASSCOM’s tailored approach to a diverse set of industry participants, highlighting a need to understand practical engagement mechanisms.
Speaker: Peggy Hicks
How have you been able to surmount challenges in getting human rights considerations heard within Google?
Peggy seeks insight into the internal advocacy tactics and obstacles Google faces when promoting human‑rights‑based AI governance.
Speaker: Peggy Hicks
What are the external drivers that shape Microsoft’s engagement with the sector and governments on responsible AI?
Peggy wants to know which external factors (e.g., voluntary commitments, regulatory trends) influence Microsoft’s cross‑sector and government collaborations.
Speaker: Peggy Hicks
Can you provide concrete examples and suggestions from the World Benchmarking Alliance on how to push the discussion on responsible AI forward?
Peggy asks for actionable recommendations from the WBA to translate high‑level intent into measurable, market‑driven incentives.
Speaker: Peggy Hicks
From the NASSCOM perspective, how do you address internal silos and translate frameworks into actionable steps for enterprises?
Peggy requests details on how NASSCOM helps break down departmental silos and turn numerous AI governance frameworks into practical, implementable guidance.
Speaker: Peggy Hicks
Could you share quick comments from Microsoft on how you are facing the challenges of responsible AI implementation?
Peggy asks for a concise update on the specific hurdles Microsoft encounters and the strategies it employs to overcome them.
Speaker: Peggy Hicks
What is the impact and effectiveness of UNESCO’s Readiness Assessment Methodology Reports (RAMS) across the 80+ countries where they have been deployed?
Tim highlights the need for research to assess whether RAMS are influencing policy and practice in diverse regional contexts.
Speaker: Tim Curtis
How effective is the UNESCO‑LG AI Research MOOC on AI ethics in reaching a global audience and changing day‑to‑day AI development practices?
Tim points to a gap in understanding the MOOC’s uptake, learning outcomes, and real‑world impact on practitioners.
Speaker: Tim Curtis
How can multilingual, culturally contextual safety tools and community‑led benchmarks (e.g., the Samishka project) be developed and validated for AI risk assessment?
Hector raises the need for research on extending safety evaluation beyond English‑centric models to reflect local contexts and languages.
Speaker: Hector Duroir
What mechanisms can be used to measure and ensure board‑level AI governance and alignment of executive incentives with human‑rights risk mitigation?
Namit identifies a research gap in evaluating corporate governance structures that hold senior leadership accountable for AI risks.
Speaker: Namit Agarwal
How are AI‑specific human rights impact assessments currently conducted and disclosed by major tech companies, and what standards can improve their transparency?
Namit notes the scarcity of disclosed impact assessments and calls for systematic study of assessment practices and reporting standards.
Speaker: Namit Agarwal
What is the measurable effect of voluntary commitments made at AI summits (e.g., Bletchley Park, South Korea) on corporate testing, safety, and security practices?
Hector suggests investigating whether such voluntary pledges translate into concrete changes in model testing and risk mitigation.
Speaker: Hector Duroir
How can civil society and academia be effectively integrated into the co‑creation of AI safety benchmarks and policy frameworks?
Both speakers emphasize the need for research on collaborative models that bring external expertise into product development cycles.
Speaker: Hector Duroir, Alex Walden
What are the philosophical and technical implications of AI systems lacking self‑awareness of ethical limits, and how might this inform future governance frameworks?
Parvati raises a deeper question about AI’s inability to understand its own ethical boundaries, indicating a research area at the intersection of AI philosophy and policy.
Speaker: Parvati Adani
How can the proliferation of AI governance frameworks be streamlined into unified, actionable guidance that practitioners can readily implement?
Ankit points out the gap between numerous frameworks and practical action, calling for research into simplifying and harmonizing guidance for developers.
Speaker: Ankit Bose

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.