AI That Empowers Safety Growth and Social Inclusion in Action

20 Feb 2026 10:00h - 11:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel opened by highlighting the urgent, day-to-day challenges of AI and the need for global standards, public-private collaboration and rights-based approaches to achieve responsible AI with real-world impact [1][2]. Speakers stressed that effective AI governance requires careful deliberation, stakeholder engagement and the sharing of good practices by companies to avoid pitfalls [4-6]. They affirmed that companies must respect human rights through due diligence, while governments should create a level playing field and incentivise responsible behaviour [9-13].


UNESCO emphasized that trust in AI is built through design choices, safeguards and accountability, and introduced its Readiness Assessment Methodology Reports (RAMS) that map regional AI landscapes in over 80 countries [32-34]. To translate the UNESCO ethics recommendation into practice, a massive open online course (MOOC) on AI ethics by design is being launched, teaching learners to embed fairness, transparency and inclusion early in the development cycle [36-42].


The newly mandated UN Global Dialogue on AI Governance identified four priority areas: safe and trustworthy systems, closing capacity gaps, cross-border governance and anchoring AI in human-rights law [66-73]. Representing India’s tech sector, NASSCOM described the mission it launched in 2021 to build open assets, develop capacity across government, startups and SMEs, and promote responsible AI adoption throughout the ecosystem [94-105]. Google outlined its corporate policy committing to the UN Guiding Principles and UNESCO/OECD frameworks, embedding these values in AI principles and operational processes across product teams [130-138]. Microsoft recounted the evolution of its Office of Responsible AI since 2018, the Sensitive Use Case program and the Aether ethics committee, and noted that its work is informed by OECD principles and UNESCO recommendations [169-184]. Externally, Microsoft cited voluntary commitments from AI summits, the OECD’s Hiroshima AI Process reporting framework and recent Indian multilingual safety initiatives that reinforce inclusion and risk-based testing [188-200].


The World Benchmarking Alliance reported that while many firms publish AI principles, only a small fraction meet global governance standards or disclose human-rights impact assessments, underscoring the need for stronger incentives and board-level oversight [224-229]. It recommended that investors demand clear AI governance at the board level, concrete product-level implementation and robust human-rights impact assessments to close existing gaps [236-241]. Across the discussion, participants agreed that collaborative, multi-stakeholder engagement spanning companies, civil society, academia and regulators is essential to move from good intentions to actionable, inclusive AI systems [311-345].


Key points


Major discussion points


Global norms and multi-stakeholder governance are essential for responsible AI.


The opening remarks stress that “global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI” and that both companies and governments must create “clear rules… and alignment around the global norms” [2][9-11][13]. The UN-mandated Global Dialogue on AI Governance highlights four member-state priorities – trustworthy AI, capacity-building, cross-border governance, and anchoring AI in human rights – and frames standards as the bridge from principle to practice [67-74][78].


Capacity-building and education tools are needed to translate standards into everyday practice.


UNESCO’s RAMS assessments and the new massive open online course (MOOC) on AI ethics are presented as concrete ways to move “beyond theory and towards this responsible human-centred deployment of AI” [32-38][39-44]. LG’s contribution echoes this by stressing a “practitioner-focused” MOOC that bridges the gap between abstract standards and day-to-day work [209-213].


Companies are implementing layered internal governance to embed responsible AI.


Google describes a hierarchy of model-level requirements, application-level guardrails, executive review, and post-launch monitoring [149-162]. Microsoft outlines its Office of Responsible AI, the Sensitive Use Case program, and board-level oversight, all built on UN-based principles [168-184]. LG and Google also note programmatic stakeholder engagement, trusted-tester schemes, and open-source tools for language inclusion [300-307][308-310].


Investors and benchmarking bodies can drive accountability through market incentives.


The World Benchmarking Alliance reports that only about 10% of the 2,000 assessed tech firms meet global governance expectations and none disclose human-rights impact assessments, underscoring the need for “board-level responsibility, aligned executive incentives, and robust AI-specific impact assessments” [226-241].


Inclusion, especially linguistic and cultural diversity, and civil-society partnership are critical gaps.


Participants point to the “language issue” and the need for multilingual safety tools, citing Microsoft’s community-led benchmarks in India and LG’s annual transparency report as examples of collaborative, culturally aware practice [188-199][276-285][290-296].


Overall purpose / goal


The session is a convening of UN bodies, industry leaders, and civil-society representatives to share concrete practices, highlight gaps, and mobilise coordinated action so that AI development and deployment are governed by human-rights-based standards, are inclusive, and deliver real-world benefits across all economies [1][15][24][85-88].


Overall tone


Opening (0-5 min): Formal, optimistic, and forward-looking, emphasizing shared responsibility and the promise of standards [1-5][9-13].


Middle (5-30 min): Becomes more technical and candid as speakers detail specific tools, internal processes, and the challenges of scaling responsible practices [32-44][149-162][168-184][220-241]. The tone shifts to a problem-solving mode, acknowledging “obstacles” and “gaps” while showcasing concrete initiatives.


Closing (30-50 min): Reflective and motivational, urging continued dialogue, broader participation, and concrete action, ending on a call-to-action for all stakeholders [316-345][350-363].


Overall, the discussion moves from a high-level, collaborative framing to detailed, sometimes critical examinations of implementation, and finishes with a unifying, inspirational appeal to sustain momentum.


Speakers

Rein Tammsaar – Permanent Representative of Estonia; co-chair of the United Nations Global Dialogue on AI Governance.


Namit Agarwal – Representative of the World Benchmarking Alliance, focusing on AI governance, investment incentives and accountability.


Peggy Hicks – Director, Office of the United Nations High Commissioner for Human Rights (OHCHR); moderator of the panel.


Parvati Adani – Representative from Sero Amarchan Mangaldas (law firm), delivered the concluding remarks.


Tim Curtis – Regional Director for UNESCO South Asia; co-sponsor of the session.


Ankit Bose – Senior executive representing NASSCOM, India’s technology industry association.


Hector Duroir – Director of Responsible AI Public Policy, Microsoft.


Alex Walden – Global Head of Human Rights, Google.


Yuchil Kim – Vice President, AI Research, LG.


Additional speakers:


Praveen – Mentioned in the closing remarks; role not specified.


Dhani – Mentioned in the closing remarks; role not specified.


Allie – Addressed by Peggy Hicks near the end; role not specified.


Full session report: comprehensive analysis and detailed insights

The session opened with Peggy Hicks (Office of the United Nations High Commissioner for Human Rights) reminding participants that AI-related challenges affect people’s daily lives [?] and that “global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact” [2]. She emphasized that responsible outcomes do not happen automatically; they require “deliberation… engagement” to avoid pitfalls [4-5], and that companies must share “good practices” while governments create a “level playing field” and incentives for responsible conduct [12-14]. Hicks framed OHCHR’s B-Tech project as a mechanism to convene stakeholders, extract best practices and feed them back into policy, noting that the work is anchored in UN tools such as the UN Guiding Principles and UNESCO’s AI ethics recommendation [15][23-24].


Tim Curtis (UNESCO) shifted the discussion to the foundations of trust, arguing that “trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability” [32]. He thanked the Office of the High Commissioner for Human Rights for its support [?] and explained that UNESCO’s Readiness Assessment Methodology Reports (RAMS) have been produced for more than 80 countries, providing a “clear-eyed look at how regional landscapes can evolve” and moving the debate “beyond theory” [33-35]. Curtis noted that the RAMS include an assessment of India [?] and that partner institutions such as Oxford and the Alan Turing Institute contributed [?]. To translate the UNESCO ethics recommendation into practice, UNESCO and LG AI Research are launching a massive open online course (MOOC) on “ethics-by-design” [?] that will be delivered on Coursera [?] and is “accessible to a wide global audience and provides practical, day-to-day tools” [?]. The MOOC is intended for practitioners who need concrete tools to bridge the gap between abstract standards and daily work [209-213].


Rein Tammsaar (UN-mandated Global Dialogue on AI Governance) outlined the Dialogue’s four member-state priorities: (i) safe, secure, and trustworthy AI systems; (ii) closing capacity gaps in developing economies; (iii) interoperable, cross-border governance; and (iv) anchoring AI in human-rights law, including protection of vulnerable groups [66-74]. He positioned the Dialogue as a “platform where governments and stakeholders exchange best practices” to strengthen international cooperation and reduce digital divides [65-66] and stressed that standards turn principles into action, shaping risk management, accountability and human-oversight [78-79].


Ankit Bose (NASSCOM) described the association’s 2021-initiated mission to fill the “responsible, trust, human element” gap that emerged as AI proliferated [95-98]. NASSCOM’s core objectives are to develop open assets, build capacity across government, startups and SMEs, and promote early adoption of responsible AI governance [98-104]. Bose highlighted internal silos – tech, business, legal, finance – that impede coherent action and argued that “collaboration… use-case by use-case” is needed, especially for high-impact projects [110-118]. He warned that many national and sectoral frameworks leave developers “lost in the framework” and called for “concrete, actionable guidance” [258-267].


Alex Walden (Google) detailed how the company embeds human-rights-based values into its AI lifecycle. Google has a corporate policy committing to the UN Guiding Principles on Business and Human Rights [?] and an internal set of AI principles that operationalise those values across products such as Cloud, YouTube and Search [135-137]. Governance is layered: model-level requirements and testing, application-level guardrails, executive review of risks before launch, and continuous post-launch monitoring to catch “novel or residual risks” [149-162]. Walden also described programme-level tools – regular stakeholder-engagement processes, “trusted-tester” schemes that give third parties early access, and the Impact Lab’s “Amplify Initiative” which lets communities fine-tune language models in an open-source fashion [300-310].


Hector Duroir (Microsoft) traced the evolution of Microsoft’s responsible-AI framework from its 2018 inception, when “codes, directives, regulations… were not yet there” [170-172], to the creation of the Office of Responsible AI (2019) and the Sensitive Use Case programme that triages high-risk applications and escalates them to the Aether ethics committee involving senior leadership [179-182]. Microsoft aligns its standards with the OECD AI Principles and UNESCO’s recommendation [184-185] and leverages voluntary commitments signed at AI summits – including Bletchley Park and South Korea [188-192] – to ground model testing against public-safety and national-security risks. Duroir also cited the OECD’s Hiroshima AI Process reporting framework and India’s recent voluntary commitment to multilingual safety evaluations, which “encourages companies to forge multilingual capabilities” [193-199].


Yuchil Kim (LG) echoed the need for practitioner-focused tools, noting that LG’s AI-powered data-compliance system is part of its responsible-AI toolkit [?] and that its annual AI ethics accountability report (now in its third edition released “yesterday”) provides “the best standard risk” guidance and transparent documentation of AI activities [210-213]. Kim stressed that the UNESCO MOOC will “bridge the gap” for practitioners who struggle to apply abstract standards in daily work [209-210] and that transparency, inclusive AI and multilingual considerations are central to LG’s roadmap [214-218].


Namit Agarwal (World Benchmarking Alliance) presented the results of its latest assessment of 2,000 tech firms: while roughly 40% disclose AI principles, only just over 10% meet global governance expectations and none publish human-rights impact assessments [226-229]. From this gap, WBA derived three investor-focused recommendations: (i) board-level AI risk responsibility and aligned executive incentives; (ii) product-level governance checks that translate ethical principles into concrete strategies; and (iii) robust AI-specific human-rights impact assessments with public summaries [236-241]. Agarwal argued that “capital can definitely incentivise innovation and responsibility, but capital alone cannot do that” and called for a “race to the top” driven by clear market expectations [226-232].


Across the panel, participants repeatedly agreed that global norms and practical safeguards are essential for AI to work for all people, not only advanced economies. This consensus was voiced by Hicks, Curtis, Tammsaar, Walden, Duroir and Kim, who all linked UNESCO recommendations, UN Guiding Principles and OECD standards to concrete safeguards [2][9-11][13][32][67-74][138-140][184-185][209-213]. They also concurred that capacity-building tools such as the UNESCO RAMS assessments, the forthcoming MOOC, and NASSCOM’s ecosystem-wide training are vital to turn theory into practice [32-35][36-44][89-105][209-213]. Finally, there was broad agreement that multi-stakeholder engagement, including civil society, academia, NGOs and investors, is indispensable for inclusive, culturally aware AI, as reflected in the statements of Walden, Duroir, Tammsaar, Kim and Agarwal [300-310][276-285][73-75][209-213][322-338].


Points of disagreement:


Regulation vs. voluntary commitments – Hicks calls for “clear, enforceable rules… and alignment around the global norms” [9]; Duroir emphasizes “voluntary commitments… at AI summits” as the primary mechanism to operationalise standards [188-192].


Proliferation of frameworks – Bose says developers are “lost in the framework” and need “concrete, actionable guidance” [258-267]; Curtis maintains that UNESCO’s RAMS and the MOOC already provide a unified foundation [32-35][37-44].


Incentive design – Hicks promotes a broad “race to the top” through market rewards [13]; Agarwal insists that incentives must be tied to specific board-level governance, executive incentives and impact-assessment requirements [226-232].


Thought-provoking remarks shaped the tone of the discussion. Curtis’s framing of trust as a design problem [32] set the agenda for concrete engineering solutions. Tammsaar’s succinct articulation of the four UN-derived priorities [66-73] gave the panel a shared roadmap. Walden’s description of Google’s multilayered governance (model-level checks, executive sign-off and post-launch monitoring) provided a vivid example of operationalising ethics [149-162]. Duroir’s account of Microsoft’s Sensitive Use Case triage and Aether committee illustrated board-level oversight [179-184]. Agarwal’s data point that “only about 10%… meet global governance expectations and none disclose human-rights impact assessments” [226-229] underscored the compliance gap. Parvati Adani’s philosophical probe, asking an AI tool whether it has ethical limits and receiving “I don’t know” [322-332], reminded the audience that AI lacks self-awareness and therefore requires human governance. Kim’s African proverb, “If we want to go fast, go alone. If we want to go far, go together,” encapsulated the collaborative spirit needed for a trustworthy ecosystem [294-296].


Concrete next steps were identified:


– UNESCO’s MOOC will be delivered on Coursera, with an open invitation for learners and partners [36-38].


– The UN Global Dialogue on AI Governance will reconvene in Geneva in July [46-50].


– Companies such as LG and Microsoft pledged to publish annual AI-ethics accountability reports (LG’s third edition is already released) [210-213][276-283].


– Microsoft will continue community-led benchmark projects like Samishka in India to develop multilingual safety tools [282-285].


– NASSCOM will expand capacity-building workshops and open-asset libraries for startups, SMEs and government agencies [98-105].


– The World Benchmarking Alliance will circulate its three-step investor engagement framework-board oversight, product-level checks, impact assessments-to catalyse market-based incentives [236-241].


– Participants agreed to share best-practice case studies with the WBA for inclusion in future benchmarking reports [?].


Unresolved issues remain. There is no consensus on how to harmonise the growing number of national and sectoral AI frameworks into a single actionable roadmap for developers. Financing mechanisms to close capacity gaps-particularly infrastructure, compute and talent in developing-country firms-were not settled. A standardised, auditable methodology for AI-specific human-rights impact assessments is still lacking. Scaling responsible-AI processes for small startups without over-burdening them, and establishing clear ownership and frequency for post-launch monitoring, also require further work. Finally, integrating multilingual and informal language contexts into safety tools beyond ad-hoc community projects remains an open challenge.


In closing, Peggy Hicks urged participants to translate the day’s insights into action, reminding them that “AI innovation will work if there’s trust and if the companies that are delivering it actually invest in delivering products that will really give us human dignity” [362]. She thanked the participants and closed the session [?]. The panel reaffirmed that responsible AI requires coordinated global norms, concrete capacity-building tools, and market incentives, and they committed to share best-practice case studies and continue dialogue at the July Global Dialogue in Geneva [?].


Session transcript: complete transcript of the session
Peggy Hicks

These are consequential challenges that have impacts in people’s lives on a day-to-day basis. And our session is going to address how global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact. And we know that these things don’t just happen on their own. It takes deliberation. It takes thought. It takes engagement to make sure that the products and approaches that we’re using in the AI field avoid some of the pitfalls that may be associated with them. And the companies are going to share some of the good practices that they’re engaging in about how that works in the real world. And we know if they don’t engage in that way, that the risks are there and very much present.

And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for. Responsible and effective AI governance, clarity of rules for both companies and government, and alignment around the global norms will help us to get to that point. Companies, of course, have a responsibility to respect human rights and address the risk to people stemming from their products. And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations. But, of course, governments are the ones that also have a responsibility here, too, to create a level playing field, and we talk a lot about that.

We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. Our B-Tech project at OHCHR is aimed at how do we make this conversation happen. So through convenings like this one, through engaging with companies, pulling out their good practices and letting all of you hear about them and encouraging others to do the same is what that project is really about. And we are really looking at and working with, of course, how to use tools like the UN Guidelines and UNESCO’s AI recommendations on ethics, and figuring out how we weave those into the decisions and work that’s being done now.

And as I said, bringing this conversation to this summit where there is truly a global and multi-stakeholder effort happening to really look at AI innovation and deployment has been incredibly important. So without further ado on that front, I want to hand over to my colleague and co-sponsor here, Tim, over to you.

Tim Curtis

Thanks, Peggy. And good morning, everyone. Ambassador Rein Tammsaar from Estonia and co-chair of the AI Dialogue that the United Nations is holding. Of course, Peggy and dear panelists, it’s really wonderful to be here with you today to be part of this conversation on responsible practices and industry standards. And as we all know now where AI is moving, you know, from something we discuss in theory to really something that is shaping the decisions in real time and real institutions, and of course for real people. I’d like to thank particularly the Office of the High Commissioner for inviting us to join in on this and for working with us on organising this event. It’s been a pleasure.

At UNESCO we often return to a simple idea that trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability, and that’s why the recommendation on the ethics of AI we believe is so important, because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, that promote fairness and support inclusion. So we’ve been translating this global agreement and framework into local realities through what we call the RAMS, the Readiness Assessment Methodology Reports, which we’ve now launched in over 80 countries, and just two days ago we did India’s readiness assessment report.

And these assessments provide a kind of clear-eyed look at how regional landscapes can evolve, inviting us to move beyond theory and towards this responsible human-centred deployment of AI we hear about. And so by grounding innovation in these evidence-based diagnostics, we hope to ensure that progress remains aligned with those shared values. But, of course, a recommendation only matters if it can be applied by the people who are actually creating and using AI. And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or a MOOC, as more commonly known, on the ethics of artificial intelligence.

And the course will be delivered on Coursera with a very clear goal, to make AI ethics learning accessible to a wide global audience, and to make a practical… for day-to-day work. And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design. And so in simple terms, that we don’t wait until something goes wrong to ask these ethical questions. We should build these questions into the process from the beginning. And the course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made rather than after systems have already been deployed. The course, of course, is really going to be focusing on practical tools so that we can offer clear ways of thinking and working that can be used in everyday settings.

So it’ll help learners, for example, recognise common risks early, ask better questions during development, document the decisions made responsibly and think through the impact of AI systems on different groups of people. We’re moving beyond a one-size-fits-all approach, and we’ve done this by collaborating with experts from over 10 countries and 5 continents, with some of the leading minds from the University of Oxford and the Alan Turing Institute. And this global group, this global coalition, is really vital because AI of course doesn’t operate in a vacuum. It’s shaped by languages, it’s shaped by cultural norms and institutional capacities of where it is developed and deployed. So by integrating these diverse perspectives, we’re trying to move from the theory again to the live reality. So ultimately this MOOC is a capacity-building effort with a simple purpose: to help more people around the world build and use AI in ways that are responsible, inclusive and worthy of public trust. We look forward, of course, to this continued collaboration with governments, with industry, with academia and civil society as we take it forward, and we hope many of you will engage with the course when it launches, not only as learners, but also as partners in building a stronger culture of ethical innovation across the world.

Thank you very much.

Peggy Hicks

Thanks, Tim Curtis, UNESCO. We’re all looking forward to it. Now we have anticipation. We’re very fortunate to have an addition to our program today with Ambassador Tammsaar, the permanent representative of Estonia, who’s one of the co-facilitators for the Global Dialogue on AI that will be launched in July, and a big responsibility. And he’s here to tell us a little bit about where it’s heading and how you all can contribute. Please, Ambassador.

Rein Tammsaar

Thank you very much. Good morning. I don’t know, is it morning? Yeah, maybe. So after three days here in India, I think that I’ve lost track of time, not understanding whether it is morning or evening. But thank you, UNESCO and Office of the High Commissioner for Human Rights, for convening this really important discussion, and of course to all our hosts here in India. And I also thank partners who contributed to this work. Today I’ll speak on behalf of the two co-chairs of the United Nations Global Dialogue on AI Governance, and the two co-chairs are from El Salvador and Estonia. The first Global Dialogue on AI Governance was mandated by all member states through a General Assembly resolution adopted in August 2025.

So this is a member-states-driven process. It belongs to every country, to all member states. And its task is very practical, while its scope is multilateral. So this… The aim is, you know, to come together. It is a platform where governments and stakeholders exchange best practices and experiences, and this, we believe, can strengthen international cooperation on AI governance and ensure human-centric AI supports sustainable development and reduces, indeed, the digital divides that are already there. So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities. So first, they want safe, secure, and trustworthy AI systems, and trust here, of course, is an absolute key word.

Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to participate fully in the AI economy, and inclusivity and equal access are essential here. Third, they want governance approaches that can work across borders and be practical. So fragmentation raises the cost and weakens trust, so interoperability is absolutely key. And fourth, and that is, I think, quite topical here, they want AI anchored in human rights and international law. And this includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability. Now, we know human rights are not optional. They are part of a mandate agreed by member states. And today’s focus on responsible practices and industry standards responds directly to these priorities.

And standards turn principles into action. They shape risk management, they clarify accountability, they guide human oversight, and they give companies and regulators tools they can apply in real systems. So let me say that the Global Dialogue will not, and I guess it cannot, impose one single model. We will listen, we will identify common ground, we will build on existing initiatives. The ethics of AI was mentioned here, and it’s of course one of them. We’ll avoid, or try to avoid, duplication, and we will focus on practical value. So I encourage you to bring your experience into this process: share what works, share what doesn’t work, help us identify approaches that can scale across regions and levels of capacity. And in the best case, if we succeed, and failure is not an option, safety and trust will be visible in how systems are designed, deployed and governed. They will be reflected in real safeguards and in benefits that reach more people, and this is very important for us. Thank you. So I thank you and wish you a productive day, practical exchanges that move our common work forward.

And with this, I give it over to the real experts and panel. Thank you very much.

Peggy Hicks

Thank you, Ambassador. Wonderful to have you with us, and I think we’re all looking forward. We’re looking forward to having all of you join us in Geneva in July. So with that introduction by the three of us, we’re really, as the Ambassador said, going to turn it over to those who can really inform us about how this work is happening, and I hope inspire us to give support, emphasis and amplification to the work that you’re doing and bring more into the fold around responsible business conduct. So with that introduction, I’d really like to start with you, Ankit Bose, from NASSCOM. We had a great conversation yesterday. I’d love for you to inform our audience, as NASSCOM represents the leading Indian tech industries, and we want to hear more about your work and what you’re doing to encourage companies and help them to ensure a responsible work environment.

Thank you.

Ankit Bose

Thank you so much for having me here. And it’s my pleasure to address the audience. So NASSCOM has been there for four decades plus, right? We have been helping the tech industry in the country to shape and change the whole agenda for the country. I think that’s what we have been doing, specifically on responsible AI, right? I think the mission for NASSCOM started in 2021. So we started with a gap: you know, we were seeing a lot of AI getting developed, but we found that there was a missing element, and that was the responsibility, the trust, the human element. I think that is how the mission started. From that point in time, our main core objective has been to develop open assets, right?

Build capacity, build, you know, adoption, right? And I think help all the different components in the ecosystem, right? Right from the government to the startups, the SMEs, right? All of them. So we have been trying to help them, and we’re trying to help them go up the ladder and really become aware, I think, not only of the gloomy side of AI, but also of the bright side: if they adopt responsible AI governance practices right at the early stage, they can have a big upside. I think that’s what we have been doing.

Peggy Hicks

Can I ask, Ankit, how does this work? You mentioned that full range of companies that are involved, and one of the topics we spoke a bit about yesterday is the difficulties sometimes when you have big companies, we have some of them represented here, but also startups and small and medium enterprises. How do you differentiate? How do you make sure that we’re engaging across that very differentiated group of industry?

Ankit Bose

Yeah, so I think if I take it, there’s the big techs, right? Then there are the services companies. Then there are the middle-sized, the small, and the startups. I think all five of them have different sorts of engagement, right? The big techs, I think, are playing on the front foot, right? The services companies have to follow their contracts, right? The bigger services companies. The medium-tier companies are really trying to understand how they grow their AI base and at the same time build, you know, that service or product using the right governance principles. But again, I think the bigger support is needed for, you know, the smaller startups, right? Because they are really, really fighting day to day, right?

I think, and believe me, a startup founder has to first build a business, a tech, a team, right? And also get funding, and apparently focus on, you know, a lot of things around governance, right? In that whole journey, what we have seen is that they are putting it second, or probably on the back burner, which is something we see as a complete no-no. If you do that, you know, when you’re building a product, you might miss it when you’re scaling. I think that’s what we are seeing.

Peggy Hicks

Great. Thanks very much. I think we’re going to turn to the scale side of it now with Alex, you’re next in line. So, Alex Walden, you’ve been working on these issues within Google, and I think one of the insights that I’ve learned from you over the time we’ve known each other is really how complex it is to bring to product teams and those that are on the technologist side some of these issues of responsible business conduct and human rights, and give us the benefit of your wisdom about how that works and how we can do it better.

Alex Walden

Thanks for the question, and I love that you said that, because I do see a very important part of my role as making sure that the stakeholders we work with understand how things are working within companies, because that helps us be better and helps you be better advocates for helping us improve. So, anyway, to your question, because I know I need to be fast: I think where it really starts for us is from the values perspective. Obviously, we’re a company that’s founded on values around freedom of expression and privacy and bringing the benefit of our technology to everyone, and so that’s where it begins. But ultimately, it’s the sort of governance inside of the company that permeates throughout the 180,000 people that work at Google to ensure that we are being responsible in the way that we’re developing AI.

So as a baseline for us, responsibility and thinking about what responsibility means has to start with human rights, and then we can build from there. So we have a corporate policy that says that we have a commitment to respect the UN guiding principles on business and human rights. And we’ve also built on that with things like our AI principles that reinforce sort of more of an operational way in which we can manifest those values in all of the teams that are working to develop the various models or applications of the models in, say, Google Cloud or YouTube or Search. Just to maybe hone in a little bit on the types of standards that we’re using, because I think that’s important because there’s so much work being done in our ecosystem.

We use the UN Guiding Principles. We use the work happening at the OECD, the work at UNESCO, and engagement with our peers in industry through the B-Tech Project and the Global Network Initiative. These are just a few, but all of the guidance that comes out of those places, and the dialogue that happens there, helps us ultimately inform how things are working inside the company. And then just one layer down, and then I’ll stop: I think having programs and processes like training and dedicated teams is ultimately how you operationalize this through getting a product to market. And so I can say more, but I think those are kind of the big-picture structures for what’s required for a company to do this at scale.

Peggy Hicks

So, you know, I’m not going to let you off the hook quite that easily. We know that this isn’t always easy, though, right? There are obstacles to really convincing people it’s worth the time. I’ve been in the room where it’s said, you know, no more hand-wringing about safety, we need to just move forward, and I’m sure there are pressures that you face as the lead for human rights within this company trying to get your message heard. Tell me a bit about how you’ve sometimes been able to surmount some of those challenges, and about that different perspective on whether these are hurdles or supports for the company doing its mission more effectively.

Alex Walden

Well, I mean, I think in general corporations are incentivized to put products on the market that are safe and that are trusted by our consumers. People know Google best through Google Search or Gmail, the varieties of consumer-facing ways they’re engaging with our products. And so we do have an inherent sort of market business reason to put out products that people trust and that deliver good outcomes. And so we have to have processes inside that make that real. And so what we do is we have model requirements, just at the most granular level. Before any product goes to market, there are model requirements. And so those teams are focused on ensuring that they’re validating the data and doing testing and doing evaluations.

And that’s at the model level. And then at the application layer, we have requirements for teams to be, again, doing testing, additional evaluations, setting additional guardrails, and focusing on what mitigations are going to be put in place for, again, things like Gemini before that gets launched. And then we have executives review these things. So before anything goes to market, leadership needs to understand what the risks are and how we’re mitigating them, and have a plan in place to address that. So that is an important part of the process for us. And then last, we have post-launch monitoring, because obviously we can do all the testing in the world, but once you’ve launched a product, there may be novel or new or residual risks that arise. And so we have to have a process for continuing to monitor that, understanding it, getting feedback, and improving.

Peggy Hicks

Great. That’s super helpful, Alex, to understand that multilayered approach that needs to happen within companies, including, I think, that executive level that you mentioned. I mean, the signals from on top will actually inspire all of those other levels to do what we’re hoping they’ll do. And we have another example with us of some of these practices. I want to turn to Hector Duroir, who’s the director of responsible AI public policy at Microsoft. And we want to hear more about what you’re doing to embed responsible policy practices within Microsoft’s approach.

Hector Duroir

Thank you very much, Peggy, and thanks for having Microsoft here. So, yeah, I want to start with the inception of our responsible AI approach, which was in 2018. And at that stage, you didn’t have codes, directives, regulations or frameworks guiding our approach. We were nearly starting from a blank page. And we didn’t talk about foundational models or frontier models at that stage; it was all about specific AI systems and applications, such as facial recognition, for instance, which was very popular. So we forged our AI principles around priorities such as privacy, reliability, inclusion, fairness, safety and security. And the whole challenge was to translate these high-level principles into practice afterwards. And so it’s really on this basis that we forged the Office of Responsible AI when we created it in 2019, around these principles, which then became our RAI standard, guiding all our actions across our different programs.

One of the programs that I want to reference here is our Sensitive Use Case program. It’s a team within the Office of Responsible AI that is in charge of doing some triage, challenging, basically, sensitive use cases coming from our different markets on AI systems and models that could actually violate the principles I was referencing. And so this team analyzes these use cases and, when it’s necessary, brings them to our Aether committee, which is our AI ethics committee, and it involves the Microsoft board, both at the CTO level and the president level. And I think the board inclusion is very important in this kind of internal risk management framework. And so this work has been informed over the past years by many interesting developments.

So the OECD AI principles, obviously, but also the UNESCO recommendation on AI ethics. And I think all these principled approaches, which evolve and are refined and nuanced as AI capabilities advance, are actually so important and very useful signals for us to refine our own AI governance program within Microsoft.

Peggy Hicks

Hector, you’ve talked a little bit about how you look at it from an internal perspective, but we wanted to hear a bit about how you look externally: what are the drivers behind how you engage across the sector and with the government side as well?

Hector Duroir

Yeah, and I think we always navigate this very interesting interplay between best practices, international norms and regulatory standards. And a very good example here is the line of voluntary commitments that have been signed across the AI summits. And so if you look at Bletchley Park in the UK, or the South Korea summit that happened afterwards, it really helped us, as Alex was referencing, to ground our model testing approach, especially against public safety and national security risks. So when we talk about cybersecurity, for instance, or loss of control, or CBRN risks, that really grounded some very solid testing approaches, with some concrete operational triggers and concrete high-risk domains that we’re monitoring at the model level.

So that was one. The OECD HAIP reporting framework, which came out of the Hiroshima AI Process, is another very good tool that I was involved in and want to reference here. It was launched along the lines of the Paris AI Summit. And actually, it’s a very good way to understand how risk management transparency works in practice and how real-world deployment experience and transparency experience can guide upstream developments. And so it’s this kind of feedback loop that it creates that’s very interesting. And because we’re in Delhi, just to reference the voluntary commitments that were signed yesterday: I think that’s another very good and positive approach that the Indian government has been taking, especially on one of the commitments, which basically encourages companies to build a multilingual capabilities approach.

So basically, build better evaluations against safety risks, not only against English norms, but beyond English norms. And I think that speaks to, you know, our principle of inclusion. That’s so important, and I’m very happy that they initiated this work.

Peggy Hicks

I have to say, one of the contrasts I’ve been making when I look at what’s been talked about here in Delhi, as opposed to prior summits, is that issue of inclusion. And the language issue, I think, is so underrepresented in some of the conversations we have. So it’s wonderful that you’ve given that a shout-out. We’re very fortunate to have you with us as well, Yuchil Kim, who is vice president for AI research at LG. So we’d really like to hear more about how you’re engaging with these global technical and policy standards. We talked about the UN Guiding Principles on Business and Human Rights, the UNESCO recommendation on AI ethics, and, of course, the MOOC that’s being worked on.

So give us a sense of how these frameworks are being engaged with by LG.

Yuchil Kim

So the essence of the MOOC is for the practitioner. The practitioner is usually struggling with the same question: how do I actually apply this in my day-to-day work? So we are focusing on bridging the gap. So we provide the best standard risks, and we get a lot of the risks that Timothy mentioned. We also contribute our own experience. I previously mentioned our process, and we have also made our AI-powered data compliance system. And, as I will mention soon, we have an annual report about our AI ethics activities. So I hope the MOOC can be a good practice for everyone; it will launch in this half of the year. So the last thing I want to talk about is transparency. We have a lot of activities around responsible AI and inclusive AI, and we publish an annual accountability report on AI. So yesterday we released the third edition.

So here are some copies of that. I will hand them out after our session, so please refer to my documents.

Peggy Hicks

Well, yeah, wonderful. No, I think it’s super interesting to understand both how you’ve been looking at that learning process within the company, but also how that more global approach, working with UNESCO, is going to be very helpful. And I think it’s one of those areas where we all know so much more needs to happen. But we’ve heard the company perspective here, and we’re very fortunate to have with us, from the World Benchmarking Alliance, Namit Agarwal. And Namit, you know, I think one of the things that we’ve talked about is how we incentivize the race to the top amongst all of these actors in this space. And you’re going to, I hope, give us some insights, based on the work that the World Benchmarking Alliance is doing, about how capital and investment can be used to make sure that innovation is being approached in a responsible way.

Over to you, Namit.

Namit Agarwal

Thanks for having me here. And I’m not representing investors, but we do work with several stakeholders, including investors, civil society, governments, and companies. So we are a nonprofit, and we try to strengthen the accountability of the world’s most influential companies so that their impact on people and planet can be sustainable. We also assess the world’s most influential tech companies on whether they are advancing a trustworthy, rights-respecting, and inclusive digital future, using standards such as the UN Guiding Principles, but also others that were mentioned by my fellow panelists here. Our role is to provide comparable, credible, and standardized data that our stakeholders can use, because it’s an ecosystem approach, and the challenge is how they can work together in doing that.

So capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. We published our latest assessments of 2,000 companies at Davos last month, and particularly from the tech side, what we found is that close to 40% of the companies have disclosures on AI principles, but just above 10% meet global expectations on the governance aspect of it, and none of these 200 companies that we assess disclose their reports on human rights impact assessments. And I think that clearly shows that while there is a lot of intent, and some work is happening, governance and accountability are not really there, so we need a lot of work to happen there. And we believe responsible innovation requires incentives for long-term risk management and clear expectations that are tied to capital.

So I think the way we’re going to do this is to look at the market as a whole, and there have to be consequences for weak governance, because it has to be consequential for companies to move in that direction.

And I think that is where investors have a very catalytic role to play. We convene a coalition of investors and civil society organizations

Peggy Hicks

Now, I mean, I think it’s so interesting that we work in a sector that is incredibly based on data, and yet we don’t necessarily bring it into this conversation in the ways that we need to. There’s that idea of both incentivizing the right practices and leverage within companies; but also, you know, too many conversations sort of focus on the tech industry as a whole and group everybody together as if they’re all engaging in the same way. And so the work that you’re doing really helps us to understand those nuances more. Could you go a bit deeper and look at some of the examples and concrete suggestions coming out of your work as to how to push that discussion forward more?

Namit Agarwal

Absolutely. So I think the first thing is engagement and dialogue, and I think that is a very important way. And we have been fortunate to have good engagement with both Google and Microsoft on this panel. But again, it’s important to build on engagement, because it’s a continuous process. It’s important for investors to engage with some of the leaders, but also to engage with companies who are fence-sitters, to bring them along faster; the laggards will definitely catch up and come on board. But for investors, for capital and finance, to incentivize responsible innovation and responsible AI, there are three things that we believe investors should definitely do. First is on AI governance and board oversight: investors should ask whether there is clear board-level responsibility for AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain. Second is on implementation at the product and business-model level, and we heard some examples just now: investors need to move beyond policy statements and ask companies how ethical principles are translated into product-level strategy, how high-risk use cases are identified, and whether there are internal mechanisms and controls to, you know, identify harms as they emerge.

And third is robust human rights impact assessments, and asking whether companies conduct AI-specific impact assessments. Are they publishing meaningful summaries? Are mitigation measures integrated into product cycles? And I think this is an area where we have seen a lot of gaps.

Peggy Hicks

Great. Thanks, Namit. I wonder if we could actually take that one step further and get some input from the other members of the panel on what that looks like in practice. Because, of course, this is a panel that’s focused sort of on the company perspective. I think we have some of our real partners here on the civil society side. And as much as they understand that that conversation needs to happen, I think they sometimes find it difficult to make sure that the way those risks are assessed really brings in the voices and the experiences of people, and particularly people in the different contexts and different environments in which companies’ products are being rolled out.

So those issues of stakeholder engagement, access, dialogue with the civil society side, it would be great to hear a little bit more about some of the lessons that you’ve all learned there. And I see you shaking your head. Please tell us from the NASSCOM perspective how you look at it.

Ankit Bose

Well, I think from an enterprise lens, right, when they are trying to implement responsible AI or trustworthy AI, I think the biggest issue is that there are different groups internally, right: the tech group, the business group, the legal and risk group, the finance group, right. And then all of them are working in silos, is what we feel, because the business wants the best, you know, for the business; the tech wants to put in the best technology, right; the risk group is very, you know, conservative, right; and finance always has an upper limit on what they want to spend. So that’s the issue. I think what helps is if all of them build a collaboration, which can be taken use case by use case.

I mean, the high-impact use cases can have more investment, more focus, versus the low-risk ones, right? I think that’s the first thing. The second thing is, I think, what we are seeing from NASSCOM: there are a lot of frameworks getting developed, right? Every country, every place you go, there’s a new framework, right? But the move from framework-heavy, or concept-heavy, to action is not happening. I think that’s a big gap, right? So if a technologist is trying to implement responsible governance, right, if a developer is trying to implement it, he will be lost in the frameworks. He doesn’t know what’s actionable, what he should do. So I think that’s one big need.

I think that’s what we are also driving. We are trying to drive a, you know, multi-organization-led approach, where we have organizations of all different sizes, where we come together and start discussing, collaborating, implementing, right? I think that’s the second nugget. I think these are the two points. I know time is up.

Peggy Hicks

No, that’s great. I mean, I think it shows that that collaborative effort is going to be super important, rather than a siloed approach, for so many practical reasons as well: companies can only respond to so many different frameworks, and what they need is the simple guidance and support required to actually implement at this stage. Hector, do you want to add quick comments from the company side about how you’re facing those challenges?

Hector Duroir

Yeah, two very quick examples of how we involve civil society and academia in this process. So our work really sits at the intersection of policy, research, and engineering groups. And to inform product development with our responsible AI principles, we regularly publish some internal policies. And it’s an iterative process with our research teams and with our product teams. And as part of this process, we actually include academics who have specific risk domain expertise, or think tanks and civil society organizations which have been thinking very deeply about the deployment of one AI system or one AI model in certain contexts. And so that really informs the products that we build from the inception. And I think the second example, and the big governance challenge that we face, is the importance of refining AI evaluations.

That’s the constant thing. And in India, for instance, we’ve been working with some NGOs around a project named Samishka to build some community-led benchmarks, which is basically a safety tool that we include afterwards in the system construction, to really get datasets that are grounded in a community, with specific cultural aspects and specific contextual aspects, because if you just translate safety tools from English to another language, you lose all the context for which the safety tool was built. And so that’s another example of an area where we need more cooperation between civil society, governments and companies: how do we build these safety tools beyond English norms, such as in India?

Peggy Hicks

That’s great, and it takes work to do that. And the more we can spread it, the better: you’ve done some of the work, you know how to do some of it, and we can diffuse it among other companies that could learn from it. That’s part of what we’re trying to do with B-Tech, but I think there’s a lot more to be done. Yuchil, do you want to come in?

Yuchil Kim

Yes, I agree with his comment. On safety, we should work together. So that’s the reason why we make our annual report: because sharing our best practices, and also sharing our struggles, what we struggled with, we think is a very important thing. As my colleague mentioned, there’s an African proverb that says: if you want to go fast, go alone; if you want to go far, go together. So building a trustworthy and safe ecosystem is not a sprint. It’s a long journey, so we can go together.

Peggy Hicks

It’s a long journey with a lot of sprints happening day to day, as far as I can tell. Some of them here at the summit, but over to you, Allie.

Alex Walden

So much sprinting. I think maybe just to pick up specifically on the stakeholder engagement piece. So, a few things. One, I think it’s important for companies to have a programmatic approach to stakeholder engagement: we need to have ways in which we’re regularly engaging with stakeholders in general, not just on a specific product question. So I would say first a programmatic approach, and then second, something that is more ad hoc: when we need to consult specifically on a product, we need to have a process and way to do that. The other thing is we have programs internally, like trusted tester programs, where we are working with third-party organizations to make sure that they have early access or pre-launch access to models or to a product in order to test it, so that we can identify potential risks or errors ahead of time and address them before we launch a product.

And then last, just to highlight something that we do that is similar to others: our research team called Impact Lab, which is part of the overall human rights programmatic work at the company, engages directly with communities in doing research to inform how we are improving our products and what we’re developing. So that work is also happening through the research team specifically. And they recently launched something called the Amplify Initiative, which is an open-source app, specifically on language inclusion, that allows members of the public and communities to engage in the fine-tuning work around our language models. So there’s a wealth of information and expertise out there that we can all benefit from, and it’s open source, so we can also share it with others in industry.

Peggy Hicks

That’s great to hear, and I’m sure more needs to be done on that front, but the amplification effect is so crucial. Look, we could probably go on talking all day, but I see the clock is ticking down. Now, fortunately, rather than us try to draw the conclusions from this, we’ve welcomed in another speaker to give us some concluding remarks to pull some of these pieces together. I’m very happy to invite Parvati Adani from Sero Amarchan Mangaldas to help us think through some of these issues. Please.

Parvati Adani

Thank you for that. I think it’s partly easy and partly tough, because there was a lot to understand, but I think we can take a lot back from this conversation. But firstly, thank you. You’ve held the conversation beautifully. And thank you to UNESCO, the United Nations, NASSCOM, and everybody in this room who brought their knowledge and their conscience to this conversation. Actually, I just want to talk a little bit about a conversation with a machine. As we were thinking about this topic and engaging on this issue, I wanted to share something that I feel resonates with a lot of what you’ve talked about. I did something in preparation: I decided to ask the tools that we’re talking about over here a question that we avoid asking ourselves.

I asked the tool: do you have ethical limits? Do you understand the difference between what you can do and what you should do? And I’m going to quote verbatim. At a conscious level, the answer is: I don’t know, and neither does anybody else. The gap is a philosophical, uncomfortable position; think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me. I don’t have any consequences to bear. Now, what came back, though unexpectedly thoughtful, showed us something about restraint, values and what it appears to have internalized. It acknowledged the difference between instruction and conscience, a lot of what we’ve talked about today.

And so, I think when we talk about this, we said human rights are not optional. We cannot ignore the impact on people and planet, and we have to make incentives for good governance. So when a tool cannot understand this for itself, I think we have to do the job. What we have chosen in India (and when we are having this conversation, this location is not ceremonial, it’s very deliberate) is innovation over restraint, and we have to think about that being the right choice: we allow innovation to happen in a safe place without feeling the weight of the regulation. And I think we have a lot to learn from all of you who have been doing this for so long: the privacy, the safety, the impact on children and vulnerable groups. The question is whether the people we are talking about are going to be the subjects of the transformation, or just its audience, or just its object. An AI system that cannot understand a language, or a Hindi-speaking woman asking legal questions, is serving a narrow slice of what it calls a universal solution.

So any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident; it’s incomplete by design. I think the idea of an interoperable, flexible system is a forward-looking and inclusive one. I think a lot of what you mentioned, Alex, about governance inside the company is wonderful. And I think the voluntary commitments that have been reflected in this summit are also fantastic. So now we come to the harder work. The ambition is real. The infrastructure exists. But we must ensure that we don’t leave with just good intentions and good ideas, but with action. Thank you.

Peggy Hicks

Thank you, Parvati Adani. It’s wonderful to hear those perspectives. We’re coming to the close of this session, so just a few parting words to all of you. I think we’ve done enough in this short conversation to really give a sense of how complex some of these issues are: the dynamics within companies, externally, and then globally across different geographies, and the challenges that are faced. But the reality is that all of us have a responsibility to engage on these issues, and we each have different roles. We’ve heard a bit about what some of the companies are doing. We’ve heard a little bit about how we can challenge them and incentivize the actions that they take in this space.

There are good practices, but they're not universally applied, and they're not accessible to every company. There are companies that may want to engage in this, and we can help them do so. NASSCOM and I have been discussing how we can simplify things and bring more companies into the fold of this conversation. And, of course, we're here in an environment where governments are asking what they need to do to create responsible business practices and incentivize them as well. So I hope everybody walks out of the room thinking: what can I do to continue this conversation? How can I distinguish the companies that are thinking about these issues in a way that will deliver, for myself, for my children, for my future, the outcomes we want to see?

AI innovation will work if there's trust, and if the companies delivering it actually invest in products that embody those values, that inform us and uphold human dignity going forward. So thank you all so much for joining us. Thank you for fitting this into your schedule today, and enjoy the rest of the summit. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (35)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Peggy Hicks reminded participants that AI‑related challenges affect people’s daily lives and that global standards, collaborative public‑private solutions, and rights‑based approaches can enable responsible AI with meaningful real‑world impact.”

The knowledge base identifies Peggy Hicks as Director of Thematic Engagement at the OHCHR and notes her emphasis on responsible AI governance requiring deliberate thought and engagement, which aligns with the reported statement [S21] and [S2].

Confirmed (high)

“She emphasized that responsible outcomes do not happen automatically; they require deliberation and engagement, and that companies must share good practices while governments create a level playing field and incentives for responsible conduct.”

S2 explicitly states that responsible AI governance needs deliberate thought and engagement to avoid pitfalls, supporting the claim about the need for deliberation and a level playing field for companies and governments [S2].

Confirmed (medium)

“Tim Curtis argued that trust is not something technology earns through ambition alone but is earned through design choices, safeguards and accountability.”

S122 frames trust as a foundational requirement that must be built through design choices, safeguards, and accountability, confirming Curtis’s point about how trust is earned [S122].

Confirmed (medium)

“UNESCO’s Readiness Assessment Methodology Reports (RAMS) have been produced for more than 80 countries, providing a clear‑eyed look at how regional landscapes can evolve.”

S129 confirms the existence of UNESCO’s readiness assessment methodology, though it does not specify the exact number of countries; the claim about RAMS being produced is therefore supported [S129].

Confirmed (high)

“UNESCO and LG AI Research are launching a massive open online course (MOOC) on ethics‑by‑design, to be delivered on Coursera, accessible to a wide global audience and providing practical, day‑to‑day tools for practitioners.”

S1, S131 and S130 all describe a UNESCO-LG AI Research MOOC on AI ethics, delivered via Coursera, aimed at a global audience and designed to give practical tools for everyday work, confirming the claim [S1] and [S131] and [S130].

Confirmed (medium)

“The MOOC is intended for practitioners who need concrete tools to bridge the gap between abstract standards and daily work.”

S131 explicitly states that the MOOC’s goal is to make AI ethics learning accessible and practical for day-to-day work, matching the reported purpose for practitioners [S131].

Additional Context (low)

“UNESCO’s measurement approaches, including the readiness assessments, aim to move the debate beyond theory toward practical implementation.”

S123 adds nuance by describing UNESCO’s measurement approaches as shifting from procedural requirements to trust-building mechanisms, providing additional context to the claim about moving beyond theory [S123].

External Sources (132)
S1
AI That Empowers Safety Growth and Social Inclusion in Action — Peggy Hicks, Alex Walden, Rein Tammsaar
S5
Internet Human Rights: Mapping the UDHR to Cyberspace | IGF 2023 WS #85 — Peggy Hicks, Director of the Office of the UN High Commissioner for Refugees, participated in the session as a discussan…
S6
Digital Transformation for all: An Information Society that respects and protects human rights — – **Peggy Hicks** – Office of the High Commissioner on Human Rights (OHCHR) representative, panel moderator Peggy Hicks…
S7
New Technologies and the Impact on Human Rights — – **Peggy Hicks** – Director of the UN High Commission for Human Rights, human rights expertise Pablo Hinojosa: Please …
S8
AI That Empowers Safety Growth and Social Inclusion in Action — Parvati Adani from Sero Amarchan Mangaldas provided a powerful concluding perspective that reframed the technical and po…
S9
Keynote-Jeet Adani — -Moderator: Role involves introducing speakers and facilitating the discussion. Areas of expertise, specific role detail…
S10
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So it gives me great pleasure to just present today’s panellists and moderator. We’ve had Dr. Tawfiq Jilasi, who’s Assis…
S11
Ethical AI_ Keeping Humanity in the Loop While Innovating — 339 words | 73 words per minute | Duration: 276 secondss This afternoon to this UNESCO sponsored event, my name is Tim …
S12
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfiq Jilasi- Assistant Director General for Communication and Information (mentioned by Tim Curtis in introductio…
S13
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S14
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S17
Open Forum #34 How Do Technical Standards Shape Connectivity and Inclusion — – **Alex Walden** – Global Head of Human Rights, Google Alex Walden, Global Head of Human Rights at Google, articulated…
S18
WS #42 Combating misinformation with Election Coalitions — – Alex Walden – Global Head of Human Rights for Google 5. Government pressure: Alex Walden, Global Head of Human Rights…
S19
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Alexandria Walden: Global Head of Human Rights, Google – Nikki Muscati: Audience member who asked questions (role/aff…
S21
Embedding Human Rights in AI Standards: From Principles to Practice — – **Peggy Hicks** – Director of Thematic Engagement at the Office of the UN High Commissioner for Human Rights Ernst No…
S22
What Proliferation of Artificial Intelligence Means for Information Integrity? — – **Peggy Hicks** – Director of the Thematic Engagement, Special Procedures and Rights to Development Division at the UN…
S23
Who Watches the Watchers Building Trust in AI Governance — The IVO model offers several potential advantages over traditional approaches. Independence ensures that companies are n…
S24
AI diplomacy — We are, in essence, searching for a common language to discuss AI ethics, safety, and security. We can see the early res…
S25
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier techn…
S26
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — A key principle underlying UNESCO’s approach is the recognition that “everybody will have a very different view and appr…
S27
Main Session on Artificial Intelligence | IGF 2023 — Seth Center:Is the answer yes to that? But how? The tricky question is the how. Let me rewind just a minute to the quest…
S28
Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations — There is a pressing need to enhance safeguards against the digital targeting of vulnerable populations, particularly LGB…
S29
WS #362 Incorporating Human Rights in AI Risk Management — Alexandria Walden: All right. Thank you. Thanks for that question. Thanks to GNI for putting this session together. I th…
S30
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S31
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the import…
S32
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S33
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Collaboration across sectors through multistakeholder engagement is essential responsibility Multi-stakeholder particip…
S34
Accessible e-learning experience for PWDs-Best Practices | IGF 2023 WS #350 — In the context of education, the analysis emphasizes the need for inclusion to be integrated into everyday practice in e…
S35
IGF Parliamentary track – Session 2 — 6. Capacity Building and Education Shuaib Afolabi Salisu: Thank you so much. Let me start on a note of appreciation to…
S36
Open Forum #17 AI Regulation Insights From Parliaments — AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This …
S37
How Trust and Safety Drive Innovation and Sustainable Growth — Craig explains that Microsoft implements responsible AI governance programs internally and sees opportunities for differ…
S38
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S39
AI Meets Cybersecurity Trust Governance & Global Security — And Marie made a reference to the UN cyber norms process through the Open Internet Working Group, the group of governmen…
S40
Laying the foundations for AI governance — Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actu…
S41
TradeTech for Greener Supply Chains — Governments are viewed as key actors in closing the gap between technology disruption and regulation, and there is a pos…
S42
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Incentives that can be driven by policy development, that can be driven by economic incentive creation Therefore, the f…
S43
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Investors can directly engage companies to improve their policies, practices, and governance. Investors have the potent…
S44
Digital divides & Inclusion — Collaboration between government entities, private sector organizations, civil society, and academia is deemed critical …
S45
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — The diversity of civil society and the global majority, including different languages and cultural norms, should be cons…
S46
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — Audience member (unnamed) This is important to ensure that policies reflect diverse cultural perspectives, not just Wes…
S47
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade stresses the need for a multistakeholder approach in policymaking. She argues that policies often lack in…
S48
Setting the Rules_ Global AI Standards for Growth and Governance — I think it’s worth backing up from this thing. One of the original questions was, what are standards for? Is Chris’s min…
S49
Bridging the AI innovation gap — This comment provides a profound reframing of technical standards from bureaucratic requirements to tools of global equi…
S50
Setting the Rules_ Global AI Standards for Growth and Governance — Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, …
S51
Importance of Professional standards for AI development and testing — The disagreement level is moderate but significant for practical implementation. While speakers generally agree on the n…
S52
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Do we have the ethics and the inclusivity that’s required? And so those are areas that I think we have seen practical ex…
S53
Embedding Human Rights in AI Standards: From Principles to Practice — 1. **Capacity Building**: Need for sustained education programs to help technical experts understand human rights princi…
S54
WS #187 Bridging Internet AI Governance From Theory to Practice — Hadia Elminiawi: Regional and international strategies and cooperations should not be seen as conflicting with national …
S55
AI That Empowers Safety Growth and Social Inclusion in Action — The discussion revealed tension between framework proliferation and the need for practical implementation guidance. Diff…
S56
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The level of disagreement was moderate and constructive. Speakers shared common goals of protecting submarine cable infr…
S57
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S58
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Common standards for needs assessment and evaluation of learning programmes; – (b) Coordination and possibly int…
S59
Secure Finance Risk-Based AI Policy for the Banking Sector — This identifies a fundamental legal and philosophical challenge that current legal frameworks are unprepared to handle. …
S60
Who Watches the Watchers Building Trust in AI Governance — Summary:The speakers demonstrated strong consensus on the urgency of AI governance challenges, the inadequacy of current…
S61
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Ball emphasizes the importance of addressing AI surveillance and privacy concerns through specific, context-dependent so…
S62
AI Governance Dialogue: Steering the future of AI — Legal and regulatory | Development Martin emphasizes that effective AI governance requires local ownership and contextu…
S63
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — Effective governance requires different layers from core regulatory frameworks to voluntary commitments, as some aspects…
S64
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Minimum safeguards with international coordination while respecting local specificities and strategic goals Principle-b…
S65
WS #133 Better products and policies through stakeholder engagement — The discussion also addressed challenges, including time constraints, the fast pace of technology development, and poten…
S66
Voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI — Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issue…
S67
AI That Empowers Safety Growth and Social Inclusion in Action — High level of consensus on core principles and challenges, with speakers from different sectors (government, companies, …
S68
Press Conference: Closing the AI Access Gap — Countries need robust data strategies that include sharing frameworks and data protection measures. These strategies are…
S69
Leaders TalkX: Local to global: preserving culture and language in a digital era — Cultural diversity | Content policy | Multilingualism Multilinguality and cultural diversity must be viewed as core pri…
S70
Inclusive AI_ Why Linguistic Diversity Matters — Arguments:Data sharing decisions should be context-specific based on whether they serve public interest versus private c…
S71
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S72
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Multi-stakeholder engagement is essential but complex, requiring diverse expertise and perspectives
S73
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engag…
S74
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S75
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S76
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S77
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S78
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Collaboration across sectors through multistakeholder engagement is essential responsibility Multi-stakeholder particip…
S79
WS #187 Bridging Internet AI Governance From Theory to Practice — Multi-stakeholder governance approach is essential, but private actors’ commercial interests may limit participation in …
S80
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S81
How to make AI governance fit for purpose? — – **Multi-stakeholder involvement** – All speakers acknowledged the need for collaboration between governments, private …
S82
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — All speakers acknowledge that having strategies and frameworks is insufficient without proper implementation mechanisms,…
S83
AI That Empowers Safety Growth and Social Inclusion in Action — Major discussion point 2: Capacity building, education and operational tools
S84
Digital solutions for sustainability: ICT’s role in GHG reduction and biodiversity protection — Capacity building and training are critical for implementation
S86
Open Forum #34 How Do Technical Standards Shape Connectivity and Inclusion — Capacity building and translation between technical and policy communities is essential for effective multi-stakeholder …
S87
How Trust and Safety Drive Innovation and Sustainable Growth — Craig explains that Microsoft implements responsible AI governance programs internally and sees opportunities for differ…
S88
Leading tech companies commit to responsible development of AI at Seoul AI Summit — At an AI Seoul Summit 2024 meeting on Tuesday, sixteen companies leading the charge in artificial intelligence (AI) deve…
S89
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Giuseppe Claudio Cicu:So konnichiwa to everyone. And thank you, Mr. Thank you, Professor Bailey, for the introduction. I…
S90
Global Enterprises Show How to Scale Responsible AI — For POCs and experience and for some internal. internal use case. So let’s say if they’re doing some ask IT or other stu…
S91
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Investors can directly engage companies to improve their policies, practices, and governance. Investors have the potent…
S92
TradeTech for Greener Supply Chains — Governments are viewed as key actors in closing the gap between technology disruption and regulation, and there is a pos…
S93
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you, Ms. Gose, for really giving that perspective. Now may I invite Mr. Cyril Shroff who is of course the convener…
S94
Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64 — This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion…
S95
Digital divides & Inclusion — Collaboration between government entities, private sector organizations, civil society, and academia is deemed critical …
S96
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Addressing the digital language divide requires coordinated efforts from all sectors of society working together. No sin…
S97
WS #302 Upgrading Digital Governance at the Local Level — The second phase demonstrated practical results, with one pilot municipality (Reba) improving its score from 30% to 39% …
S98
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S99
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S100
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S101
The role of standards in shaping an AI-driven future — The tone is consistently formal, authoritative, and optimistic throughout. The speaker maintains a confident and promoti…
S102
Friday Opening Ceremony: Summit of the Future Action Days — The overall tone was inspirational, hopeful and energetic. Speakers aimed to motivate and empower youth attendees while …
S103
Open Forum #48 Implementation of the Global Digital Compact — The discussion maintained a constructive and collaborative tone throughout, with speakers demonstrating both urgency abo…
S104
WS #69 Beyond Tokenism Disability Inclusive Leadership in Ig — The discussion maintained a constructive and collaborative tone throughout, characterized by professional expertise and …
S105
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers were solu…
S106
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S107
Session — Concluding sessions are pivotal for reflecting the outcomes of negotiations and the interests of different stakeholders,…
S108
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Overall Tone:The tone is consistently optimistic, inspirational, and forward-looking throughout the speech. The speaker …
S109
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S110
Closure of the session — Meaningful stakeholder participation The United States argues that the current modalities for stakeholder participation…
S111
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — The tone is consistently optimistic, inspirational, and forward-looking throughout the speech. The speaker maintains an …
S112
Internet standards and human rights | IGF 2023 WS #460 — Moderator – Sheetal Kumar:Hello, everyone. Good morning. Welcome to this session on Internet Standards and Human Rights….
S113
Day 0 Event #150 Digital Rights in Partnership Strategies for Impact — – **Peggy Hicks** – Works with the Office of the High Commissioner for Human Rights in Geneva Peggy Hicks: Great questi…
S114
UN Human Rights Council: High level discussion on AI and human rights — And its impact on society is accelerating. And we’re still only just starting to think about what that means. So I think…
S115
Software.gov — Bogdan-Martin advocates for the inclusion of citizens and private entities in government plans, emphasizing the importan…
S116
Protecting Democracy against Bots and Plots — They argue that this is necessary to maintain peace, justice, and strong institutions. Companies are also called upon to…
S117
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S118
Unlocking Multistakeholder Cooperation within the UN System: Global Partnerships for Open Internet — Additionally, the guidelines outline a systematic process from initial stakeholder identification and engagement, throug…
S119
WS #82 A Global South perspective on AI governance — Jenny Domino: Yeah, of course. Thank you. Maybe I’ll just quickly comment on all the questions and comments. So on …
S120
WS #395 Applying International Law Principles in the Digital Space — Francisco Brito Cruz: Thank you. I hope you are all listening to me. Hello from Sao Paulo. I’m wanting to be with all of…
S121
Closing remarks — This comment provides the conceptual foundation for the standards discussion that follows. It explains why technical sta…
S122
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — This comment reframes the entire trust vs. innovation debate by rejecting the false dichotomy. It establishes that trust…
S123
Pre 7: Advancing Digital Inclusivity: UNESCO’s Measurement Approaches — This comment reframes multi-stakeholder engagement from a procedural requirement to a trust-building mechanism, introduc…
S124
WS #110 AI Innovation Responsible Development Ethical Imperatives — Godoi outlined UNESCO’s three-pronged approach: fostering opportunities through AI development, mitigating risks through…
S125
From principles to implementation – pathways forward — Gabriela Ramos:Well thank you, thank you so much, and thank you all for being here. I have to first and foremost say tha…
S126
UK Minister warns that NATO must adapt to AI threats — The UK government hasannouncedthe launch of a Laboratory for AI Security Research (LASR), an initiative to protect again…
S128
The Alan Turing Institute stresses AI’s vital role in UK national security — A recentreportfrom the Turing’s Centre for Emerging Technology and Security (CETaS), commissioned by the UK government, …
S129
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — 3. UNESCO’s readiness assessment methodology and ethical impact assessment framework Rosanna Fanni emphasized UNESCO’s …
S130
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Final MOOC planned to be held worldwide by early 2026 LG AI Research is collaborating with UNESCO to develop online edu…
S131
https://dig.watch/event/india-ai-impact-summit-2026/ai-that-empowers-safety-growth-and-social-inclusion-in-action-2 — And the course will be delivered on Coursera with a very clear goal, to make AI ethics learning accessible to a wide glo…
S132
Rights and Permissions — Tertiary systems have not remained impervious to these changing demands-general and vocational tracks often inte…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Peggy Hicks
2 arguments | 169 words per minute | 2469 words | 876 seconds
Argument 1
Global norms and practical safeguards are essential to ensure AI works for all people, not just advanced economies. (Peggy Hicks)
EXPLANATION
Peggy emphasizes that AI must be governed by worldwide standards and concrete safeguards so that its benefits reach everyone, not only wealthy nations or dominant platforms. She links these safeguards to responsible AI governance and clear rules for both companies and governments.
EVIDENCE
She notes that practical safeguards are needed to make AI work for people beyond advanced economies and that alignment around global norms will help achieve this goal, citing her remarks about responsible AI governance and global norms [8-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Peggy’s call for worldwide standards and concrete safeguards is documented in her remarks during the AI dialogue, emphasizing that AI must benefit everyone, not only advanced economies [S7] and her focus on responsible AI governance [S21].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir, Yuchil Kim
Argument 2
Rewarding companies that engage responsibly creates a “race to the top” and aligns market incentives with ethical outcomes. (Peggy Hicks)
EXPLANATION
Peggy argues that incentives should be designed so that companies that adopt responsible AI practices are rewarded, encouraging a competitive push toward higher standards. This creates a market‑driven “race to the top” for ethical AI.
EVIDENCE
She states that incentives for companies to be engaged responsibly should reward those companies, highlighting the need for such mechanisms [13].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Namit Agarwal, Alex Walden, Hector Duroir
DISAGREED WITH
Namit Agarwal
Tim Curtis
2 arguments | 158 words per minute | 740 words | 280 seconds
Argument 1
UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
EXPLANATION
Tim explains that UNESCO’s AI ethics recommendations offer a common baseline for AI development, which UNESCO is adapting to national contexts via Readiness Assessment Methodology Reports (RAMS). These assessments give countries a clear view of their AI landscape and guide responsible, human‑centred deployment.
EVIDENCE
He mentions UNESCO’s recommendation on AI ethics as a shared foundation and describes the RAMS reports launched in over 80 countries, including a recent assessment for India, which provide evidence-based diagnostics of regional AI landscapes [32-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tim explains that UNESCO’s AI ethics recommendations serve as a common baseline and that the RAMS reports have been launched in over 80 countries to adapt these norms locally, as highlighted in the UNESCO-sponsored event transcript [S11] and the broader UNESCO recommendation overview [S24].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Rein Tammsaar, Alex Walden, Hector Duroir, Yuchil Kim
DISAGREED WITH
Ankit Bose
Argument 2
A global MOOC on AI ethics will make “ethics‑by‑design” training accessible to a wide audience and support day‑to‑day implementation. (Tim Curtis)
EXPLANATION
Tim announces a massive open online course (MOOC) on AI ethics that will be delivered on Coursera, aiming to embed ethics‑by‑design into everyday AI work. The course will give learners practical tools to consider fairness, transparency, safety, accountability and inclusion early in the development cycle.
EVIDENCE
He describes the development of a global MOOC in partnership with LG AI Research, its delivery on Coursera, and its focus on ethics-by-design, practical tools, and day-to-day decision-making [37-44].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
AGREED WITH
Yuchil Kim, Ankit Bose, Rein Tammsaar
Rein Tammsaar
2 arguments · 126 words per minute · 576 words · 273 seconds
Argument 1
Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
EXPLANATION
Rein outlines the four key priorities identified by UN member states for AI governance: ensuring AI is safe and trustworthy, bridging capacity gaps in developing countries, creating interoperable cross‑border governance, and grounding AI in human‑rights law. These priorities guide the Global Dialogue on AI Governance.
EVIDENCE
He lists the four points: safe, secure, trustworthy AI; closing capacity gaps; cross-border governance and interoperability; and anchoring AI in human rights, including protection of vulnerable groups and bias mitigation [67-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tammsaar outlines these four priority areas (safe and trustworthy AI, capacity-gap closure, interoperable cross-border governance, and human-rights anchoring) in the AI dialogue session summary [S2].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Tim Curtis, Yuchil Kim, Ankit Bose
Argument 2
Human‑rights‑based AI must protect vulnerable groups and address bias, requiring continuous dialogue with civil society. (Rein Tammsaar)
EXPLANATION
Rein stresses that AI systems must be anchored in human‑rights law to safeguard vulnerable populations, mitigate bias and discrimination, and ensure accountability. Ongoing engagement with civil society is essential to monitor and enforce these protections.
EVIDENCE
He notes that anchoring AI in human rights includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability [73-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to safeguard vulnerable populations and mitigate bias is reinforced by discussions on protecting marginalized groups such as LGBTQI+ individuals in AI systems, as noted in the human-rights impact session [S28].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Alex Walden, Hector Duroir, Parvati Adani, Yuchil Kim
Alex Walden
4 arguments · 184 words per minute · 1023 words · 332 seconds
Argument 1
Adoption of the UN Guiding Principles and other international frameworks guides corporate AI policies and practices. (Alex Walden)
EXPLANATION
Alex states that Google incorporates the UN Guiding Principles on Business and Human Rights, along with OECD and UNESCO frameworks, into its AI policies. These international standards shape the company’s internal governance and operational guidelines.
EVIDENCE
He references using the UN Guiding Principles, OECD work, UNESCO recommendations, and engagement with peers through the BTEC project to inform internal AI governance and policies [138-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Alex describes Google’s integration of the UN Guiding Principles on Business and Human Rights, alongside OECD and UNESCO frameworks, into its AI governance policies, as recorded in the session transcript [S2].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Tim Curtis, Rein Tammsaar, Hector Duroir, Yuchil Kim
Argument 2
Google’s multilayered approach includes values‑based policies, model‑level requirements, executive review, and post‑launch monitoring. (Alex Walden)
EXPLANATION
Alex describes Google’s internal AI governance as a tiered system: company‑wide values and AI principles, granular model‑level testing, application‑level guardrails, executive risk reviews before launch, and continuous post‑launch monitoring to catch residual risks.
EVIDENCE
He details model-level requirements, application-level testing and guardrails, executive review of risks, and post-launch monitoring processes that ensure safety and trust throughout the product lifecycle [149-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multi-tiered governance model (spanning corporate values, model-level testing, application-level guardrails, executive risk reviews, and continuous post-launch monitoring) is detailed in the AI dialogue discussion of Google’s internal processes [S2] and reinforced in the broader overviews of Google’s layered approach [S1][S29].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
Argument 3
Market demand for trustworthy products pushes firms to embed safety, fairness, and accountability into their offerings. (Alex Walden)
EXPLANATION
Alex points out that Google’s business model creates a market incentive to deliver safe, trusted products because consumer trust drives usage of services like Search and Gmail. This market pressure motivates the company to embed ethical safeguards.
EVIDENCE
He explains that Google’s products are trusted by consumers and that there is a business reason to put out safe, trusted products, which drives internal processes [149-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s business model creates a market incentive for safe, trusted products, driving the company to embed ethical safeguards, as highlighted in the session summary [S1].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Peggy Hicks, Namit Agarwal, Hector Duroir
Argument 4
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden)
EXPLANATION
Alex outlines Google’s approach to external engagement: a systematic program for regular stakeholder dialogue, trusted‑tester programs that give third‑parties early access to test models, and open‑source projects like the Amplify Initiative that let communities help fine‑tune language models.
EVIDENCE
He mentions a programmatic engagement approach, trusted-tester programs for early access, the Impact Lab’s community research, and the open-source Amplify Initiative for language inclusion [301-310].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Alex outlines Google’s systematic stakeholder engagement strategy, including trusted-tester programs and open-source projects like the Amplify Initiative, to broaden community participation, as captured in the dialogue transcript [S2].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Hector Duroir, Parvati Adani, Rein Tammsaar, Yuchil Kim
Hector Duroir
4 arguments · 150 words per minute · 891 words · 356 seconds
Argument 1
Microsoft aligns its Responsible AI standards with OECD and UNESCO principles and uses them to shape internal programs. (Hector Duroir)
EXPLANATION
Hector explains that Microsoft’s Responsible AI (RAI) standards are built on the OECD AI Principles and UNESCO’s AI ethics recommendation, providing a principled foundation for the company’s internal AI governance and product development.
EVIDENCE
He cites the influence of the OECD AI principles and the UNESCO recommendation on Microsoft’s AI governance program, the creation of the RAI standard in 2019 [184-185], and an earlier reference to the RAI standard [176-177].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Yuchil Kim
Argument 2
Microsoft’s Office of Responsible AI and Sensitive Use‑Case program embed principles into product development and involve board‑level oversight. (Hector Duroir)
EXPLANATION
He describes the Office of Responsible AI, created in 2019, which runs a Sensitive Use‑Case program to triage high‑risk AI applications. When necessary, cases are escalated to the ITER committee, which includes senior leadership and board members, ensuring governance at the highest level.
EVIDENCE
He details the creation of the Office of Responsible AI, the Sensitive Use-Case team’s triage work, and escalation to the ITER committee involving the CTO and board level [176-182].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
Argument 3
Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir)
EXPLANATION
Hector highlights a partnership with NGOs in India on the Samishka project, which develops community‑led safety benchmarks that reflect local cultural and linguistic contexts, addressing the limitation of English‑only safety tools.
EVIDENCE
He explains that Microsoft works with NGOs on the Samishka project to build community-led benchmarks, providing safety tools grounded in specific cultural and contextual aspects, especially for non-English languages [282-285].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Alex Walden, Parvati Adani, Rein Tammsaar, Yuchil Kim
Argument 4
Voluntary commitments at international summits help translate regulatory expectations into concrete corporate actions. (Hector Duroir)
EXPLANATION
Hector notes that voluntary commitments signed at AI summits, such as those at Bletchley Park (UK) and in South Korea, have guided Microsoft’s model testing and risk‑management practices, turning high‑level expectations into operational triggers.
EVIDENCE
He references voluntary commitments from AI summits, including Bletchley Park and the South Korea summit, which informed Microsoft’s testing approach for public safety and national security risks [188-192].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Peggy Hicks, Namit Agarwal, Alex Walden
DISAGREED WITH
Peggy Hicks
Yuchil Kim
3 arguments · 146 words per minute · 272 words · 111 seconds
Argument 1
LG integrates UNESCO recommendations into its AI risk standards and publishes annual accountability reports to demonstrate compliance. (Yuchil Kim)
EXPLANATION
Yuchil states that LG incorporates UNESCO’s AI ethics recommendations into its internal risk standards and issues an annual accountability report to show how it meets those standards, thereby providing transparency and compliance evidence.
EVIDENCE
He mentions that LG aligns its AI risk standards with UNESCO recommendations, has created an AI-powered data compliance system, and released the third edition of an annual accountability report on AI ethics activities [209-213].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir
Argument 2
Providing practitioners with concrete standards, risk tools, and transparent reporting bridges the gap between theory and practice. (Yuchil Kim)
EXPLANATION
Yuchil explains that LG focuses on giving practitioners clear standards, risk‑assessment tools, and transparent reporting to turn theoretical AI ethics guidance into actionable day‑to‑day practice.
EVIDENCE
He describes the MOOC’s role in bridging theory and practice, the provision of concrete risk standards, and the publication of an annual report to increase transparency [209-213].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
AGREED WITH
Tim Curtis, Ankit Bose, Rein Tammsaar
Argument 3
LG’s annual AI ethics report and internal policies provide transparent guidance for developers and product teams. (Yuchil Kim)
EXPLANATION
Yuchil notes that LG’s yearly AI ethics accountability report, together with internal policies, offers developers clear guidance on responsible AI development and helps embed ethical considerations into product pipelines.
EVIDENCE
He references the annual accountability report on AI ethics activities and internal policies that support responsible AI implementation [209-213].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
AGREED WITH
Alex Walden, Hector Duroir, Parvati Adani, Rein Tammsaar
Ankit Bose
2 arguments · 179 words per minute · 758 words · 253 seconds
Argument 1
NASSCOM builds capacity across the Indian tech ecosystem—government, startups, SMEs—so they can adopt responsible AI early. (Ankit Bose)
EXPLANATION
Ankit describes NASSCOM’s four‑decade effort to develop the Indian tech sector, focusing on building capacity, creating open assets, and helping governments, startups, and SMEs adopt responsible AI practices from the outset.
EVIDENCE
He outlines NASSCOM’s mission since 2021 to address gaps in responsible AI, develop open assets, build capacity across government, startups, and SMEs, and promote early adoption of responsible AI governance for upside benefits [89-105].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
AGREED WITH
Tim Curtis, Yuchil Kim, Rein Tammsaar
Argument 2
Internal silos (tech, business, legal, finance) hinder responsible AI; cross‑functional collaboration on use‑case basis is needed. (Ankit Bose)
EXPLANATION
Ankit points out that different functional groups within companies often work in isolation, which impedes responsible AI implementation. He advocates for collaborative, use‑case‑driven approaches that bring together tech, business, legal, and finance teams.
EVIDENCE
He describes how internal groups (tech, business, legal, finance) operate in silos, the need for collaboration on a use-case basis, and the challenge of translating frameworks into actionable steps for developers [250-267].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
DISAGREED WITH
Tim Curtis
Namit Agarwal
2 arguments · 175 words per minute · 681 words · 233 seconds
Argument 1
Investors and civil‑society actors should maintain ongoing dialogue with companies to accelerate responsible practices. (Namit Agarwal)
EXPLANATION
Namit stresses that continuous engagement between investors, civil‑society groups, and companies is crucial for speeding up the adoption of responsible AI. Dialogue helps bring fence‑sitters up to speed and ensures that best practices are shared.
EVIDENCE
He emphasizes the importance of ongoing engagement, noting that investors have already interacted with Google and Microsoft, and that dialogue should also target fence-sitters to accelerate responsible innovation [236-241].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
Argument 2
Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal)
EXPLANATION
Namit argues that while capital can incentivize responsible AI, it must be linked to concrete governance structures such as board responsibility, aligned executive incentives, and robust human‑rights impact assessments to be effective.
EVIDENCE
He notes that capital alone is insufficient, describing the need for board-level AI governance, executive incentives aligned with long-term risk management, and AI-specific impact assessments as catalytic for investors [226-232].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Peggy Hicks, Alex Walden, Hector Duroir
DISAGREED WITH
Peggy Hicks
Parvati Adani
2 arguments · 133 words per minute · 564 words · 253 seconds
Argument 1
Highlighting AI’s inability to self‑regulate underscores the need for human‑driven ethical guidance and education. (Parvati Adani)
EXPLANATION
Parvati reflects on an experiment where she asked an AI tool about its ethical limits and received no answer, illustrating that AI cannot self‑regulate and that human oversight and education are essential.
EVIDENCE
She recounts asking the AI whether it has ethical limits, receiving a response that it does not know and that no one can verify its conscience, highlighting the philosophical gap and the need for human-driven guidance [322-332].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
Argument 2
Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani)
EXPLANATION
Parvati argues that AI frameworks that ignore multilingualism, gender diversity, and informal contexts are inherently incomplete. She calls for interoperable, flexible systems that incorporate these dimensions to achieve truly inclusive AI.
EVIDENCE
She states that any framework lacking language, gender, and informal context considerations is incomplete by design, emphasizing the need for inclusive, interoperable systems [335-338].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Alex Walden, Hector Duroir, Rein Tammsaar, Yuchil Kim
Agreements
Agreement Points
Global standards and practical safeguards are essential to ensure AI works for all people, not only advanced economies.
Speakers: Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir, Yuchil Kim
Global norms and practical safeguards are essential to ensure AI works for all people, not just advanced economies. (Peggy Hicks)
UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
Adoption of the UN Guiding Principles and other international frameworks guides corporate AI policies and practices. (Alex Walden)
Microsoft aligns its Responsible AI standards with OECD and UNESCO principles and uses them to shape internal programs. (Hector Duroir)
LG integrates UNESCO recommendations into its AI risk standards and publishes annual accountability reports to demonstrate compliance. (Yuchil Kim)
All speakers stressed that worldwide norms – such as the UNESCO recommendation on AI ethics, the OECD AI principles and the UN Guiding Principles on Business and Human Rights – must be turned into concrete, practical safeguards so that AI benefits everyone, not just advanced economies [8-9][32-35][67-74][138-140][184-185][209-213].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis in the “Setting the Rules: Global AI Standards for Growth and Governance” reports, which argue that standards should serve inclusive global equity and adaptable, process-oriented safeguards rather than favoring advanced economies [S48][S49][S50].
Incentivising responsible AI through market mechanisms and rewards creates a “race to the top”.
Speakers: Peggy Hicks, Namit Agarwal, Alex Walden, Hector Duroir
Rewarding companies that engage responsibly creates a “race to the top” and aligns market incentives with ethical outcomes. (Peggy Hicks)
Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal)
Market demand for trustworthy products pushes firms to embed safety, fairness, and accountability into their offerings. (Alex Walden)
Voluntary commitments at international summits help translate regulatory expectations into concrete corporate actions. (Hector Duroir)
Speakers agreed that aligning financial incentives – rewarding responsible firms, linking capital to board-level AI oversight, responding to market demand for trustworthy products, and using voluntary summit commitments – can spur a “race to the top” for ethical AI [13][226-232][149-152][188-192].
Capacity building and education are needed to turn AI ethics theory into day‑to‑day practice.
Speakers: Tim Curtis, Yuchil Kim, Ankit Bose, Rein Tammsaar
A global MOOC on AI ethics will make “ethics‑by‑design” training accessible to a wide audience and support day‑to‑day implementation. (Tim Curtis)
Providing practitioners with concrete standards, risk tools, and transparent reporting bridges the gap between theory and practice. (Yuchil Kim)
NASSCOM builds capacity across the Indian tech ecosystem—government, startups, SMEs—so they can adopt responsible AI early. (Ankit Bose)
Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
All highlighted the need for concrete capacity-building measures – UNESCO’s RAMS assessments and a global MOOC, LG’s practitioner-focused standards, NASSCOM’s ecosystem-wide training, and the broader priority of closing capacity gaps – to make AI ethics actionable on the ground [32-35][37-44][209-213][89-105][68-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is highlighted in policy dialogues such as the AI Governance Implementation and Capacity Building panel and the Embedding Human Rights in AI Standards session, both calling for sustained education programmes to translate ethics into operational practice [S52][S53].
Multi‑stakeholder engagement and inclusion of diverse languages, cultures and civil‑society perspectives are essential for trustworthy AI.
Speakers: Alex Walden, Hector Duroir, Parvati Adani, Rein Tammsaar, Yuchil Kim
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden)
Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir)
Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani)
Human‑rights‑based AI must protect vulnerable groups and address bias, requiring continuous dialogue with civil society. (Rein Tammsaar)
LG’s annual AI ethics report and internal policies provide transparent guidance for developers and product teams. (Yuchil Kim)
Speakers converged on the importance of systematic engagement with civil society, NGOs, academia and language-diverse communities, and on publishing transparent guidance, to ensure AI systems respect cultural, linguistic and gender diversity and protect vulnerable groups [301-310][276-285][335-338][73-75][209-213].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder approaches are repeatedly endorsed in AI governance literature, including the “Who Watches the Watchers” summary, WSIS Action Line C10, and the Global Human Rights Approach, all stressing inclusive processes to build trust and legitimacy [S60][S72][S73][S75].
Similar Viewpoints
Both stress that global standards must be adapted to national contexts through tools such as readiness assessments to make AI beneficial for everyone [8-9][32-35].
Speakers: Peggy Hicks, Tim Curtis
Global norms and practical safeguards are essential to ensure AI works for all people, not just advanced economies. (Peggy Hicks)
UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
Both companies embed internationally‑agreed principles (UN Guiding Principles, OECD, UNESCO) into their internal AI governance structures [138-140][184-185].
Speakers: Alex Walden, Hector Duroir
Adoption of the UN Guiding Principles and other international frameworks guides corporate AI policies and practices. (Alex Walden)
Microsoft aligns its Responsible AI standards with OECD and UNESCO principles and uses them to shape internal programs. (Hector Duroir)
Both underline that effective governance (board‑level oversight, incentives) and ecosystem‑wide capacity building are needed to translate investment into responsible AI outcomes [226-232][89-105].
Speakers: Namit Agarwal, Ankit Bose
Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal)
NASSCOM builds capacity across the Indian tech ecosystem—government, startups, SMEs—so they can adopt responsible AI early. (Ankit Bose)
Both stress that AI frameworks that ignore linguistic, gender and vulnerable‑group considerations are fundamentally incomplete and must involve civil‑society dialogue [335-338][73-75].
Speakers: Parvati Adani, Rein Tammsaar
Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani)
Human‑rights‑based AI must protect vulnerable groups and address bias, requiring continuous dialogue with civil society. (Rein Tammsaar)
Unexpected Consensus
Corporate leaders and civil‑society participants both highlighted language and cultural diversity as a core requirement for trustworthy AI.
Speakers: Alex Walden, Hector Duroir, Parvati Adani
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden)
Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir)
Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani)
While corporate representatives usually focus on technical safeguards, they unexpectedly aligned with civil society’s call for multilingual and culturally aware AI, emphasizing community-led benchmarks and open-source tools as essential for inclusion [301-310][276-285][335-338].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of linguistic and cultural diversity is framed as a core principle in discussions on preserving culture and multilingualism, emphasizing AI systems must support diverse linguistic contexts to be trustworthy [S69][S70].
Overall Assessment

There is strong consensus that global norms, human‑rights‑based principles and practical safeguards are the foundation for responsible AI, that market incentives and board‑level governance can create a race to the top, that capacity‑building tools (MOOC, readiness assessments, ecosystem training) are needed to operationalise ethics, and that multi‑stakeholder, linguistically‑inclusive engagement is essential.

High consensus across UN, civil‑society and major tech firms on the need for standards, incentives, capacity building and inclusive stakeholder engagement. This convergence suggests a solid basis for coordinated policy actions, joint initiatives and the development of interoperable frameworks that can be scaled globally.

Differences
Different Viewpoints
Voluntary commitments vs formal regulatory safeguards
Speakers: Peggy Hicks, Hector Duroir
Responsible and effective AI governance requires clarity of rules for both companies and government, and alignment around global norms will help us get to that point. (Peggy Hicks)
Voluntary commitments at international summits help translate regulatory expectations into concrete corporate actions. (Hector Duroir)
Peggy stresses the need for clear, enforceable rules and practical safeguards backed by governments to make AI work for everyone, whereas Hector highlights the role of voluntary industry commitments signed at AI summits as the primary mechanism to operationalise standards, reflecting a split between mandatory regulation and voluntary self-regulation. [8-9][188-192]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between self-regulation and statutory rules is documented in debates over professional standards, parliamentary sessions advocating layered approaches, and the voluntary commitments framework that positions corporate pledges as interim measures pending formal regulation [S51][S63][S64][S65][S66].
Proliferation of frameworks vs need for unified actionable guidance
Speakers: Ankit Bose, Tim Curtis
Internal silos (tech, business, legal, finance) hinder responsible AI; cross‑functional collaboration on use‑case basis is needed. (Ankit Bose) UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
Ankit argues that the multitude of national and sectoral AI frameworks creates confusion for developers and calls for concrete, actionable guidance. Tim, by contrast, asserts that UNESCO’s global recommendations, operationalised via RAMS assessments and a MOOC, already supply a unified foundation for responsible AI, indicating disagreement on whether existing global frameworks are sufficient or need simplification. [258-267][32-35][37-44]
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about an overabundance of AI governance frameworks versus the need for coherent, actionable guidance are articulated in the “AI That Empowers Safety Growth and Social Inclusion” discussion, which calls for differentiated yet coordinated approaches for various organisational contexts [S55].
Incentive design: race‑to‑the‑top rewards vs board‑level governance tied to capital
Speakers: Peggy Hicks, Namit Agarwal
Rewarding companies that engage responsibly creates a “race to the top” and aligns market incentives with ethical outcomes. (Peggy Hicks)
Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal)
Both speakers agree incentives are needed, but Peggy focuses on broadly rewarding responsible companies to create a competitive “race to the top,” whereas Namit stresses that investment must be linked to specific governance structures (board responsibility, executive incentives, and human-rights impact assessments), showing a divergence on how incentives should be structured. [13][226-232]
Unexpected Differences
Philosophical limits of AI vs procedural corporate safeguards
Speakers: Parvati Adani, Alex Walden
Highlighting AI’s inability to self‑regulate underscores the need for human‑driven ethical guidance and education. (Parvati Adani)
Google’s multilayered approach includes values‑based policies, model‑level requirements, executive review, and post‑launch monitoring. (Alex Walden)
Parvati brings a philosophical argument that AI lacks any intrinsic ethical conscience and therefore requires human oversight and education, whereas Alex concentrates on concrete corporate processes and tools to embed ethics, revealing an unexpected split between a fundamental philosophical stance and a pragmatic procedural approach. [322-332][149-162]
POLICY CONTEXT (KNOWLEDGE BASE)
The fundamental legal and philosophical challenges of AI, such as questions of ownership and creativity, are highlighted in the Secure Finance Risk-Based AI Policy for the Banking Sector, contrasting with more procedural corporate safeguard models [S59].
Overall Assessment

The panel displayed moderate disagreement centered on how best to translate global AI ethics standards into practice. Key tensions emerged between voluntary industry commitments versus formal regulatory safeguards, the overload of disparate frameworks versus the need for unified actionable guidance, and differing designs of incentive mechanisms (broad market rewards versus board‑level governance tied to capital). While participants shared common goals—responsible, human‑rights‑based AI and inclusive stakeholder engagement—their preferred pathways diverged, indicating that consensus on implementation strategies remains unsettled.

Moderate disagreement: participants largely agree on overarching objectives but differ on the mechanisms (regulatory vs voluntary, framework simplification, incentive architecture). This suggests that future policy work will need to reconcile these approaches to achieve coherent, scalable AI governance.

Partial Agreements
Both agree that AI must be governed responsibly and anchored in human rights, but Peggy emphasizes the need for global norms and practical safeguards, while Rein highlights specific priority areas (trust, capacity, interoperability, human‑rights anchoring) without detailing the mechanisms for safeguards. [8-9][67-74]
Speakers: Peggy Hicks, Rein Tammsaar
Responsible and effective AI governance requires clarity of rules for both companies and government. (Peggy Hicks)
Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
Both stress the importance of external collaboration and stakeholder engagement, yet Alex focuses on internal programs (trusted testers, Impact Lab, Amplify Initiative) while Hector points to partnerships with NGOs for community‑led benchmarks, showing agreement on the goal of inclusive engagement but differing on the primary partnership models. [301-310][282-285]
Speakers: Alex Walden, Hector Duroir
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden) Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir)
Takeaways
Key takeaways
Global norms, standards and practical safeguards are essential to ensure AI benefits all people, not just advanced economies.
Four priority areas identified by the UN Global Dialogue: trustworthy AI, closing capacity gaps, cross-border governance, and anchoring AI in human rights.
UNESCO's AI ethics recommendations and the UN Guiding Principles on Business and Human Rights are being used as common foundations for corporate policies.
Capacity-building initiatives such as UNESCO's global AI-ethics MOOC and NASSCOM's ecosystem-wide programs aim to translate theory into day-to-day practice.
Large tech firms (Google, Microsoft, LG) have multilayered internal governance models that combine values-based policies, model-level testing, executive oversight and post-launch monitoring.
Cross-functional silos within companies hinder responsible AI; collaboration across tech, legal, finance and business units is needed.
Multi-stakeholder engagement, including civil society, academia, NGOs and investors, is critical for inclusive, culturally aware AI and for creating community-led benchmarks.
Incentives tied to capital (board-level AI governance, executive incentives, impact assessments) can turn responsible AI into a "race to the top".
Voluntary commitments made at international summits provide a pragmatic bridge between regulation and industry action.
Resolutions and action items
Launch of a UNESCO-partnered global MOOC on AI ethics (to be delivered on Coursera); timeline: upcoming launch, with an open invitation for learners and partners.
Continuation of the BTEC convenings and the UN-led Global Dialogue on AI Governance, with a flagship meeting scheduled for July in Geneva.
Encourage companies to publish transparent annual AI-ethics accountability reports (e.g., LG's third edition, Microsoft's Sensitive Use-Case disclosures).
Investors and civil-society groups to adopt a three-step engagement framework: board-level AI oversight, product-level governance checks, and robust human-rights impact assessments.
Develop and pilot community-led benchmark datasets for safety and bias testing in non-English contexts (e.g., Microsoft's Samishka project in India).
NASSCOM to expand capacity-building workshops and open-asset libraries for startups, SMEs and government agencies across India.
All panelists agreed to share best-practice case studies with the World Benchmarking Alliance for inclusion in future benchmarking reports.
Unresolved issues
How to harmonise the growing number of national and sectoral AI frameworks into a single, actionable roadmap for developers.
Concrete mechanisms for financing the capacity gaps of developing-country firms (infrastructure, compute, talent).
Standardised methodology for AI-specific human-rights impact assessments that can be audited across industries.
Scalable ways to bring small startups into formal responsible-AI processes without over-burdening them.
Long-term governance of post-launch monitoring: who owns the data, how often updates are required, and how to enforce remediation.
Ensuring that multilingual and informal language contexts are fully integrated into safety tools beyond ad-hoc community projects.
Suggested compromises
Adopt a flexible, non-prescriptive approach: the Global Dialogue will not impose a single model but will identify common ground and build on existing initiatives.
Use voluntary commitments at international summits as a stepping-stone toward formal regulation, allowing companies to demonstrate progress while regulators develop standards.
Combine top-down standards (UN, OECD, UNESCO) with bottom-up, community-led benchmarks to balance global consistency with local relevance.
Encourage a programmatic stakeholder-engagement process (regular dialogue) complemented by ad-hoc consultations for specific product launches.
Promote a "race to the top" incentive structure in which companies that meet board-level AI governance and impact-assessment criteria receive market-based rewards or preferential access to funding.
Thought Provoking Comments
Trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability.
Frames trust as a product of intentional design rather than a by‑product of innovation, shifting the conversation from abstract ethics to concrete engineering practices.
Set the tone for the rest of the session, prompting other speakers (e.g., Alex Walden, Hector Duroir) to describe concrete governance mechanisms and leading to the introduction of the UNESCO MOOC on ‘ethics by design’.
Speaker: Tim Curtis
We are developing a global massive open online course (MOOC) on the ethics of artificial intelligence… with a clear goal to make AI ethics learning accessible to a wide global audience and to make a practical… for day‑to‑day work.
Introduces a tangible, scalable solution that moves the discussion from policy rhetoric to capacity‑building action, addressing the gap between standards and implementation.
Created a new topic of discussion around education and skill‑building, which later resurfaced when Yuchil Kim and Alex Walden referenced training programs and the need for practical tools.
Speaker: Tim Curtis
Four priorities from member states: 1) safe, secure and trustworthy AI; 2) closing capacity gaps; 3) cross‑border governance and interoperability; 4) AI anchored in human rights and international law.
Synthesises the diverse concerns of UN member states into a clear, actionable framework, providing a shared reference point for the panel.
Guided the subsequent contributions, as speakers repeatedly mapped their own initiatives (e.g., Google’s model‑level requirements, Microsoft’s Sensitive Use Case program) onto these four pillars.
Speaker: Rein Tammsaar
Before any product goes to market, there are model‑level requirements, application‑level testing, executive review, and post‑launch monitoring.
Offers a concrete, multi‑layered process that illustrates how a large tech company operationalises responsible AI, moving the dialogue from abstract principles to real‑world workflow.
Prompted other participants (e.g., Hector Duroir, Namit Agarwal) to discuss similar governance structures and to compare accountability mechanisms across companies.
Speaker: Alex Walden
Our Sensitive Use Case program triages high‑risk AI applications, brings them to the ITER ethics committee, and involves the board at the CTO and senior‑leadership level.
Shows how Microsoft embeds ethical review into its organisational hierarchy, highlighting the importance of board‑level oversight.
Reinforced the theme of executive responsibility introduced by Alex, and later fed into Namit’s call for board‑level AI risk oversight.
Speaker: Hector Duroir
Only about 10% of the 2,000 companies we assessed meet global governance expectations, and none disclose human-rights impact assessments.
Provides hard data that exposes a substantial compliance gap, challenging the assumption that most firms are already aligned with standards.
Shifted the tone from showcasing best practices to confronting systemic shortcomings, leading to a deeper discussion on investor leverage and concrete accountability measures.
Speaker: Namit Agarwal
Investors should first ask whether there is clear board‑level responsibility on AI risk, whether executive incentives are aligned with long‑term human‑rights risk mitigation, and whether governance applies across the full AI value chain.
Translates the earlier data gap into actionable questions for capital providers, linking finance directly to responsible AI governance.
Sparked a concrete set of recommendations that other panelists (e.g., Alex, Hector) referenced when describing their own internal oversight mechanisms.
Speaker: Namit Agarwal
When I asked the AI tool whether it has ethical limits, it replied ‘I don’t know.’ This highlighted that a system cannot understand its own responsibilities or consequences.
Introduces a philosophical and practical paradox—AI lacks self‑awareness of ethics—underscoring why human governance is indispensable.
Created a reflective pause in the discussion, reinforcing earlier points about human‑centred oversight and prompting participants to stress the need for external accountability mechanisms.
Speaker: Parvati Adani
We run programmatic stakeholder engagement, trusted‑tester programs, and the Impact Lab’s Amplify Initiative that lets communities fine‑tune language models, making the process open‑source and collaborative.
Highlights innovative, inclusive engagement models that go beyond internal checks, showing how companies can co‑create safeguards with civil society.
Extended the conversation on stakeholder involvement, linking back to Hector’s NGO collaborations and reinforcing the panel’s consensus on the necessity of multi‑stakeholder loops.
Speaker: Alex Walden
If we want to go fast, go alone. If we want to go far, go together – an African proverb reminding us that building a trustworthy AI ecosystem is a long‑term collective effort.
Summarises the collaborative ethos needed across industry, academia, and civil society, tying together the many initiatives mentioned earlier.
Served as a thematic bridge to the closing remarks, reinforcing the session’s call for sustained, joint action rather than isolated pilots.
Speaker: Yuchil Kim
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the conversation from high-level rhetoric to concrete, actionable pathways. Tim Curtis's framing of trust as a design problem and the announcement of a global AI-ethics MOOC introduced a practical solution that anchored later talks. Rein Tammsaar's concise articulation of the four UN-derived priorities gave the panel a shared roadmap onto which each speaker then mapped their own initiatives. Alex Walden and Hector Duroir provided vivid, multi-layered governance models that illustrated how large tech firms can operationalise those priorities. Namit Agarwal's data-driven critique exposed a systemic compliance gap, and his investor-focused recommendations turned the critique into a set of concrete levers for change. Parvati Adani's philosophical probe of an AI's self-awareness reminded participants why human oversight remains essential. Together, these comments created turning points that shifted the tone from descriptive to prescriptive, deepened the analysis of accountability mechanisms, and reinforced the central message that responsible AI requires coordinated, multi-stakeholder effort across standards, education, corporate governance, and capital markets.

Follow-up Questions
How does this work? How do you differentiate engagement across big companies, services firms, SMEs, and startups?
Understanding tailored engagement strategies is crucial for ensuring responsible AI practices across diverse organization sizes.
Speaker: Peggy Hicks (to Ankit Bose)
How have you been able to surmount challenges in getting human‑rights messages heard within the company?
Insights into internal advocacy help identify mechanisms to overcome resistance and embed ethics at scale.
Speaker: Peggy Hicks (to Alex Walden)
What are the drivers for external engagement across the sector and with governments?
Clarifying external collaboration incentives informs how companies can align with policy and civil‑society expectations.
Speaker: Peggy Hicks (to Hector Duroir)
What concrete suggestions from your work can push the discussion forward on incentivising responsible AI?
Seeking actionable steps for investors and other stakeholders to create market‑based incentives for good governance.
Speaker: Peggy Hicks (to Namit Agarwal)
From the NASSCOM perspective, how do you look at stakeholder engagement and internal silos?
Addressing internal coordination and the gap between frameworks and actionable steps is key for effective implementation.
Speaker: Peggy Hicks (to Ankit Bose)
Can you share quick comments on how your company is facing those challenges?
Request for concrete examples of how Microsoft tackles internal and external responsible‑AI challenges.
Speaker: Peggy Hicks (to Hector Duroir)
Could you share your perspective on stakeholder engagement and programmatic approaches?
Understanding Google’s methods (trusted tester programs, Impact Lab, open‑source initiatives) can guide best practices.
Speaker: Peggy Hicks (to Alex Walden)
How effective is UNESCO’s AI ethics MOOC in building global practitioner capacity?
Evaluating reach, uptake, and impact of the MOOC is needed to ensure it translates ethics‑by‑design into practice.
Speaker: Tim Curtis (implied)
How can multilingual safety evaluation tools be developed beyond English norms?
Ensuring AI safety across languages is essential for inclusion and accurate risk assessment in diverse contexts.
Speaker: Hector Duroir
How can the gap between high‑level AI frameworks and actionable implementation for technologists be bridged?
Practitioners need concrete guidance to move from policy to day‑to‑day development practices.
Speaker: Ankit Bose
What is the impact of board‑level AI governance and executive incentives on responsible AI outcomes?
Research is needed to link governance structures with measurable improvements in AI safety and rights‑respect.
Speaker: Namit Agarwal
How can AI‑specific human rights impact assessments be standardized and published meaningfully?
Current disclosures are scarce; developing robust assessment methodologies is critical for accountability.
Speaker: Namit Agarwal
What role can civil‑society‑led benchmarks (e.g., Samishka) play in creating community‑grounded safety tools?
Studying collaborative benchmark creation can improve contextual relevance of safety evaluations.
Speaker: Hector Duroir
What are the philosophical and ethical limits of AI systems regarding self‑awareness and conscience?
Exploring AI’s inability to understand its own ethical limits raises fundamental questions for governance.
Speaker: Parvati Adani
How can data be leveraged to create incentives that align capital with responsible AI governance?
Identifying data‑driven mechanisms can help tie investment decisions to AI risk management.
Speaker: Peggy Hicks (implied)
How effective are trusted tester programs and open‑source initiatives like Amplify in improving product safety and language inclusion?
Assessing these programs’ impact can inform broader adoption of collaborative safety testing.
Speaker: Alex Walden

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.