AI That Empowers Safety, Growth and Social Inclusion in Action
20 Feb 2026 10:00h - 11:00h
Summary
The panel opened by highlighting the urgent, day-to-day challenges of AI and the need for global standards, public-private collaboration and rights-based approaches to achieve responsible AI with real-world impact [1][2]. Speakers stressed that effective AI governance requires careful deliberation, stakeholder engagement and the sharing of good practices by companies to avoid pitfalls [4-6]. They affirmed that companies must respect human rights through due diligence, while governments should create a level playing field and incentivise responsible behaviour [9-13].
UNESCO emphasized that trust in AI is built through design choices, safeguards and accountability, and introduced its Readiness Assessment Methodology Reports (RAMS) that map regional AI landscapes in over 80 countries [32-34]. To translate the UNESCO ethics recommendation into practice, a massive open online course (MOOC) on AI ethics by design is being launched, teaching learners to embed fairness, transparency and inclusion early in the development cycle [36-42].
The newly mandated UN Global Dialogue on AI Governance identified four priority areas: safe and trustworthy systems, closing capacity gaps, cross-border governance and anchoring AI in human-rights law [66-73]. Representing India’s tech sector, NASSCOM described its mission, launched in 2021, to build open assets, develop capacity across government, startups and SMEs, and promote responsible AI adoption throughout the ecosystem [94-105]. Google outlined its corporate policy that commits to the UN Guiding Principles and UNESCO/OECD frameworks, embedding these values in AI principles and operational processes across product teams [130-138]. Microsoft recounted the evolution of its Office of Responsible AI since 2018, the Sensitive Use Case program and the Aether ethics committee, and noted that its work is informed by OECD principles and UNESCO recommendations [169-184]. Externally, Microsoft cited voluntary commitments from AI summits, the OECD’s Hiroshima AI Process reporting framework and recent Indian multilingual safety initiatives that reinforce inclusion and risk-based testing [188-200].
The World Benchmarking Alliance reported that while many firms publish AI principles, only a small fraction meet global governance standards or disclose human-rights impact assessments, underscoring the need for stronger incentives and board-level oversight [224-229]. It recommended that investors demand clear AI governance at the board level, concrete product-level implementation and robust human-rights impact assessments to close existing gaps [236-241]. Across the discussion, participants agreed that collaborative, multi-stakeholder engagement, spanning companies, civil society, academia and regulators, is essential to move from good intentions to actionable, inclusive AI systems [311-345].
Key points
Major discussion points
– Global norms and multi-stakeholder governance are essential for responsible AI.
The opening remarks stress that “global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI” and that both companies and governments must create “clear rules… and alignment around the global norms” [2][9-11][13]. The UN-mandated Global Dialogue on AI Governance highlights four member-state priorities – trustworthy AI, capacity-building, cross-border governance, and anchoring AI in human rights – and frames standards as the bridge from principle to practice [67-74][78].
– Capacity-building and education tools are needed to translate standards into everyday practice.
UNESCO’s RAMS assessments and the new massive open online course (MOOC) on AI ethics are presented as concrete ways to move “beyond theory and towards this responsible human-centred deployment of AI” [32-38][39-44]. LG’s contribution echoes this by stressing a “practitioner-focused” MOOC that bridges the gap between abstract standards and day-to-day work [209-213].
– Companies are implementing layered internal governance to embed responsible AI.
Google describes a hierarchy of model-level requirements, application-level guardrails, executive review, and post-launch monitoring [149-162]. Microsoft outlines its Office of Responsible AI, the Sensitive Use Case program, and board-level oversight, all built on UN-based principles [168-184]. LG and Google also note programmatic stakeholder engagement, trusted-tester schemes, and open-source tools for language inclusion [300-307][308-310].
– Investors and benchmarking bodies can drive accountability through market incentives.
The World Benchmarking Alliance reports that only ~10% of the 2,000 assessed tech firms meet global governance expectations and none disclose human-rights impact assessments, underscoring the need for “board-level responsibility, aligned executive incentives, and robust AI-specific impact assessments” [226-241].
– Inclusion (especially linguistic and cultural diversity) and civil-society partnership are critical gaps.
Participants point to the “language issue” and the need for multilingual safety tools, citing Microsoft’s community-led benchmarks in India and LG’s annual transparency report as examples of collaborative, culturally-aware practice [188-199][276-285][290-296].
Overall purpose / goal
The session is a convening of UN bodies, industry leaders, and civil-society representatives to share concrete practices, highlight gaps, and mobilise coordinated action so that AI development and deployment are governed by human-rights-based standards, are inclusive, and deliver real-world benefits across all economies [1][15][24][85-88].
Overall tone
– Opening (0-5 min): Formal, optimistic, and forward-looking, emphasizing shared responsibility and the promise of standards [1-5][9-13].
– Middle (5-30 min): Becomes more technical and candid as speakers detail specific tools, internal processes, and the challenges of scaling responsible practices [32-44][149-162][168-184][220-241]. The tone shifts to a problem-solving mode, acknowledging “obstacles” and “gaps” while showcasing concrete initiatives.
– Closing (30-50 min): Reflective and motivational, urging continued dialogue, broader participation, and concrete action, ending on a call-to-action for all stakeholders [316-345][350-363].
Overall, the discussion moves from a high-level, collaborative framing to detailed, sometimes critical examinations of implementation, and finishes with a unifying, inspirational appeal to sustain momentum.
Speakers
– Rein Tammsaar – Permanent Representative of Estonia; co-chair of the United Nations Global Dialogue on AI Governance. [S1][S2]
– Namit Agarwal – Representative of the World Benchmarking Alliance, focusing on AI governance, investment incentives and accountability.
– Peggy Hicks – Director, Office of the United Nations High Commissioner for Human Rights (OHCHR); moderator of the panel. [S5][S6]
– Parvati Adani – Representative from Shardul Amarchand Mangaldas (law firm), delivered the concluding remarks.
– Tim Curtis – Regional Director for UNESCO South Asia; co-sponsor of the session. [S11][S12]
– Ankit Bose – Senior executive representing NASSCOM, India’s technology industry association.
– Hector Duroir – Director of Responsible AI Public Policy, Microsoft.
– Alex Walden – Global Head of Human Rights, Google. [S17]
– Yuchil Kim – Vice President, AI Research, LG.
Additional speakers:
– Praveen – Mentioned in the closing remarks; role not specified.
– Dhani – Mentioned in the closing remarks; role not specified.
– Allie – Addressed by Peggy Hicks near the end; role not specified.
The session opened with Peggy Hicks (B-Tech project, Office of the High Commissioner for Human Rights) reminding participants that AI-related challenges affect people’s daily lives [?] and that “global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact” [2]. She emphasized that responsible outcomes do not happen automatically; they require “deliberation… engagement” to avoid pitfalls [4-5] and that companies must share “good practices” while governments create a “level playing field” and incentives for responsible conduct [12-14]. Hicks framed the B-Tech project as a mechanism to convene stakeholders, extract best practices and feed them back into policy, noting that the work is anchored in UN tools such as the UN Guiding Principles and UNESCO’s AI ethics recommendations [15][23-24].
Tim Curtis (UNESCO) shifted the discussion to the foundations of trust, arguing that “trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability” [32]. He thanked the Office of the High Commissioner for Human Rights for its support [?] and explained that UNESCO’s Readiness Assessment Methodology Reports (RAMS) have been produced for more than 80 countries, providing a “clear-eyed look at how regional landscapes can evolve” and moving the debate “beyond theory” [33-35]. Curtis noted that the RAMS include an assessment of India [?] and that partner institutions such as Oxford and the Alan Turing Institute contributed [?]. To translate the UNESCO ethics recommendation into practice, UNESCO and LG AI Research are launching a massive open online course (MOOC) on “ethics-by-design” [?] that will be delivered on Coursera [?] and is “accessible to a wide global audience and provides practical, day-to-day tools” [?]. The MOOC is intended for practitioners who need concrete tools to bridge the gap between abstract standards and daily work [209-213].
Rein Tammsaar (UN-mandated Global Dialogue on AI Governance) outlined the Dialogue’s four member-state priorities: (i) safe, secure, and trustworthy AI systems; (ii) closing capacity gaps in developing economies; (iii) interoperable, cross-border governance; and (iv) anchoring AI in human-rights law, including protection of vulnerable groups [66-74]. He positioned the Dialogue as a “platform where governments and stakeholders exchange best practices” to strengthen international cooperation and reduce digital divides [65-66] and stressed that standards turn principles into action, shaping risk management, accountability and human oversight [78-79].
Ankit Bose (NASSCOM) described the association’s mission, launched in 2021, to fill the “responsible, trust, human element” gap that emerged as AI proliferated [95-98]. NASSCOM’s core objectives are to develop open assets, build capacity across government, startups and SMEs, and promote early adoption of responsible AI governance [98-104]. Bose highlighted internal silos (tech, business, legal, finance) that impede coherent action and argued that “collaboration… use-case by use-case” is needed, especially for high-impact projects [110-118]. He warned that many national and sectoral frameworks leave developers “lost in the framework” and called for “concrete, actionable guidance” [258-267].
Alex Walden (Google) detailed how the company embeds human-rights-based values into its AI lifecycle. Google has a corporate policy committing to the UN Guiding Principles on Business and Human Rights [?] and an internal set of AI principles that operationalise those values across products such as Cloud, YouTube and Search [135-137]. Governance is layered: model-level requirements and testing, application-level guardrails, executive review of risks before launch, and continuous post-launch monitoring to catch “novel or residual risks” [149-162]. Walden also described programme-level tools – regular stakeholder-engagement processes, “trusted-tester” schemes that give third parties early access, and the Impact Lab’s “Amplify Initiative” which lets communities fine-tune language models in an open-source fashion [300-310].
Hector Duroir (Microsoft) traced the evolution of Microsoft’s responsible-AI framework from its 2018 inception, when “codes, directives, regulations… were not yet there” [170-172], to the creation of the Office of Responsible AI (2019) and the Sensitive Use Case programme that triages high-risk applications and escalates them to the Aether ethics committee involving senior leadership [179-182]. Microsoft aligns its standards with the OECD AI Principles and UNESCO’s recommendation [184-185] and leverages voluntary commitments signed at AI summits, including Bletchley Park and South Korea [188-192], to ground model testing against public-safety and national-security risks. Duroir also cited the OECD’s Hiroshima AI Process reporting framework and India’s recent voluntary commitment to multilingual safety evaluations, which “encourages companies to forge multilingual capabilities” [193-199].
Yuchil Kim (LG) echoed the need for practitioner-focused tools, noting that LG’s AI-powered data-compliance system is part of its responsible-AI toolkit [?] and that its annual AI ethics accountability report (now in its third edition released “yesterday”) provides “the best standard risk” guidance and transparent documentation of AI activities [210-213]. Kim stressed that the UNESCO MOOC will “bridge the gap” for practitioners who struggle to apply abstract standards in daily work [209-210] and that transparency, inclusive AI and multilingual considerations are central to LG’s roadmap [214-218].
Namit Agarwal (World Benchmarking Alliance) presented the results of its latest assessment of 2,000 tech firms: while roughly 40% disclose AI principles, only just over 10% meet global governance expectations and none publish human-rights impact assessments [226-229]. From this gap, WBA derived three investor-focused recommendations: (i) board-level AI risk responsibility and aligned executive incentives; (ii) product-level governance checks that translate ethical principles into concrete strategies; (iii) robust AI-specific human-rights impact assessments with public summaries [236-241]. Agarwal argued that “capital can definitely incentivise innovation and responsibility, but capital alone cannot do that” and called for a “race to the top” driven by clear market expectations [226-232].
Across the panel, participants repeatedly agreed that global norms and practical safeguards are essential for AI to work for all people, not only advanced economies. This consensus was voiced by Hicks, Curtis, Tammsaar, Walden, Duroir and Kim, who all linked UNESCO recommendations, UN Guiding Principles and OECD standards to concrete safeguards [2][9-11][13][32][67-74][138-140][184-185][209-213]. They also concurred that capacity-building tools such as the UNESCO RAMS assessments, the forthcoming MOOC, and NASSCOM’s ecosystem-wide training are vital to turn theory into practice [32-35][36-44][89-105][209-213]. Finally, there was broad agreement that multi-stakeholder engagement, including civil society, academia, NGOs and investors, is indispensable for inclusive, culturally aware AI, as reflected in the statements of Walden, Duroir, Tammsaar, Kim and Agarwal [300-310][276-285][73-75][209-213][322-338].
Points of disagreement (bullet list):
– Regulation vs. voluntary commitments – Hicks calls for “clear, enforceable rules… and alignment around the global norms” [9]; Duroir emphasizes “voluntary commitments… at AI summits” as the primary mechanism to operationalise standards [188-192].
– Proliferation of frameworks – Bose says developers are “lost in the framework” and need “concrete, actionable guidance” [258-267]; Curtis maintains that UNESCO’s RAMS and the MOOC already provide a unified foundation [32-35][37-44].
– Incentive design – Hicks promotes a broad “race to the top” through market rewards [13]; Agarwal insists that incentives must be tied to specific board-level governance, executive incentives and impact-assessment requirements [226-232].
Thought-provoking remarks shaped the tone of the discussion. Curtis’s framing of trust as a design problem [32] set the agenda for concrete engineering solutions. Tammsaar’s succinct articulation of the four UN-derived priorities [66-73] gave the panel a shared roadmap. Walden’s description of Google’s multilayered governance (model-level checks, executive sign-off and post-launch monitoring) provided a vivid example of operationalising ethics [149-162]. Duroir’s account of Microsoft’s Sensitive Use Case triage and Aether committee illustrated board-level oversight [179-184]. Agarwal’s data point that “only about 10%… meet global governance expectations and none disclose human-rights impact assessments” [226-229] underscored the compliance gap. Parvati Adani’s philosophical probe, asking an AI tool whether it has ethical limits and receiving “I don’t know” [322-332], reminded the audience that AI lacks self-awareness and therefore requires human governance. Kim’s African proverb, “If we want to go fast, go alone. If we want to go far, go together,” encapsulated the collaborative spirit needed for a trustworthy ecosystem [294-296].
Concrete next steps were identified:
– UNESCO’s MOOC will be delivered on Coursera, with an open invitation for learners and partners [36-38].
– The UN Global Dialogue on AI Governance will reconvene in Geneva in July [46-50].
– Companies such as LG and Microsoft pledged to publish annual AI-ethics accountability reports (LG’s third edition is already released) [210-213][276-283].
– Microsoft will continue community-led benchmark projects like Samishka in India to develop multilingual safety tools [282-285].
– NASSCOM will expand capacity-building workshops and open-asset libraries for startups, SMEs and government agencies [98-105].
– The World Benchmarking Alliance will circulate its three-step investor engagement framework (board oversight, product-level checks, impact assessments) to catalyse market-based incentives [236-241].
– Participants agreed to share best-practice case studies with the WBA for inclusion in future benchmarking reports [?].
Unresolved issues remain. There is no consensus on how to harmonise the growing number of national and sectoral AI frameworks into a single actionable roadmap for developers. Financing mechanisms to close capacity gaps, particularly infrastructure, compute and talent in developing-country firms, were not settled. A standardised, auditable methodology for AI-specific human-rights impact assessments is still lacking. Scaling responsible-AI processes for small startups without over-burdening them, and establishing clear ownership and frequency for post-launch monitoring, also require further work. Finally, integrating multilingual and informal language contexts into safety tools beyond ad-hoc community projects remains an open challenge.
In closing, Peggy Hicks urged participants to translate the day’s insights into action, reminding them that “AI innovation will work if there’s trust and if the companies that are delivering it actually invest in delivering products that will really give us human dignity” [362]. She thanked the participants and closed the session [?]. The panel reaffirmed that responsible AI requires coordinated global norms, concrete capacity-building tools, and market incentives, and they committed to share best-practice case studies and continue dialogue at the July Global Dialogue in Geneva [?].
These are consequential challenges that have impacts in people’s lives on a day-to-day basis. And our session is going to address how global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact. And we know that these things don’t just happen on their own. It takes deliberation. It takes thought. It takes engagement to make sure that the products and approaches that we’re using in the AI field avoid some of the pitfalls that may be associated with them. And the companies are going to share some of the good practices that they’re engaging in about how that works in the real world. And we know if they don’t engage in that way, that the risks are there and very much present.
And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for. Responsible and effective AI governance, clarity of rules for both companies and government, and alignment around the global norms will help us to get to that point. Companies, of course, have a responsibility to respect human rights and address the risk to people stemming from their products. And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations. But, of course, governments are the ones that also have a responsibility here, too, to create a level playing field, and we talk a lot about that.
We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. Our B-Tech project at OHCHR is aimed at how do we make this conversation happen. So through convenings like this one, through engaging with companies, pulling out their good practices and letting all of you hear about them and encouraging others to do the same is what that project is really about. And we are really looking at and working with, of course, how to use tools like the UN Guiding Principles and UNESCO’s AI recommendations on ethics, and figuring out how we weave those into the decisions and work that’s being done now.
And as I said, bringing this conversation to this summit, where there is truly a global and multi-stakeholder effort happening to really look at AI innovation and deployment, has been incredibly important. So without further ado on that front, I want to hand over to my colleague and co-sponsor here, Tim, over to you.
Thanks, Peggy. And good morning, everyone. Ambassador Rein Tammsaar from Estonia and co-chair of the AI Dialogue that the United Nations is holding, of course Peggy, and dear panelists: it’s really wonderful to be here with you today to be part of this conversation on responsible practices and industry standards. And as we all know now where AI is moving, you know, from something we discuss in theory to really something that is shaping the decisions in real time and real institutions, and of course for real people. I’d like to thank particularly the Office of the High Commissioner for inviting us to join in on this and for working with us on organising this event. It’s been a pleasure.
At UNESCO we often return to a simple idea: that trust is not something technology earns through ambition alone, but really it is earned through design choices, through safeguards and accountability. And that’s why the recommendation on the ethics of AI, we believe, is so important, because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, that promote fairness and support inclusion. So we’ve been translating this global agreement and framework into local realities through what we call the RAMS, the Readiness Assessment Methodology Reports, which we’ve now launched in over 80 countries, and just two days ago we did India’s readiness assessment report.
And these assessments provide a kind of clear-eyed look at how regional landscapes can evolve, inviting us to move beyond theory and towards this responsible human-centred deployment of AI we hear about. And so by grounding innovation in these evidence-based diagnostics, we hope to ensure that progress remains aligned with those shared values. But, of course, a recommendation only matters if it can be applied by the people who are actually creating and using AI. And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or a MOOC, as it is more commonly known, on the ethics of artificial intelligence.
And the course will be delivered on Coursera with a very clear goal: to make AI ethics learning accessible to a wide global audience, and to make it practical… for day-to-day work. And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design. In simple terms, that we don’t wait until something goes wrong to ask these ethical questions; we should build these questions into the process from the beginning. And the course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made, rather than after systems have already been deployed. The course, of course, is really going to be focusing on practical tools, so that we can offer clear ways of thinking and working that can be used in everyday settings.
So it’ll help learners, for example, recognise common risks early, ask better questions during development, document the decisions made responsibly, and think through the impact of AI systems on different groups of people. We’re moving beyond a one-size-fits-all approach, and we’ve done this by collaborating with experts from over 10 countries and 5 continents, with some of the leading minds from the University of Oxford and the Alan Turing Institute. And this global group, this global coalition, is really vital because AI of course doesn’t operate in a vacuum; it’s shaped by languages, by cultural norms and institutional capacities, and by where it is developed and deployed. So by integrating these diverse perspectives we’re trying to move from the theory, again, to the live reality. So ultimately this MOOC is a capacity-building effort with a simple purpose: to help more people around the world build and use AI in ways that are responsible, inclusive and worthy of public trust. We look forward, of course, to this continued collaboration with governments, with industry, with academia and civil society as we take it forward, and we hope many of you will engage with the course when it launches, not only as learners, but also as partners in building a stronger culture of ethical innovation across the world.
Thank you very much.
Thanks, Tim Curtis, UNESCO. We’re all looking forward to it. Now we have anticipation. We’re very fortunate to have an addition to our program today with Ambassador Tammsaar, the permanent representative of Estonia, who’s one of the co-facilitators for the Global Dialogue on AI that will be launched in July, and a big responsibility. And he’s here to tell us a little bit about where it’s heading and how you all can contribute. Please, Ambassador.
Thank you very much. Good morning. I don’t know, is it morning? Yeah, maybe. So after three days here in India, I think that I lost track of time. Is it morning or evening? But thank you, UNESCO and the Office of the High Commissioner for Human Rights, for convening this really important discussion, and of course to all our hosts here in India. And I also thank partners who contributed to this work. Today I’ll speak on behalf of the two co-chairs of the United Nations Global Dialogue on AI Governance, and the two co-chairs are from El Salvador and Estonia. The first Global Dialogue on AI Governance was mandated by all member states through a General Assembly resolution adopted in August 2025.
So this is a member-state-driven process. It belongs to every country, to all member states. And its task is very practical, while its scope is multilateral. So this… The aim is, you know, to come together. It is a platform where governments and stakeholders exchange best practices and experiences, and this, we believe, can strengthen international cooperation on AI governance and ensure human-centric AI supports sustainable development and reduces, indeed, digital divides that are already there. So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities. So first, they want safe, secure, and trustworthy AI systems, and trust here, of course, is an absolute key word.
Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to participate fully in the AI economy, and inclusivity and equal access are essential here. Third, they want governance approaches that can work across borders and be practical. So fragmentation raises the cost and weakens trust, so interoperability is absolutely key. And fourth, and that is, I think, quite topical here, they want AI anchored in human rights and international law. And this includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability. Now, we know human rights are not optional. They are part of a mandate agreed by member states. And today’s focus on responsible practices and industry standards responds directly to these priorities.
And standards turn principles into action. They shape risk management, they clarify accountability, they guide human oversight, and they give companies and regulators tools they can apply in real systems. So let me say that the global dialogue will not, and I guess it cannot, impose one single model. We will listen, we will identify common ground, we will build on existing initiatives; the ethics of AI was mentioned here, and it’s of course one of them. We’ll avoid, or try to avoid, duplication, and we will focus on practical value. So I encourage you to bring your experience into this process: share what works, share what doesn’t work, help us identify approaches that can scale across regions and levels of capacity. And in the best case, if we succeed, and failure is not an option, safety and trust will be visible in how systems are designed, deployed and governed. They will be reflected in real safeguards and in benefits that reach more people, and this is very important for us. Thank you. So I thank you and wish you a productive day, practical exchanges that move our common work forward.
And with this, I give it over to the real experts and panel. Thank you very much.
Thank you, Ambassador. Wonderful to have you with us, and I think we’re all looking forward. We’re looking forward to having all of you join us in Geneva in July. So with that introduction by the three of us, we’re really, as the Ambassador said, going to turn it over to those who can really inform us about how this work is happening and I hope inspire us to both give support and emphasis, amplification to the work that you’re doing and bring more into the fold around responsible business conduct. So with that introduction, I’d really like to start, Ankit Bose, with you from NASSCOM. We had a great conversation yesterday. I’d love for you to inform our audience that NASSCOM represents the leading Indian tech industries and we want to hear more about your work and what you’re doing to encourage companies and help them to ensure a responsible work environment.
Thank you.
Thank you so much for having me here. And it’s my pleasure to address the audience. So NASSCOM has been there for almost four decades plus, right? We have been helping the tech industry in the country to shape and change the whole agenda for the country. I think that’s what we have been doing, specifically on responsible AI interests, right? I think the mission for NASSCOM started in 2021. So we started with a gap that, you know, we were seeing: a lot of AI was getting developed, but again, I think we found that there was some missing element, and that was the responsible, the trust, the human element. I think that is how the mission started. From that point in time, our main core objective has been to develop open assets, right?
Build capacity, build, you know, adoption, right? And I think help all the different components in the ecosystem, right? Right from the government to the startup, the SMEs, right? All of them. So we have been trying to help them. And we’re trying to help them go up the ladder. and then really become aware, I think, not only the gloomy side of AI, but also the bright side if they adopt responsible AI governance practices right at the early. They can have a big upside. I think that’s what we have been doing.
Can I ask, Ankit, how does this work? You mentioned that full range of companies that are involved, and one of the topics we spoke a bit about yesterday is the difficulties sometimes when you have big companies, we have some of them represented here, but also startups and small and medium enterprises. How do you differentiate? How do you make sure that we’re engaging across that very differentiated group of industry?
Yeah, so if I take it, there are the big techs, right? Then there are the services companies. Then there are the medium-sized companies, the small companies, and the startups. All five of them have a different sort of engagement, right? The big techs, I think, are playing on the front foot. The services companies, the bigger ones, have to follow their contracts. The medium-tier companies are really trying to understand how they grow their AI base and at the same time build services or products using the right governance principles. But again, I think the bigger support is needed by the smaller startups, right? Because they are really, really fighting day to day.
And believe me, a startup founder has to first build a business, a tech stack, a team, right? And also get funding, and then on top of that focus on a lot of things around governance. In that whole journey, what we have seen is that they are putting governance second, or probably on the back burner, which is something we see as a complete no-no. If you do that when you're building a product, you might miss it when you're scaling. I think that's what we are addressing.
Great. Thanks very much. I think we’re going to turn to the scale side of it now with Alex, you’re next in line. So, Alex Walden, you’ve been working on these issues within Google, and I think one of the insights that I’ve learned from you over the time we’ve known each other is really how complex it is to bring to product teams and those that are on the technologist side some of these issues of responsible business conduct and human rights, and give us the benefit of your wisdom about how that works and how we can do it better.
Thanks for the question, and I love that you said that, because I do see a very important part of my role as making sure that the stakeholders we work with understand how things are working within companies, because that helps us be better and helps you be better advocates for helping us improve. But to your question, and I know I need to be fast: I think where it really starts for us is from the values perspective. Obviously, we're a company that's founded on values around freedom of expression and privacy and bringing the benefit of our technology to everyone, and so that's where it begins. But ultimately, it's the governance inside of the company that permeates throughout the 180,000 people who work at Google to ensure that we are being responsible in the way that we're developing AI.
So as a baseline for us, responsibility, and thinking about what responsibility means, has to start with human rights, and then we can build from there. So we have a corporate policy that says we have a commitment to respect the UN Guiding Principles on Business and Human Rights. And we've built on that with things like our AI Principles, which reinforce a more operational way in which we can manifest those values in all of the teams working to develop the various models, or applications of the models in, say, Google Cloud or YouTube or Search. Just to hone in a little bit on the types of standards we're using, because I think that's important given how much work is being done in our ecosystem:
We use the UN Guiding Principles. We use the work happening at the OECD, the work at UNESCO, and the engagement with our peers in industry through the B-Tech Project and the Global Network Initiative, and this is just a few. All of the guidance that comes out of those places, and the dialogue that happens there, ultimately helps inform how things are working inside the company. And then just one layer down, and then I'll stop: I think having programs and processes like training and dedicated teams is ultimately how you operationalize this through getting a product to market. I can say more, but I think those are the big-picture structures for what's required for a company to do this at scale.
So, you know, I'm not going to let you off the hook quite that easily. We know that this isn't always easy, right, and that there are obstacles to really convincing people it's worth the time. I've been in the room where the attitude is described as, you know, "no more hand-wringing about safety, we need to just move forward," and I'm sure there are pressures that you face as the lead for human rights within this company trying to get your message heard. Tell me a bit about how you've been able to surmount some of those challenges, from that different perspective on whether these are hurdles or supports for the company to do its mission more effectively.
Well, I mean, I think in general corporations are incentivized to put products on the market that are safe and that are trusted by our consumers. People know Google best through Google Search or Gmail, the variety of consumer-facing ways they're engaging with our products. And so we do have an inherent market and business reason to put out products that people trust and that deliver good outcomes. And so we have to have processes inside that make that real. What we do is we have model requirements at the most granular level. Before any product goes to market, there are model requirements, and those teams are focused on ensuring that they're validating the data, doing testing, and doing evaluations.
And that's at the model level. Then at the application layer, we have requirements for teams to be, again, doing testing and additional evaluations, setting additional guardrails, and focusing on what mitigations are going to be put in place for, again, things like Gemini before it gets launched. And then we have executives review these things. So before anything goes to market, leadership needs to understand what the risks are, how we're mitigating them, and have a plan in place to address that. That is an important part of the process for us. And last, we have post-launch monitoring, because obviously we can do all the testing in the world, but once you've launched a product,
there may be novel or new or residual risks that arise. And so we have to have a process for continuing to monitor that, understanding it, getting feedback, and improving.
Great. That's super helpful, Alex, to understand that multilayered approach that needs to happen within companies, including, I think, the executive level that you mentioned; the signals from the top will actually inspire all of those other levels to do what we're hoping they'll do. And we have another example with us of some of these practices. I want to turn to Hector Duroir, who's the director of responsible AI public policy at Microsoft. We want to hear more about what you're doing to embed responsible AI practices within Microsoft's approach.
Thank you very much, Peggy, and thanks for having Microsoft here. So, yes, I want to start with the inception of our responsible AI approach, which was in 2018. At that stage, you didn't have codes, directives, regulations, or frameworks guiding our approach; we were nearly starting from a blank page. And we didn't talk about foundation models or frontier models at that stage; it was all about specific AI systems and applications, such as facial recognition, for instance, which was very popular. So we forged our AI principles around priorities such as privacy, reliability, inclusion, fairness, safety, and security. And for these high-level principles, the whole challenge was to translate them into practice afterwards. It's really on this basis that we forged the Office of Responsible AI when we created it in 2019, around these principles, which then became our RAI Standard, guiding all our actions across our different programs.
One of the programs that I want to reference here is our Sensitive Use Case program. It's a team within the Office of Responsible AI that is in charge of triaging and challenging sensitive use cases coming from our different markets, on AI systems and models that could actually violate the principles I was referencing. This team analyzes these use cases and, when it becomes necessary, brings them to our Aether committee, which is our AI ethics committee. It involves Microsoft leadership, both at the CTO level and the president level, and I think that inclusion is very important in this kind of internal risk management framework. This work has been informed during the past years by many interesting developments.
The OECD AI Principles, obviously, but also the UNESCO Recommendation on the Ethics of AI. And I think all of these principled approaches, which evolve and are refined and nuanced as AI capabilities advance, are so important and are very useful signals for us to refine our own AI governance program within Microsoft.
Hector, you've talked a little bit about how you look at it from an internal perspective. But we wanted to hear a bit about how you look externally: what are the drivers behind how you engage across the sector and with the government side as well?
Yes, and I think we always navigate this very interesting interplay between best practices, international norms, and regulatory standards. A very good example here is the line of voluntary commitments that have been signed across the AI summits. If you look at Bletchley Park in the UK, or the South Korea summit that happened afterwards, it really helped us, as Alex was referencing, to ground our model testing approach, especially against public safety and national security risks. So when we talk about cybersecurity, for instance, or loss of control, or CBRN risks, that really grounded a very solid testing approach, with concrete operational triggers and concrete high-risk domains that we're monitoring at the model level.
So that was one. The OECD HAIP reporting framework, which came out of the Hiroshima AI Process, is another very good tool that I was involved in and want to reference here. It was launched alongside the Paris AI Summit. It's a very good way to understand how risk management transparency works in practice and how real-world deployment and transparency experience can guide upstream development; it's the kind of feedback loop it creates that's very interesting. And because we're in Delhi, just to reference the voluntary commitments that were signed yesterday: I think that's another very good and positive approach that the Indian government has been taking, especially on one of the commitments, which basically encourages companies to build multilingual capabilities.
So basically, build better evaluations against safety risks, not only against English-language norms but beyond them. And I think that speaks to our principle of inclusion. That's so important, and I'm very happy that they initiated this work.
I have to say, one of the contrasts I've been noting, when I look at what's been talked about here in Delhi as opposed to prior summits, is that issue of inclusion. And the language issue, I think, is so underrepresented in some of the conversations we have, so it's wonderful that you've given that a shout-out. We're very fortunate, Yuchil, to have you with us as well: Yuchil Kim, who is vice president for AI research at LG. We'd really like to hear more about how you're engaging with these global technical and policy standards. We talked about the UN Guiding Principles on Business and Human Rights, the UNESCO recommendation on AI ethics, and, of course, the MOOC that's being worked on.
So give us a sense of how these frameworks are being engaged with by LG.
So the essence of the MOOC is for the practitioner. The practitioner is usually struggling with the same question: how do I actually apply this in my day-to-day work? So we are focusing on bridging that gap. We provide standards for the risks, covering a lot of the risks that Timothy mentioned, and we also contribute our own experience. I previously mentioned our process, and we have also built our AI-powered data compliance system. And, as I will mention soon, we have an annual report about our AI ethics activities. So I hope the MOOC can be a good resource for everyone; it will launch in this half of the year. The last thing I want to talk about is transparency. We have a lot of activities around responsible AI and inclusive AI, and we publish an annual accountability report on AI; yesterday we released the third edition.
So here is some record of that, which I will share after our session. Please refer to my documents.
Wonderful. No, I think it's super interesting to understand both how you've been looking at that learning process within the company, but also how that more global approach, working with UNESCO, is going to be very helpful; and I think it's one of those areas where we all know so much more needs to happen. But we've heard the company perspective here, and we're very fortunate to have with us, from the World Benchmarking Alliance, Namit Agarwal. And Namit, I think one of the things we've talked about is how we incentivize the race to the top amongst all of the actors in this space. And you're going to, I hope, give us some insights, based on the work that the World Benchmarking Alliance is doing, about how capital and investment can be used to make sure that innovation is being approached in a responsible way.
Over to you, Namit.
Thanks for having me here. I'm not representing investors, but we do work with several stakeholders, including investors, civil society, governments, and companies. We are a nonprofit, and we try to strengthen the accountability of the world's most influential companies so that their impact on people and planet can be sustainable. We also assess the world's most influential tech companies on whether they are advancing a trustworthy, rights-respecting, and inclusive digital future, using standards such as the UN Guiding Principles, but also others that were mentioned by my fellow panelists here. Our role is to provide comparable, credible, and standardized data that our stakeholders can use to address the challenges we face; because it's an ecosystem approach, the question is how they can work together in doing that.
So capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. We published our latest assessments of 2,000 companies at Davos last month, and particularly on the tech side, what we found is that close to 40% of the companies have disclosures on AI principles, but just above 10% meet global expectations on the governance aspect of it, and none of the 200 tech companies that we assess disclose reports on human rights impact assessment. I think that clearly shows that while there is a lot of intent, and some work is happening, governance and accountability are not really there, so a lot of work needs to happen. And we believe responsible innovation requires incentives for long-term risk management and clear expectations that are tied to capital.
And it requires consequences for weak governance, because it has to be consequential for companies to move in that direction.
And I think that is where investors have a very catalytic role to play. We convene a coalition of investors and civil society organizations.
I think it's so interesting that we work in a sector that is incredibly based on data, and yet we don't necessarily bring data into this conversation in the ways that we need to. That idea of both incentivizing the right practices and leverage within companies matters; but also, you know, too many conversations focus on the tech industry as a whole and group everybody together as if they're all engaging in the same way. The work that you're doing really helps us to understand those nuances more. Could you go a bit deeper and look at some of the examples and concrete suggestions coming out of your work as to how to push that discussion forward?
Absolutely. So I think the first thing is engagement and dialogue, and that is a very important way forward. We have been fortunate to have good engagement with both Google and Microsoft on this panel. But again, it's important to build on engagement, because it's a continuous process. It's important for investors to engage with some of the leaders, but also to engage with companies who are fence-sitters to bring them along faster; the laggards will eventually catch up and come on board. For investors, and if you want, for capital and finance, to incentivize responsible innovation and responsible AI, there are three things that we believe investors should definitely do. First is AI governance and board oversight: investors should ask whether there is clear board-level responsibility for AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain. Second is implementation at the product and business-model level, and we heard some examples just now: investors need to move beyond policy statements and ask companies how ethical principles are translated into product-level strategy, how high-risk use cases are identified, and whether there are internal mechanisms and controls to identify harms as they emerge.
And third is robust human rights impact assessments and asking whether companies conduct AI -specific impact assessments. Are they publishing meaningful summaries? Are mitigation measures integrated into product cycles? And I think this is an area where we have seen a lot of gaps.
Great. Thanks, Namit. I wonder if we could take that one step further and get some input from the other members of the panel on what that looks like in practice. Because, of course, this is a panel that's focused on the company perspective, and I think we have some of our real partners here on the civil society side. As much as companies understand that that conversation needs to happen, I think they sometimes find it difficult to make sure that the way those risks are assessed really brings in the voices and the experiences of people, particularly people in the different contexts and environments in which companies' products are being rolled out.
So those issues of stakeholder engagement, access, dialogue with the civil society side, it would be great to hear a little bit more about some of the lessons that you’ve all learned there. And I see you shaking your head. Please tell us from the NASSCOM perspective how you look at it.
Well, I think from an enterprise lens, when they are trying to implement responsible AI or trustworthy AI, the biggest issue is that there are different groups internally, right? The tech group, the business group, the legal and risk group, the finance group. And all of them are working in silos, it feels like, because the business wants the best for the business, the tech group wants to put in the best technology, the risk group is very conservative, and finance always has an upper limit on what they want to spend. So that's the issue. I think what helps is if all of them build a collaboration which can be taken use case by use case.
I mean, a high-impact use case can have more investment and more focus versus a low-risk one, right? I think that's the first thing. The second thing, from what NASSCOM is seeing, is that there are a lot of frameworks getting developed. Every country, every place you go, there's a new framework, right? But the move from framework-heavy, concept-heavy work to action is not happening. That's a big gap. So if a technologist, a developer, is trying to implement responsible governance, he will be lost in the frameworks; he doesn't know what's actionable or what he should do. I think that's one big need.
I think that's what we are also driving. We are trying to drive a multi-organization-led approach, where we have organizations of all different sizes, and we come together and start discussing, collaborating, and implementing. I think that's the second nugget. Those are the two points; I know time is up.
No, that's great. I think it shows that that collaborative effort is going to be super important, rather than a siloed approach, for so many practical reasons as well: companies can only respond to so many different frameworks, and what they need is the simple guidance and support to actually implement at this stage. Hector, do you want to share a few quick comments from the company side about how you're facing those challenges?
Yes, two very quick examples of how we involve civil society and academia in this process. Our work really sits at the intersection of policy, research, and engineering groups. To inform product development with our responsible AI principles, we regularly publish internal policies, and it's an iterative process with our research teams and our product teams. As part of this process, we actually include academics who have specific domain expertise, or think tanks and civil society organizations which have been thinking very deeply about the deployment of one AI system or one AI model in certain contexts. And so that really informs the product from its inception. The second example, which was raised as the big topic and the governance challenge that we face, is the importance of refining AI evaluations.
That's the constant thing. In India, for instance, we've been working with some NGOs on a project named Samishka to build community-led benchmarks, which are basically safety tools that we include afterwards in the system's construction, to get data sets that are grounded in a community, with its specific cultural and contextual aspects. Because if you just translate safety tools from English into another language, you lose all the context for which those safety tools were built. And so that's another example of an area where we need more cooperation between civil society, governments, and companies: how do we build these safety tools beyond English-language norms, such as in India?
That's great, and it takes work to do that. The more we can spread it, the better: you've done some of the work, you know how to do some of it, and we can diffuse it amongst other companies that could learn from it. That's part of what we're trying to do with the B-Tech Project, but I think there's a lot more to be done. Yuchil, do you want to come in?
Yes, I agree with his comment. On safety, we should work together. That's the reason why we make our annual report: because sharing our best practices, and also sharing our struggles, what we struggled with, we think is a very important thing. As my colleague mentioned, there's an African proverb that says: if you want to go fast, go alone; if you want to go far, go together. Building a trustworthy and safe ecosystem is not a sprint. It's a long journey, so we should go together.
It's a long journey with a lot of sprints happening day to day, as far as I can tell, some of them here at the summit. But over to you, Alex.
So much sprinting. Maybe just to pick up specifically on the stakeholder engagement piece. A few things. One, I think it's important for companies to have a programmatic approach to stakeholder engagement: we need ways in which we're regularly engaging with stakeholders in general, not just on a specific product question. So first a programmatic approach, and then second, something that is more ad hoc: when we need to consult specifically on a product, we need a process and a way to do that. The other thing is we have programs internally, like trusted-tester programs, where we work with third-party organizations to make sure they have early, pre-launch access to models or to a product in order to test it, so that we can identify potential risks or errors ahead of time and address them before we launch.
And then last, just to highlight something we do that is similar to others: our research team called Impact Lab, which is part of the overall human rights programmatic work at the company, engages directly with communities in doing research to inform how we are improving our products and what we're developing. So that work is also happening through the research team specifically. And they recently launched something called the Amplify Initiative, which is an open-source app, specifically on language inclusion, that allows members of the public and communities to engage in the fine-tuning work around our language models. There is a wealth of information and expertise out there that we should all be benefiting from, and because it's open source, we can also share it with others in industry.
That's great to hear, and I'm sure more needs to be done on that front, but the amplification effect is so crucial. Look, we could probably go on talking all day, but I see the clock is ticking down. Fortunately, rather than us trying to draw the conclusions from this, we've welcomed in another speaker to give us some concluding remarks and pull some of these pieces together. I'm very happy to invite Parvati Adani from Cyril Amarchand Mangaldas to help us think through some of these issues. Please.
Thank you for that. This is partly easy and partly tough, because there was a lot to take in, but I think we can take a lot back from this conversation. Firstly, thank you: you've held the conversation beautifully. And thank you to UNESCO, the United Nations, NASSCOM, and everybody in this room who brought their knowledge and their conscience to this conversation. Actually, I want to talk a little bit about a conversation with a machine. As we were thinking about this topic and engaging on this issue, I wanted to share something that I feel resonates with a lot of what you've talked about. In preparation, I decided to ask the tools that we're talking about here a question that we avoid asking ourselves.
Do you, and I'm talking to the tool, have ethical limits? Do you understand the difference between what you can do and what you should do? And I'm going to quote verbatim. At a conscious level, the answer was: "I don't know, and neither does anybody else. The gap is a philosophically uncomfortable position; think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me. I do not have any consequences to bear." Now, what came back, though unexpectedly thoughtful, showed us something about restraint, values, and what it appears to have internalized. It acknowledged the difference between instruction and conscience, which is a lot of what we've talked about today.
And so, when we talk about this, we said human rights are not optional, and we cannot ignore the impact on people and planet; we have to create incentives for good governance. So when a tool cannot understand this for itself, I think we have to do that job ourselves. In India, we have chosen innovation over restraint, and holding this conversation in this location is not ceremonial; it is very deliberate. We have to make that the right choice: we allow innovation a safe place without it feeling the full weight of regulation. And I think we have a lot to learn from all of you who have been doing this for so long: the privacy, the safety, the impact on children and vulnerable groups. The question is whether the people we are talking about are going to be the subjects of this transformation, or just its audience, or just its object. An AI system that cannot understand a Hindi-speaking woman asking legal questions is serving a narrow slice of what it calls a universal solution.
So any framework for safe and trusted AI that does not expressly understand informality, language, and gender is not incomplete by accident; it is incomplete by design. I think the idea of an interoperable, flexible system is a forward-looking and an inclusive one. Alex, a lot of what you mentioned about governance inside the company is wonderful. And I think the voluntary commitments reflected at this summit are also fantastic. So now we come to the harder work. The ambition is real. The infrastructure exists. What remains is ensuring that we don't leave with just good intentions and good ideas, but with action. Thank you.
Thank you, Parvati Adani. It's wonderful to hear those perspectives. We're coming to the close of this session, so just a few parting words to all of you. I think we've done enough in this short conversation to really give a sense of how complex some of these issues are: the dynamics within companies, externally, and then globally across different geographies, and the challenges that are faced. But the reality is that all of us have a responsibility to engage on these issues, and we each have different roles. We've heard a bit about what some of the companies are doing, and we've heard a little about how we can challenge them and incentivize the actions they take in this space.
There are good practices, but they're not universally applied, and they're not available to some companies. There are companies that may want to engage in this, and we can help them to do it. NASSCOM and I have been discussing a little bit about how we simplify things and bring more people into the fold of this conversation. And, of course, we're here in an environment where governments are looking at what they need to do to create responsible business practices and incentivize them as well. So I hope everybody walks out of the room thinking: what can I do to continue this conversation? How can I differentiate between companies that are thinking about these issues in a way that will deliver, for myself, for my children, for my future, the outcomes we want to see?
AI innovation will work if there’s trust, and if the companies delivering it actually invest in products that embody those values and uphold human dignity going forward. So thank you all so much for joining us. Thank you for fitting this into your schedule today, and enjoy the rest of the summit. Thank you.
Event“Peggy Hicks reminded participants that AI‑related challenges affect people’s daily lives and that global standards, collaborative public‑private solutions, and rights‑based approaches can enable responsible AI with meaningful real‑world impact.”
The knowledge base identifies Peggy Hicks as Director of Thematic Engagement at the OHCHR and notes her emphasis on responsible AI governance requiring deliberate thought and engagement, which aligns with the reported statement [S21][S2].
“She emphasized that responsible outcomes do not happen automatically; they require deliberation and engagement, and that companies must share good practices while governments create a level playing field and incentives for responsible conduct.”
S2 explicitly states that responsible AI governance needs deliberate thought and engagement to avoid pitfalls, supporting the claim about the need for deliberation and a level playing field for companies and governments [S2].
“Tim Curtis argued that trust is not something technology earns through ambition alone but is earned through design choices, safeguards and accountability.”
S122 frames trust as a foundational requirement that must be built through design choices, safeguards, and accountability, confirming Curtis’s point about how trust is earned [S122].
“UNESCO’s Readiness Assessment Methodology Reports (RAMS) have been produced for more than 80 countries, providing a clear‑eyed look at how regional landscapes can evolve.”
S129 confirms the existence of UNESCO’s readiness assessment methodology but does not specify the number of countries; the claim that the reports have been produced is supported, while the 80-country figure is not independently verified [S129].
“UNESCO and LG AI Research are launching a massive open online course (MOOC) on ethics‑by‑design, to be delivered on Coursera, accessible to a wide global audience and providing practical, day‑to‑day tools for practitioners.”
S1, S130 and S131 all describe a UNESCO-LG AI Research MOOC on AI ethics, delivered via Coursera, aimed at a global audience and designed to give practical tools for everyday work, confirming the claim [S1][S130][S131].
“The MOOC is intended for practitioners who need concrete tools to bridge the gap between abstract standards and daily work.”
S131 explicitly states that the MOOC’s goal is to make AI ethics learning accessible and practical for day-to-day work, matching the reported purpose for practitioners [S131].
“UNESCO’s measurement approaches, including the readiness assessments, aim to move the debate beyond theory toward practical implementation.”
S123 adds nuance by describing UNESCO’s measurement approaches as shifting from procedural requirements to trust-building mechanisms, providing additional context to the claim about moving beyond theory [S123].
There is strong consensus on four points: global norms, human-rights-based principles and practical safeguards are the foundation for responsible AI; market incentives and board-level governance can create a race to the top; capacity-building tools (the MOOC, readiness assessments, ecosystem training) are needed to operationalise ethics; and multi-stakeholder, linguistically inclusive engagement is essential.
High consensus across UN, civil‑society and major tech firms on the need for standards, incentives, capacity building and inclusive stakeholder engagement. This convergence suggests a solid basis for coordinated policy actions, joint initiatives and the development of interoperable frameworks that can be scaled globally.
The panel displayed moderate disagreement centered on how best to translate global AI ethics standards into practice. Key tensions emerged between voluntary industry commitments versus formal regulatory safeguards, the overload of disparate frameworks versus the need for unified actionable guidance, and differing designs of incentive mechanisms (broad market rewards versus board‑level governance tied to capital). While participants shared common goals—responsible, human‑rights‑based AI and inclusive stakeholder engagement—their preferred pathways diverged, indicating that consensus on implementation strategies remains unsettled.
Moderate disagreement: participants largely agree on overarching objectives but differ on the mechanisms (regulatory vs voluntary, framework simplification, incentive architecture). This suggests that future policy work will need to reconcile these approaches to achieve coherent, scalable AI governance.
The discussion was driven forward by a handful of pivotal remarks that moved the conversation from high‑level rhetoric to concrete, actionable pathways. Tim Curtis’s framing of trust as a design problem and the announcement of a global AI‑ethics MOOC introduced a practical solution that anchored later talks. Rein Tammsaar’s concise articulation of the four UN‑derived priorities gave the panel a shared roadmap, which each speaker then mapped their own initiatives onto. Alex Walden and Hector Duroir provided vivid, multi‑layered governance models that illustrated how large tech firms can operationalise those priorities. Namit Agarwal’s data‑driven critique exposed a systemic compliance gap and his investor‑focused recommendations turned the critique into a set of concrete levers for change. Parvati Adani’s philosophical probe of an AI’s self‑awareness reminded participants why human oversight remains essential. Together, these comments created turning points that shifted the tone from descriptive to prescriptive, deepened the analysis of accountability mechanisms, and reinforced the central message that responsible AI requires coordinated, multi‑stakeholder effort across standards, education, corporate governance, and capital markets.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.