AI That Empowers Safety, Growth and Social Inclusion in Action
20 Feb 2026 10:00h - 11:00h
Summary
The session opened with Peggy Hicks emphasizing that responsible AI must address day-to-day challenges and deliver practical safeguards for all people, not only those in advanced economies, through global standards, public-private collaboration and rights-based approaches [1-9]. She underscored corporate duties to respect human rights and the need for human-rights due diligence, while urging governments to create a level playing field and reward firms that act responsibly [10-13].
Tim Curtis introduced UNESCO’s stance that trust is earned through design, safeguards and accountability, citing the UNESCO ethics recommendation and the RAMS readiness assessments now used in more than 80 countries, including a recent India report [32-34]. He announced a UNESCO-LG AI Research MOOC on “ethics by design” that will provide practical tools for developers worldwide [37-44].
Rein Tammsaar explained the UN Global Dialogue on AI Governance, mandated by a General Assembly resolution, and outlined its four member-state priorities: trustworthy AI, closing capacity gaps, cross-border governance, and anchoring AI in human rights and international law [60-74]. He noted that standards translate principles into actionable risk-management tools for companies and regulators [78-79].
Ankit Bose described NASSCOM’s four-decade mission to build capacity, create open assets and guide startups, SMEs and large firms toward responsible AI, pointing out that startups often deprioritise governance amid resource constraints [94-124]. Alex Walden detailed Google’s internal framework, from corporate values and UN guiding principles to AI principles, model-level requirements, application-level guardrails, executive review and post-launch monitoring [130-162]. Hector Duroir outlined Microsoft’s Responsible AI office, its Sensitive Use Case program, board-level oversight and reliance on OECD and UNESCO guidelines, while highlighting recent Indian voluntary commitments on multilingual safety [170-199]. Yuchil Kim explained LG’s contribution to the UNESCO MOOC, its annual AI ethics accountability report and a proprietary AI-powered data-compliance system aimed at transparent, inclusive AI [209-213].
Namit Agarwal presented the World Benchmarking Alliance’s assessment of 2,000 tech firms, revealing that only about 10 % meet global governance expectations and none disclose human-rights impact assessments, and called for board-level AI oversight, product-level implementation and robust impact assessments [224-240]. Panelists agreed that siloed frameworks hinder implementation and advocated programmatic stakeholder engagement, trusted-tester programmes and open-source initiatives such as Google’s Amplify to bring civil-society and academic input into product development [302-310].
In closing, Peggy summarized that while incentives, standards and multi-stakeholder collaboration are emerging, concrete action is required to turn good intentions into trustworthy AI that respects human dignity worldwide [350-363].
Keypoints
Major discussion points
– Global norms and multilateral governance are essential for responsible AI.
The opening remarks stress the need for “global standards, collaborative public-private solutions, and rights-based approaches” and for “responsible and effective AI governance and clarity of rules” to make AI work for all people [2-5][9-12]. The UN-led Global Dialogue on AI Governance further outlines member-state priorities – trustworthy AI, capacity-building, cross-border governance, and anchoring AI in human rights and international law [67-74].
– Capacity-building and education are being operationalised through assessments and a UNESCO MOOC.
UNESCO’s RAMS (Readiness Assessment Methodology) reports are being rolled out in over 80 countries to translate the global ethics recommendation into local practice [33-35]. A new massive open online course on AI ethics, co-developed with LG AI Research, will teach “ethics by design” and provide practical tools for fairness, transparency, safety, accountability and inclusion [36-44].
– Large tech companies are embedding responsible-AI principles into internal structures and product lifecycles.
Google cites its corporate policy on the UN Guiding Principles, AI principles, and a layered process of model-level requirements, application-level guardrails, executive review and post-launch monitoring [130-138][149-162]. Microsoft describes its Office of Responsible AI, the Sensitive Use-Case program, and board-level oversight, drawing on OECD and UNESCO principles [168-184].
– Investors and benchmarking organisations can drive accountability and incentivise good governance.
The World Benchmarking Alliance provides comparable, credible data on companies’ AI disclosures, finding that only ~10 % meet global governance expectations and none publish human-rights impact assessments [224-230]. It recommends that investors demand board-level AI risk responsibility, alignment of executive incentives, and robust AI-specific human-rights impact assessments [236-241].
– Inclusion, language diversity, and civil-society engagement are critical yet under-addressed.
Examples include voluntary commitments on multilingual safety tools in India [190-199] and Microsoft’s partnership with NGOs to build community-led benchmarks that reflect local cultural contexts [276-285]. Google’s “Amplify Initiative” and trusted-tester programs illustrate how companies can involve external stakeholders to improve language inclusion and overall safety [300-310].
Overall purpose / goal
The session aims to bring together UN bodies, governments, industry leaders, civil-society representatives, and investors to share concrete practices, identify gaps, and forge collaborative, rights-based mechanisms. These mechanisms should translate high-level AI ethics standards into actionable safeguards, capacity-building programmes, and market incentives, ultimately ensuring that AI development and deployment are trustworthy, inclusive, and beneficial for all societies.
Overall tone and its evolution
– Opening (0-15 min): Formal, optimistic, and forward-looking, emphasizing shared responsibility and the promise of global standards.
– Mid-session (15-35 min): Becomes more explanatory and technical, highlighting concrete tools (MOOC, assessments) and the practical challenges companies face.
– Later (35-50 min): Shifts to a candid acknowledgment of obstacles: fragmented frameworks, capacity gaps, and the need for stronger incentives, while still maintaining a collaborative spirit.
– Closing (50-53 min): Moves to a reflective, call-to-action tone, urging participants to translate “good intentions” into concrete actions and sustain the multi-stakeholder momentum.
Overall, the discussion maintains a constructive and solution-oriented tone, but it deepens in nuance as participants move from high-level framing to detailed examples of implementation hurdles and the necessity of broader engagement.
Speakers
– Peggy Hicks – Director, Office of the United Nations High Commissioner for Human Rights (OHCHR); moderator; expertise in human rights, AI governance, and responsible business conduct [S18][S19].
– Tim Curtis – Regional Director for UNESCO South Asia; expertise in AI policy, ethics, and multistakeholder collaboration [S2].
– Ankit Bose – Representative of NASSCOM (National Association of Software and Service Companies), India; focuses on responsible AI, industry capacity building, and tech ecosystem coordination.
– Rein Tammsaar – Ambassador, Permanent Representative of Estonia to the United Nations; co-facilitator and co-chair of the UN Global Dialogue on AI Governance; expertise in AI governance and diplomatic engagement.
– Namit Agarwal – Representative, World Benchmarking Alliance (non-profit); works on AI accountability, benchmarking of tech companies, and aligning capital-market incentives with responsible AI.
– Yuchil Kim – Vice President, LG AI Research; leads LG’s AI ethics, transparency, and responsible AI initiatives, including development of an AI ethics MOOC.
– Parvati Adani – Partner, Sero Amarchan Mangaldas (law firm); expertise in AI law, ethics, and the intersection of technology with human rights [S12].
– Alex Walden – Global Head of Human Rights, Google; leads Google’s responsible AI policies, human-rights impact assessments, and stakeholder engagement [S14].
– Hector Duroir – Director, Responsible AI Public Policy, Microsoft; oversees Microsoft’s AI principles, internal governance frameworks, and external collaborations on AI safety and inclusion.
Additional speakers:
– Ambassador Reintesma – mentioned by Tim Curtis as co-chair of the UN AI Dialogue; almost certainly a mis-transcription of Rein Tammsaar (listed above).
– Praveen – Mentioned by Peggy Hicks in closing remarks; affiliation not specified in the transcript.
– Dhani – Mentioned alongside Praveen; affiliation not specified in the transcript.
– Allie – Referred to by Peggy Hicks near the end; likely a mis-identification of an existing speaker (e.g., Alex Walden) but listed as a distinct name in the transcript.
Peggy Hicks opened the session by reminding participants that the challenges posed by artificial intelligence are “consequential … that have impacts in people’s lives on a day-to-day basis” and that any response must be grounded in “global standards, collaborative public-private solutions, and rights-based approaches” [1-2]. She stressed that responsible AI does not emerge spontaneously; it requires “deliberation, thought and engagement” to avoid pitfalls and to ensure that products “work for people, not only in advanced economies or for the dominant platforms” [3-8]. Hicks linked responsible governance to “clarity of rules for both companies and government” and called for “responsible and effective AI governance” that aligns with “global norms” [9-10]. She underlined the corporate duty to “respect human rights and address the risk to people stemming from their products” and presented human-rights due diligence as a pragmatic way to embed these obligations into operations [11-12]. Hicks also noted the complementary role of governments in creating a “level playing field” and rewarding firms that act responsibly, framing this as part of the B-Tech project’s aim to “make this conversation happen” through convenings and the use of UN guidelines [13-16]. The B-Tech project is hosted by the Office of the High Commissioner for Human Rights (OHCHR), and Tim Curtis later thanked this office for inviting the panel [13-16]. Peggy added that the UN Global Dialogue on AI Governance will be launched in July, with an inaugural convening in Geneva [13-16].
Tim Curtis, Regional Director for UNESCO South Asia, articulated UNESCO’s perspective. He argued that “trust is not something technology earns through ambition alone but … through design choices, safeguards and accountability” [32]. To operationalise the UNESCO Recommendation on the Ethics of AI, UNESCO has produced the Readiness Assessment Methodology (RAMS) reports, which have now been launched in “over 80 countries” and include a recent assessment for India [33-34]. Curtis announced the development of a joint UNESCO-LG AI Research massive open online course (MOOC) on “ethics by design”, to be delivered on Coursera, with the explicit goal of making AI-ethics learning “accessible to a wide global audience” and providing “practical … tools for day-to-day work” [37-44]. He positioned the MOOC as a bridge between high-level ethical recommendations and the concrete decisions developers face, outlining four concrete learner benefits: recognising common risks early, asking better questions during development, documenting decisions responsibly, and assessing impact on different groups [37-44]. True to its ethics-by-design premise, the course embeds these ethical questions from the start of development rather than after deployment [37-44].
Rein Tammsaar, co-chair of the United Nations Global Dialogue on AI Governance, contextualised the discussion within the UN system. He explained that the Dialogue was “mandated by all member states through a General Assembly resolution” and is therefore a “member-states-driven process” belonging to every country [60-62]. Tammsaar noted that the Dialogue has two co-chairs – one from El Salvador and the other from Estonia [60-62]. He presented the four priorities identified by member states: (i) safe, secure and trustworthy AI; (ii) closing capacity gaps, especially for developing nations; (iii) interoperable, cross-border governance; and (iv) anchoring AI in human rights and international law [67-74]. He argued that standards “turn principles into action”, shaping risk management, accountability and human oversight, and that the Dialogue will seek “common ground” rather than imposing a single model [78-79].
Ankit Bose, Senior Vice-President, NASSCOM, described the association’s four-decade mission to “build capacity, develop open assets and guide the ecosystem” from government to startups and SMEs [98-102]. He traced NASSCOM’s responsible-AI focus to a 2021 launch that identified a gap between rapid AI development and the missing “human element” of trust [95-98]. Bose highlighted that startups often place governance on a “second-or-probably the side-burner” because they must simultaneously build a product, a team and secure funding, a situation he warned is a “complete no-no” [120-124]. When asked how NASSCOM differentiates its engagement across company sizes, he explained that “big tech … are playing at the front foot”, services firms “follow their contracts”, mid-tier firms “try to understand how they grow … while building governance”, and startups need “much bigger support” because they struggle to prioritise governance amid day-to-day pressures [110-124].
Alex Walden, Global Head of Human Rights at Google, presented Google’s internal governance framework. He began by linking corporate values (freedom of expression, privacy and universal benefit) to the company’s AI responsibilities [130-131]. Google’s policy “commits to respect the UN Guiding Principles on Business and Human Rights” and is reinforced by its own AI Principles, which translate high-level values into operational guidance for teams across Google Cloud, YouTube and Search [135-137]. Walden listed the standards that inform Google’s work: the UN Guiding Principles, OECD AI Principles, UNESCO recommendations, the B-Tech project and other peer-industry initiatives [138-141]. He described a layered process: “model-level requirements” that mandate data validation and testing; “application-level guardrails” that add further evaluations and mitigations; “executive review” where senior leaders assess risks before launch; and “post-launch monitoring” to capture novel or residual risks [154-162]. He framed this as a “multilayered approach” that embeds responsibility throughout the product lifecycle [149-162]. When pressed about the pressures of championing human-rights considerations within Google, Walden noted that market incentives already push the company to deliver “safe and trusted” products, given that Google’s consumer-facing services such as Search and Gmail shape public perception [149-152]. He explained that the internal processes (model requirements, application guardrails, executive sign-off and continuous monitoring) are the mechanisms that turn those market pressures into concrete safeguards [153-161].
Hector Duroir, Director of Responsible AI Public Policy at Microsoft, outlined Microsoft’s evolution in responsible AI. He recounted that the Office of Responsible AI was created in 2019 [175-176], building on “AI principles” established in 2018 around privacy, reliability, inclusion, fairness, safety and security [170-174]. Microsoft’s Sensitive Use-Case programme “triages … high-risk applications” and escalates them to the Aether ethics committee, which includes board-level representation [179-182]. The programme draws on the OECD AI Principles and UNESCO’s recommendation [184-185]. Duroir also highlighted recent Indian voluntary commitments that “encourage companies to forge multilingual capabilities” and to evaluate safety risks beyond English-centric norms, linking this to Microsoft’s principle of inclusion [188-199]. He described the Samishka project in India, a collaboration with NGOs that creates “community-led benchmarks” to develop safety tools grounded in local cultural contexts, warning that simply translating English tools would lose essential nuance [276-285].
The importance of linguistic and cultural inclusion was reinforced by several speakers. Alex added that Google’s “Amplify Initiative”, an open-source app, allows members of the public to fine-tune language models, thereby promoting language inclusion [308-310]. Parvati Adani later echoed this sentiment, arguing that any framework that ignores language, gender and cultural contexts is “incomplete by design” [336-338].
Yuchil Kim, Vice President of LG AI Research, spoke about LG’s contribution to the UNESCO MOOC and its broader responsible-AI activities. He positioned the MOOC as a “bridge in the gap” for practitioners who struggle to apply ethical concepts in daily work, noting that LG also provides an “AI-powered data-compliance system” and publishes an “annual accountability report on AI” (now in its third edition) to share best practices and challenges [209-213]. Kim’s remarks underscored the need for transparent, inclusive reporting to support the global learning effort.
Namit Agarwal, Executive Director, World Benchmarking Alliance (WBA), presented the results of the latest assessment of 2,000 tech firms. He reported that “close to 40 % of the companies have disclosures on AI principles, but just above 10 % meet the expectations on the governance aspect” and that “none of the 200 companies … disclose their reports on human-rights impact assessment” [227-228]. From this evidence, the WBA calls for three investor-driven actions: (i) board-level AI risk responsibility and alignment of executive incentives; (ii) product-level translation of ethical principles, including identification of high-risk use cases; and (iii) robust, AI-specific human-rights impact assessments with meaningful public summaries [236-241]. He framed investors as “catalytic” actors who can make “consequences for weak governance … consequential for companies to move in that direction” [231-232].
A tension emerged between Ankit’s concern about “framework fatigue” and Tim’s confidence that UNESCO’s RAMS assessments and the developing MOOC provide actionable guidance [258-266][33-44].
Across the panel, there was a strong consensus on the necessity of multi-stakeholder engagement. Peggy called for “continuous, programme-level engagement” with civil society, academia and affected communities [144-149]. Alex described Google’s “programme-level approach” that includes trusted-tester programmes, the Impact Lab’s community research and the open-source Amplify Initiative [302-307][308-310]. Hector highlighted Microsoft’s inclusion of NGOs and academia in the Samishka benchmarks [276-285]. Yuchil reinforced this by noting LG’s practice of publishing annual reports to share both successes and struggles, invoking the African proverb “If you want to go fast, go alone; if you want to go far, go together” [290-296]. These remarks illustrate a shared belief that siloed internal structures must give way to collaborative, cross-functional processes.
Parvati Adani delivered the closing reflections, using a provocative experiment in which she asked an AI tool whether it had “ethical limits”. The tool replied “I don’t know”, prompting her to note that AI “has no continuous thread of existence” and “cannot bear consequences” [327-332]. She argued that because AI lacks conscience, “human rights are not optional” and that frameworks must explicitly address language, gender and cultural inclusion or remain “incomplete by design” [334-338]. Adani warned that voluntary commitments, while “fantastic”, must be turned into concrete actions to avoid “good intentions and good ideas” without impact [340-345].
Finally, Peggy concluded by reiterating the session’s key messages. She acknowledged the “complex … dynamics within companies and externally and then globally” and stressed that all participants have a responsibility to “engage on these … each of us have different roles” [350-352]. She highlighted the need to move beyond “good practices” that are not universally applied, to create incentives that reward responsible behaviour, and to continue the dialogue so that AI innovation can be trusted and uphold human dignity [353-363]. The panel closed with a collective pledge to translate standards, capacity-building programmes and market incentives into concrete, accountable actions that benefit all societies [350-363].
Transcript
These are consequential challenges that have impacts in people’s lives on a day-to-day basis. And our session is going to address how global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact. And we know that these things don’t just happen on their own. It takes deliberation. It takes thought. It takes engagement to make sure that the products and approaches that we’re using in the AI field avoid some of the pitfalls that may be associated with them. And the companies are going to share some of the good practices that they’re engaging in about how that works in the real world. And we know if they don’t engage in that way, that the risks are there and very much present.
And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for. Responsible and effective AI governance and clarity of rules for both companies and government, and alignment around the global norms, will help us to get to that point. Companies, of course, have a responsibility to respect human rights and address the risk to people stemming from their products. And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations. But, of course, governments are the ones that also have a responsibility here, too, to create a level playing field, and we talk a lot about that.
We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. Our B-Tech project at OHCHR is aimed at how do we make this conversation happen. So through convenings like this one, through engaging with companies, pulling out their good practices and letting all of you hear about them and encouraging others to do the same is what that project is really about. And we are really looking at and working with, of course, how to use tools like the UN Guidelines and UNESCO’s AI recommendations on ethics, and figuring out how we weave those into the decisions and work that’s being done now.
And as I said, bringing this conversation to this summit, where a truly global and multi-stakeholder effort is happening to really look at AI innovation and deployment, has been incredibly important. So without further ado on that front, I want to hand over to my colleague and co-sponsor here, Tim, over to you.
Thanks, Peggy. And good morning, everyone. Ambassador Tammsaar from Estonia and co-chair of the AI Dialogue that the United Nations is holding. Of course, Peggy and dear panelists, it’s really wonderful to be here with you today to be part of this conversation on responsible practices and industry standards. And as we all know now where AI is moving, you know, from something we discuss in theory to really something that is shaping the decisions in real time and real institutions, and of course for real people. I’d like to thank particularly the Office of the High Commissioner for inviting us to join in on this and for working with us on organising this event. It’s been a pleasure.
At UNESCO we often return to a simple idea: that trust is not something technology earns through ambition alone, but really it is earned through design choices, through safeguards and accountability. And that’s why the Recommendation on the Ethics of AI, we believe, is so important, because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, that promote fairness and support inclusion. So we’ve been translating this global agreement and framework into local realities through what we call the RAMS, the Readiness Assessment Methodology Reports, which we’ve now launched in over 80 countries, and just two days ago we launched India’s readiness assessment report.
And these assessments provide a kind of clear-eyed look at how regional landscapes can evolve, inviting us to move beyond theory and towards this responsible human-centred deployment of AI we hear about. And so by grounding innovation in these evidence-based diagnostics, we hope to ensure that progress remains aligned with those shared values. But, of course, a recommendation only matters if it can be applied by people who are actually catering, creating and using AI. And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or a MOOC, as more commonly known, on the ethics of artificial intelligence.
And the course will be delivered on Coursera with a very clear goal, to make AI ethics learning accessible to a wide global audience, and to make a practical… for day-to-day work. And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design. And so in simple terms, that we don’t wait until something goes wrong to ask these ethical questions. We should build these questions into the process from the beginning. And the course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made rather than after systems have already been deployed. The course, of course, is really going to be focusing on practical tools so that we can offer clear ways of thinking and working that can be used in everyday settings.
So it’ll help learners, for example, recognise common risks early, ask better questions during development, document the decisions made responsibly and think through the impact of AI systems on different groups of people. We’re moving beyond a one-size-fits-all approach, and we’ve done this by collaborating with experts from over 10 countries and 5 continents, with some of the leading minds from the University of Oxford and the Alan Turing Institute. And this global group, this global coalition, is really vital, because AI of course doesn’t operate in a vacuum: it’s shaped by languages, it’s shaped by cultural norms and institutional capacities of where it is developed and deployed. So by integrating these diverse perspectives we’re trying to move from the theory, again, to the live reality. So ultimately this MOOC is a capacity-building effort with a simple purpose: to help more people around the world build and use AI in ways that are responsible, inclusive and worthy of public trust. We look forward of course to this continued collaboration with governments, with industry, with academia and civil society as we take it forward, and we hope many of you will engage with the course when it launches, not only as learners, but also as partners in building a stronger culture of ethical innovation across the world.
Thank you very much.
Thanks, Tim Curtis, UNESCO. We’re all looking forward to it. Now we have anticipation. We’re very fortunate to have an addition to our program today with Ambassador Tammsaar, the permanent representative of Estonia, who’s one of the co-facilitators for the Global Dialogue on AI that will be launched in July, and a big responsibility. And he’s here to tell us a little bit about where it’s heading and how you all can contribute. Please, Ambassador.
Thank you very much. Good morning. I don’t know, is it morning? Yeah, maybe. So after three days here in India, I think that I lost track of time. Is it morning or evening? But thank you, UNESCO and the Office of the High Commissioner for Human Rights for convening this really important discussion, and of course to all our hosts here in India. And I also thank partners who contributed to this work. Today I’ll speak on behalf of the two co-chairs of the United Nations Global Dialogue on AI Governance, and the two co-chairs are from El Salvador and Estonia. The first Global Dialogue on AI Governance was mandated by all member states through a General Assembly resolution adopted in August 2025.
So this is a member-states-driven process. It belongs to every country, to all member states. And its task is very practical, while its scope is multilateral. So this… The aim is, you know, to come together. It is a platform where governments and stakeholders exchange best practices and experiences, and this, we believe, can strengthen international cooperation on AI governance and ensure human-centric AI supports sustainable development and reduce, indeed, digital divides that are already there. So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities. So first, they want safe, secure, and trustworthy AI systems, and the trust here, of course, is an absolute key word.
Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to participate fully in the AI economy, and inclusivity and equal access are essential here. Third, they want governance approaches that can work across borders and be practical. So fragmentation raises the cost and weakens trust. So interoperability is absolutely key. And fourth, and that is, I think, quite topical here, they want AI anchored in human rights and international law. And this includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability. Now, we know human rights are not optional. They are part of a mandate agreed by member states. And today’s focus on responsible practices and industry standards responds directly to these priorities.
And standards turn principles into action. They shape risk management, they clarify accountability, they guide human oversight, and they give companies and regulators tools they can apply in real systems. So let me say that the Global Dialogue will not, and I guess it cannot, impose one single model. We will listen, we will identify common ground, we will build on existing initiatives: the ethics of AI was mentioned here, and it’s of course one of them. We’ll avoid, or try to avoid, duplication, and we will focus on practical value. So I encourage you to bring your experience into this process: share what works, share what doesn’t work, help us identify approaches that can scale across regions and levels of capacity. And in the best case, if we succeed, and failure is not an option, safety and trust will be visible in how systems are designed, deployed and governed. They will be reflected in real safeguards and in benefits that reach more people, and this is very important for us. Thank you. So I thank you and wish you a productive day and practical exchanges that move our common work forward.
And with this, I give it over to the real experts and panel. Thank you very much.
Thank you, Ambassador. Wonderful to have you with us, and I think we’re all looking forward. We’re looking forward to having all of you join us in Geneva in July. So with that introduction by the three of us, we’re really, as the Ambassador said, going to turn it over to those who can really inform us about how this work is happening and I hope inspire us to both give support and emphasis, amplification to the work that you’re doing and bring more into the fold around responsible business conduct. So with that introduction, I’d really like to start, Ankit Bose, with you from NASSCOM. We had a great conversation yesterday. I’d love for you to inform our audience that NASSCOM represents the leading Indian tech industries and we want to hear more about your work and what you’re doing to encourage companies and help them to ensure a responsible work environment.
Thank you.
Thank you so much for having me here; it's my pleasure to address the audience. So NASSCOM has been around for four decades plus, right? We have been helping the tech industry in the country shape and change the whole agenda for the country. Specifically on responsible AI, the mission for NASSCOM started in 2021. We started with a gap: we were seeing a lot of AI getting developed, but we found there was a missing element, and that was the responsible, the trust, the human element. That is how the mission started. From that point in time, our main core objective has been to develop open assets, right?
Build capacity, build adoption, and help all the different components in the ecosystem, right from the government to the startups and the SMEs, all of them. So we have been trying to help them go up the ladder and really become aware not only of the gloomy side of AI, but also of the big upside if they adopt responsible AI governance practices early on. I think that's what we have been doing.
Can I ask, Ankit, how does this work? You mentioned that full range of companies that are involved, and one of the topics we spoke a bit about yesterday is the difficulties sometimes when you have big companies, we have some of them represented here, but also startups and small and medium enterprises. How do you differentiate? How do you make sure that we’re engaging across that very differentiated group of industry?
Yeah, so if I break it down, there are the big techs, then the services companies, then the middle-sized companies, the small companies, and the startups. All five of them need different sorts of engagement, right? The big techs are playing on the front foot. The services companies, the bigger ones, have to follow their contracts. The medium-tier companies are really trying to understand how to grow their AI base while building services or products using the right governance principles. But the bigger support is needed by the smaller startups, because they are really, really fighting day to day, right?
And believe me, a startup founder has to first build a business, a tech stack, a team, and also get funding, and then, on top of all that, focus on a lot of things around governance, right? In that whole journey, what we have seen is that they put governance second, or on the back burner, which is something we see as a complete no-no. If you do that when you're building a product, you might miss it when you're scaling. I think that's the gap we are trying to address.
Great. Thanks very much. I think we’re going to turn to the scale side of it now with Alex, you’re next in line. So, Alex Walden, you’ve been working on these issues within Google, and I think one of the insights that I’ve learned from you over the time we’ve known each other is really how complex it is to bring to product teams and those that are on the technologist side some of these issues of responsible business conduct and human rights, and give us the benefit of your wisdom about how that works and how we can do it better.
Thanks for the question, and I love that you said that, because I do see a very important part of my role as making sure that the stakeholders we work with understand how things are working within companies, because that helps us be better and helps you be better advocates for helping us improve. But to your question, and I know I need to be fast: I think where it really starts for us is from the values perspective. We're a company that's founded on values around freedom of expression, privacy, and bringing the benefit of our technology to everyone, and that's where it begins. But ultimately it's the governance inside of the company that permeates throughout the 180,000 people who work at Google to ensure that we are being responsible in the way that we're developing AI.
So as a baseline for us, responsibility and thinking about what responsibility means has to start with human rights, and then we can build from there. So we have a corporate policy that says that we have a commitment to respect the UN guiding principles on business and human rights. And we’ve also built on that with things like our AI principles that reinforce sort of more of an operational way in which we can manifest those values in all of the teams that are working to develop the various models or applications of the models in, say, Google Cloud or YouTube or Search. Just to maybe hone in a little bit on the types of standards that we’re using, because I think that’s important because there’s so much work being done in our ecosystem.
We use the UN Guiding Principles. We use the work happening at the OECD and the work at UNESCO, and we engage with our peers in industry through the B-Tech project and the Global Network Initiative, and this is just a few. All of the guidance that comes out of those places, and the dialogue that happens there, ultimately helps inform how things work inside the company. And then just one layer down, and then I'll stop: having programs and processes like training and dedicated teams is ultimately how you operationalize this in getting a product to market. I can say more, but those are the big-picture structures required for a company to do this at scale.
So, you know, I'm not going to let you off the hook quite that easily. We know this isn't always easy, right? There are obstacles to really convincing people it's worth the time. I've been in the room where concern about safety is dismissed as hand-wringing: no more hand-wringing, we just need to move forward. And I'm sure there are pressures that you face as the lead for human rights within this company in trying to get your message heard. Tell me a bit about how you've been able to surmount some of those challenges, and about the different perspectives on whether these are hurdles or supports for the company to do its mission more effectively.
Well, I think in general corporations are incentivized to put products on the market that are safe and trusted by consumers. People know Google best through Google Search or Gmail, the various consumer-facing ways they engage with our products. So we have an inherent market and business reason to put out products that people trust and that deliver good outcomes, and we have to have processes inside that make that real. So what we do is have model requirements at the most granular level. Before any product goes to market, there are model requirements, and those teams are focused on validating the data, doing testing, and doing evaluations.
And that's at the model level. Then at the application layer, we have requirements for teams to again do testing and additional evaluations, set additional guardrails, and focus on what mitigations will be put in place for things like Gemini before it gets launched. Then we have executives review these things: before anything goes to market, leadership needs to understand what the risks are, how we're mitigating them, and what plan is in place to address that. That is an important part of the process for us. And last, we have post-launch monitoring, because we can do all the testing in the world, but once you've launched a product, there may be novel, new, or residual risks that arise. So we have to have a process for continuing to monitor that, understanding it, getting feedback, and improving.
Great. That’s super helpful, Alex, to understand that multilayered approach that needs to happen within companies, including, I think, that executive level that you mentioned. I mean, the signals from on top will actually inspire all of those other levels to do what we’re hoping they’ll do. And we have another example with us of some of these practices. I want to turn to Hector Duroir, who’s the director of responsible AI public policy at Microsoft. And we want to hear more about what you’re doing to embed responsible policy practices within Microsoft’s approach.
Thank you very much, Peggy, and thanks for having Microsoft here. I want to start with the inception of our responsible AI approach, which was in 2018. At that stage you didn't have codes, directives, regulations, or frameworks guiding our approach; we were nearly starting from a blank page. And we didn't talk about foundation models or frontier models at that stage; it was all about specific AI systems and applications, such as facial recognition, which was very prominent. So we forged our AI principles around priorities such as privacy, reliability, inclusion, fairness, safety, and security. The whole challenge was then to translate these high-level principles into practice. And it's really on this basis that we created the Office of Responsible AI in 2019, around these principles, which then became our RAI Standard, guiding all our actions across our different programs.
One of the programs I want to reference here is our Sensitive Use Case program. It's a team within the Office of Responsible AI that is in charge of triaging and challenging sensitive use cases coming from our different markets, on AI systems and models that could actually violate the principles I was referencing. This team analyzes these use cases and, when necessary, brings them to our Aether committee, which is our AI ethics committee, and it involves Microsoft's leadership, both at the CTO level and the president level. I think that board-level involvement is very important in this kind of internal risk-management framework. And this work has been informed over the past years by many interesting developments.
The OECD AI Principles, obviously, but also the UNESCO Recommendation on the Ethics of AI. All of these principled approaches, which evolve and are refined and nuanced as AI capabilities advance, are so important and are very useful signals for us in refining our own AI governance program within Microsoft.
Hector, you've talked a little bit about how you look at it from an internal perspective. But we wanted to hear a bit about how you look externally: what are the drivers behind how you engage across the sector and with the government side as well?
Yeah, and I think we always navigate this very interesting interplay between best practices, international norms, and regulatory standards. A very good example here is the line of voluntary commitments that have been signed across the AI summits. If you look at Bletchley Park in the UK, or the South Korea summit that happened afterwards, they really helped us, as Alex was referencing, to ground our model-testing approach, especially against public-safety and national-security risks. So when we talk about cybersecurity, for instance, or loss of control, or CBRN risks, that really grounded a very solid testing approach, with concrete operational triggers and concrete high-risk domains that we're monitoring at the model level.
So that was one. The OECD reporting framework that came out of the Hiroshima AI Process is another very good tool that I was involved in and want to reference here. It was launched along the lines of the Paris AI Summit, and it's actually a very good way to understand how risk-management transparency works in practice and how real-world deployment and transparency experience can guide upstream development. It's the kind of feedback loop it creates that is very interesting. And because we're in Delhi, just to reference the voluntary commitments that were signed yesterday: I think that's another very good and positive approach the Indian government has been taking, especially the commitment that encourages companies to build multilingual capabilities.
So basically, build better evaluations against safety risks, not only against English norms but beyond them. I think that speaks to our principle of inclusion, which is so important, and I'm very happy that they initiated this work.
I have to say, one of the contrasts I've noticed between what's being talked about here in Delhi and prior summits is that issue of inclusion. The language issue, I think, is so underrepresented in some of the conversations we have, so it's wonderful that you've given it a shout-out. We're also very fortunate, Yuchil, to have you with us: Yuchil Kim, who is a vice president at LG AI Research. We'd really like to hear more about how you're engaging with these global technical and policy standards. We talked about the UN Guiding Principles on Business and Human Rights, the UNESCO Recommendation on the Ethics of AI, and, of course, the MOOC that's being worked on.
So give us a sense of how these frameworks are being engaged with by LG.
The essence of the MOOC is that it is for practitioners. Practitioners are usually struggling with the same question: how do I actually apply this in my day-to-day work? So we are focusing on bridging that gap. We provide the standard risks, including many of the risks that Timothy mentioned, and we also contribute our own experience. I previously mentioned our process, and we have also built our own AI-powered data compliance system. And, as I will mention soon, we have an annual report on our AI ethics activities. So I hope the MOOC can be a good resource for everyone; it will launch in this half of the year. The last thing I want to talk about is transparency. We have a lot of activities around responsible AI and inclusive AI, and we publish an annual accountability report on AI. Yesterday we released the third edition.
I have some copies of it here, which I will hand out after our session, so please refer to the document.
Wonderful. I think it's super interesting to understand both how you've been looking at that learning process within the company and how the more global approach of working with UNESCO is going to be very helpful; it's one of those areas where we all know so much more needs to happen. We've heard the company perspective here, and we're very fortunate to have with us, from the World Benchmarking Alliance, Namit Agarwal. Namit, one of the things we've talked about is how we incentivize the race to the top amongst all of the actors in this space. And I hope you'll give us some insights, based on the work the World Benchmarking Alliance is doing, about how capital and investment can be used to make sure that innovation is being approached in a responsible way.
Over to you, Namit.
Thanks for having me here. And I’m not representing investors, but we do work with several stakeholders, including investors, civil society, governments, and companies. So we are a nonprofit, and we try to strengthen accountability of the world’s most influential companies so that their impact on people and planet can be sustainable. We also assess the world’s most influential tech companies on whether they are advancing a trustworthy, rights -respecting, and inclusive digital future using standards such as the UN Guiding Principles, but also others that were mentioned by my fellow panelists here. Our role is to provide comparable, credible, and standardized data that our stakeholders can use because of the challenges that we face. Because it’s an ecosystem approach, so how can they work together in doing that?
So capital can definitely incentivize innovation and responsibility, but capital alone cannot do it. We published our latest assessments of 2,000 companies at Davos last month, and on the tech side in particular, what we found is that close to 40% of the companies have disclosures on AI principles, but just above 10% meet global expectations on the governance side of it, and none of the 200 tech companies we assess disclose reports on human rights impact assessments. I think that clearly shows that while there is a lot of intent, and some work is happening, governance and accountability are not really there, so a lot of work needs to happen. And we believe responsible innovation requires incentives for long-term risk management, clear expectations that are tied to capital,
and consequences for weak governance, because it has to be consequential for companies to move in that direction.
And I think that is where investors have a very catalytic role to play. We convene a coalition of investors and civil society organizations to push this work forward.
It's so interesting that we work in a sector that is incredibly data-driven, and yet we don't necessarily bring data into this conversation in the ways that we need to. There's that idea of incentivizing the right practices and leverage within companies; but also, too many conversations focus on the tech industry as a whole and group everybody together as if they were all engaging in the same way, and the work you're doing really helps us understand those nuances. Could you go a bit deeper into some of the examples and concrete suggestions coming out of your work on how to push that discussion forward?
Absolutely. So I think the first thing is engagement and dialogue, and we have been fortunate to have good engagement with both Google and Microsoft on this panel. But it's important to build on engagement, because it's a continuous process. It's important for investors to engage with some of the leaders, but also to engage with companies who are fence-sitters, to bring them along faster; the laggards will eventually catch up and come on board. For investors, and for capital and finance generally, to incentivize responsible innovation and responsible AI, there are three things we believe investors should do. First, on AI governance and board oversight: investors should ask whether there is clear board-level responsibility for AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain. Second, on implementation at the product and business-model level, and we heard some examples just now: investors need to move beyond policy statements and ask companies how ethical principles are translated into product-level strategy, how high-risk use cases are identified, and whether there are internal mechanisms and controls to identify harms as they emerge.
And third is robust human rights impact assessments and asking whether companies conduct AI -specific impact assessments. Are they publishing meaningful summaries? Are mitigation measures integrated into product cycles? And I think this is an area where we have seen a lot of gaps.
Great. Thanks, Namit. I wonder if we could take that one step further and get input from the other members of the panel on what this looks like in practice. Of course, this is a panel focused on the company perspective, but I think we have some of our real partners here on the civil society side. And as much as companies understand that that conversation needs to happen, they sometimes find it difficult to make sure that the way those risks are assessed really brings in the voices and experiences of people, particularly people in the different contexts and environments in which companies' products are being rolled out.
So those issues of stakeholder engagement, access, dialogue with the civil society side, it would be great to hear a little bit more about some of the lessons that you’ve all learned there. And I see you shaking your head. Please tell us from the NASSCOM perspective how you look at it.
Well, from an enterprise lens, when they are trying to implement responsible or trustworthy AI, I think the biggest issue is that there are different groups internally: the tech group, the business group, the legal and risk group, the finance group. And all of them are working in silos, from what we see, because the business wants the best for the business, the tech group wants to put in the best technology, the risk group is very conservative, and finance always has an upper limit on what they want to spend. That's the issue. What helps is if all of them build a collaboration that can be taken use case by use case.
The high-impact use cases can have more investment and more focus versus the low-risk ones. I think that's the first thing. The second thing, from what NASSCOM is seeing, is that there are a lot of frameworks getting developed. Every country, every place you go, there's a new framework. But the move from framework-heavy and concept-heavy work to action is not happening, and that's a big gap. If a technologist or a developer is trying to implement responsible governance, he will be lost in the frameworks; he doesn't know what's actionable or what he should do. So that's one big need.
That's what we are also driving. We are trying to drive a multi-organization-led approach, where organizations of all different sizes come together to discuss, collaborate, and implement. That's the second nugget. So those are the two points; I know time is up.
No, that's great. I think it shows that a collaborative effort is going to be super important, rather than a siloed approach, for so many practical reasons as well: companies can only respond to so many different frameworks, and what they need is simple guidance and support to actually implement at this stage. Hector, do you want to offer quick comments from the company side about how you're facing those challenges?
Yeah, two very quick examples of how we involve civil society and academia in this process. Our work really sits at the intersection of policy, research, and engineering groups. To inform product development with our responsible AI principles, we regularly publish internal policies, and it's an iterative process with our research teams and our product teams. As part of this process, we include academics who have expertise in a specific risk domain, or think tanks and civil society organizations that have thought very deeply about the deployment of one AI system or one AI model in certain contexts. That really informs the products we build from the inception. The second example speaks to a big governance challenge we face: the importance of refining AI evaluations.
That's a constant effort. In India, for instance, we've been working with some NGOs on a project named Samishka to build community-led benchmarks, basically safety datasets that we then include in system construction, so that we get datasets grounded in a community's specific cultural and contextual aspects. Because if you just translate safety tools from English into another language, you lose all the context for which those safety tools were built. That's another area where we need more cooperation between civil society, governments, and companies: how do we build these safety tools beyond English norms, such as in India?
That’s great, and it takes work to do that, and the more we can spread, you’ve done some of the work, you know how to do some of it, and diffuse it amongst other companies that could learn from it. That’s part of what we’re trying to do with BTEC, but I think there’s a lot more to be done. Yuchil, do you want to come in?
Yes, I agree with his comment. On safety, we should work together. That's the reason we make our annual report: sharing our best practices, and also sharing what we struggled with, is, we think, a very important thing. As my colleague mentioned, there's an African proverb that says: if you want to go fast, go alone; if you want to go far, go together. Building a trustworthy and safe ecosystem is not a sprint. It's a long journey, and we can go on it together.
It’s a long journey with a lot of sprints happening day to day, as far as I can tell. Some of them here at the summit, but over to you, Allie.
So much sprinting. I think maybe just to pick up specifically on the stakeholder engagement piece. So a few things. One, I think it’s important for companies to have a programmatic approach to stakeholder engagement, so we need to have ways in which we’re regularly engaging with stakeholders in general, not just on a specific product question. But so I would say first a programmatic approach, and then second, something that is more ad hoc. So when we need to consult specifically on a product, we need to have a sort of process and way to do that. The other thing is we have programs internally like trusted tester programs where we are working with third -party organizations to make sure that they have early access or pre -launch access to models or to a product in order to test it so that we can identify potential risks or errors ahead of time and address them before we launch a product.
And then last, just to highlight something we do that is similar to others: our research team called Impact Lab, which is part of the overall human rights programmatic work at the company, engages directly with communities, doing research to inform how we improve our products and what we develop. They recently launched something called the Amplify Initiative, an open-source app focused specifically on language inclusion that allows members of the public and communities to engage in the fine-tuning work around our language models. There is a wealth of information and expertise out there that we should all be benefiting from, and because it's open source we can also share it with others in industry.
That's great to hear, and I'm sure more needs to be done on that front, but the amplification effect is so crucial. Look, we could probably go on talking all day, but I see the clock is ticking down. Fortunately, rather than us trying to draw the conclusions ourselves, we've welcomed another speaker to give us some concluding remarks and pull these pieces together. I'm very happy to invite Parvati Adani from Shardul Amarchand Mangaldas to help us think through some of these issues. Please.
Thank you for that. This is partly easy and partly tough, because there was a lot to take in, but I think we can take a great deal back from this conversation. Firstly, thank you: you've held the conversation beautifully. And thank you to UNESCO, the United Nations, NASSCOM, and everybody in this room who brought their knowledge and their conscience to this conversation. I actually want to talk a little bit about a conversation with a machine. As we were thinking about this topic, I wanted to share something that I feel resonates with a lot of what you've talked about. In preparation, I decided to ask the tools we're talking about here a question that we avoid asking ourselves.
Do you, and I'm talking to the tool, have ethical limits? Do you understand the difference between what you can do and what you should do? And I'm going to quote verbatim: at a conscious level, the answer is, I don't know, and neither does anybody else. The gap is a philosophical, uncomfortable position; think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me. I don't have any consequences to bear. Now, what came back, though unexpectedly thoughtful, showed us something about restraint, about values, and about what it appears to have internalized. It acknowledged the difference between instruction and conscience, a lot of what we've talked about today.
And so, when we talk about this, we said human rights are not optional; we cannot ignore the impact on people and planet; we have to create incentives for good governance. So when a tool cannot understand this for itself, I think we have to do that job. What we have chosen in India, and the fact that we are having this conversation in this location, is not ceremonial; it is very deliberate. We have chosen innovation over restraint, and we have to think about making that the right choice: we allow innovation a safe place, without the full weight of regulation. And I think we have a lot to learn from all of you who have been doing this for so long, on privacy, safety, and the impact on children and vulnerable groups. The question is whether the people we are talking about are going to be the subjects of this transformation, or just its audience, or just its object. An AI system that cannot understand a woman asking legal questions in Hindi is serving a narrow slice of what it calls a universal solution.
So any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident; it is incomplete by design. The idea of an interoperable, flexible system is a forward-looking and inclusive one. A lot of what you mentioned, Alex, about governance inside the company, is wonderful, and the voluntary commitments reflected in this summit are also fantastic. So now we come to the harder work. The ambition is real. The infrastructure exists. What remains is ensuring that we don't leave with just good intentions and good ideas, but with action. Thank you.
Thank you, Praveen and Dhani. It’s wonderful to hear those perspectives. We’re coming to the close of this session. Just a few parting words to all of you. I think we’ve done enough in this short conversation to really give a sense of how complex some of these issues are, the dynamics within companies and externally and then globally across different geographies, the challenges that are faced. But the reality is that all of us have a responsibility to engage on these, and we each have different roles. We’ve heard a bit about what some of the companies are doing. We’ve heard a little bit about how we can challenge them and incentivize the actions that they take in the space.
There are good practices, but they’re not universally applied, and they’re not available to some companies. There are companies that may want to engage in this, and we can help them do so. NASSCOM and I have been discussing how we can simplify things and bring more companies into the fold of this conversation. And, of course, we’re here in an environment where governments are looking at what they need to do to create responsible business practices and incentivize them as well. So I hope everybody walks out of the room thinking: what can I do to continue this conversation? How can I differentiate between companies that are thinking about these issues in a way that will deliver, for myself, for my children, for my future, the things we want to see?
AI innovation will work if there is trust, and if the companies delivering it actually invest in products that embody those values and uphold human dignity going forward. So thank you all so much for joining us. Thank you for fitting this into your schedule today, and enjoy the rest of the summit. Thank you.
Event“Corporations have a duty to respect human rights and address risks from their AI products; human‑rights due diligence is a pragmatic way to embed these obligations”
Peggy Hicks’ emphasis on corporate human-rights duties and due diligence is corroborated by multiple sources that describe integrating human-rights due diligence into standards and operations [S21], [S119] and [S120].
“Effective AI governance requires clear rules for companies and governments and should operate across development, validation and deployment stages rather than as an after‑thought”
The need for governance mechanisms that span the whole AI lifecycle is explicitly stated in the knowledge base [S73].
“Governments should create a level playing field and reward firms that act responsibly”
The importance of a level playing field for fair competition and responsible corporate behaviour is highlighted in the knowledge base [S122].
“The UN Global Dialogue on AI Governance will be launched in July with an inaugural convening in Geneva”
The knowledge base indicates the Dialogue will be launched later in the year but does not specify July or Geneva as the inaugural venue [S128]; the reported timing is not confirmed.
“Rein Tammsaar is co‑chair of the United Nations Global Dialogue on AI Governance”
The opening address of the AI Governance Dialogue lists the co-chairs, confirming Rein Tammsaar’s role [S77].
“AI challenges are consequential and require global standards, collaborative public‑private solutions and rights‑based approaches”
Other speakers in the knowledge base stress the need for inclusive, rights-based AI systems and proactive risk management, providing additional nuance to this claim [S115] and [S3].
The panel displayed a strong consensus on four pillars: (1) the necessity of global, rights‑based standards; (2) the central role of multi‑stakeholder, programmatic engagement; (3) the need for financial incentives and investor oversight; and (4) the requirement for concrete internal governance mechanisms. Capacity building and attention to language/cultural inclusion were also widely endorsed, though implementation gaps remain.
High consensus – the convergence across government, UN agencies, civil‑society, investors and leading tech firms indicates a shared understanding that coordinated standards, incentives and inclusive processes are essential. This creates a solid basis for joint actions, but the discussion also highlighted practical challenges (e.g., fragmented frameworks, siloed internal structures) that must be addressed to translate agreement into effective governance.
The panel shows strong consensus on the overarching goals of inclusive, trustworthy AI and the need for multi‑stakeholder engagement. Disagreements are confined to implementation pathways – specifically the usefulness of existing frameworks, the design of incentive mechanisms, and the reliance on voluntary commitments versus enforceable actions. These divergences are moderate and revolve around practical translation rather than fundamental values.
Moderate disagreement: while all speakers share the same high‑level objectives, they differ on the most effective means to achieve them. This suggests that future work should focus on harmonising standards with clear, actionable tools, aligning market incentives with investor governance, and establishing mechanisms to move voluntary commitments into binding actions.
The discussion was driven forward by a series of pivotal remarks that moved the dialogue from high‑level ideals to concrete mechanisms. Tim Curtis’s framing of trust as a design issue introduced the need for practical education, which was taken up by UNESCO’s MOOC and echoed throughout. Rein Tammsaar’s four‑point agenda gave the panel a shared policy lens, while Alex Walden and Hector Duroir supplied detailed corporate governance models that operationalised those points. Namit Agarwal’s data‑driven critique exposed the gap between rhetoric and reality, prompting calls for investor‑led accountability. Parvati Adani’s experiential query to an AI system starkly illustrated the philosophical limits of self‑governance, reinforcing the necessity of human oversight. Subsequent comments on stakeholder engagement, multilingual safety, and framework overload built on these turning points, steering the conversation toward collaborative, context‑sensitive solutions. Collectively, these insights reshaped the tone from abstract aspiration to actionable, multi‑stakeholder pathways for responsible AI.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.