AI That Empowers Safety, Growth and Social Inclusion in Action
20 Feb 2026 10:00h - 11:00h
Session at a glance
Summary
This discussion focused on developing responsible AI governance through global standards, public-private collaboration, and rights-based approaches to ensure AI benefits all people, not just advanced economies. The session brought together representatives from UNESCO, the UN Office of the High Commissioner for Human Rights, tech companies including Google, Microsoft, and LG AI Research, industry association NASSCOM, and civil society organizations to share best practices and challenges in implementing responsible AI.
Key speakers emphasized that responsible AI governance must be grounded in human rights principles, with companies having obligations to respect the UN Guiding Principles on Business and Human Rights. UNESCO announced a new global online course on AI ethics being developed with LG AI Research to make practical AI ethics training accessible worldwide. The UN Global Dialogue on AI Governance, launching in July, was presented as a member state-driven platform for sharing best practices and strengthening international cooperation on AI governance.
Company representatives detailed their internal governance structures, including dedicated responsible AI teams, executive oversight requirements, pre-launch testing protocols, and stakeholder engagement programs. However, significant implementation gaps remain, with research showing that while 40% of major tech companies have AI principles, only 10% meet governance expectations and none disclose human rights impact assessments. The discussion highlighted particular challenges for startups and smaller companies that lack resources for comprehensive governance frameworks.
Speakers stressed the importance of moving beyond English-centric AI development to include diverse languages and cultural contexts, noting that safety tools must be community-led and culturally grounded. The conversation concluded with calls for continued collaboration between governments, industry, academia, and civil society to ensure AI development serves human dignity and creates trustworthy systems that benefit everyone.
Keypoints
Major Discussion Points:
– Global AI Governance and Standards Development: The discussion emphasized the need for collaborative international frameworks, including the UN Global Dialogue on AI Governance launching in July, UNESCO’s AI ethics recommendations, and the development of practical tools like the global MOOC on AI ethics to translate principles into actionable guidance.
– Corporate Implementation of Responsible AI Practices: Companies shared their internal governance structures, including board-level oversight, risk assessment processes, human rights due diligence, and the challenge of operationalizing ethical principles across different organizational levels – from technical teams to executive leadership.
– Stakeholder Engagement and Inclusive Development: Panelists discussed the importance of involving civil society, academia, and diverse communities in AI development, particularly addressing language inclusion, cultural context, and ensuring AI systems serve broader populations rather than just dominant platforms or advanced economies.
– Investment and Market Incentives for Responsible AI: The conversation explored how capital markets and benchmarking can incentivize responsible innovation, with data showing significant gaps between companies’ stated AI principles and actual governance implementation, highlighting the need for accountability mechanisms.
– Bridging the Gap Between Frameworks and Action: A recurring theme was the challenge of moving from theoretical frameworks to practical implementation, particularly for smaller companies and startups that need simplified, actionable guidance rather than complex regulatory frameworks.
Overall Purpose:
The discussion aimed to explore how global standards, public-private collaboration, and rights-based approaches can enable responsible AI development with meaningful real-world impact. The session focused on sharing best practices from industry leaders while identifying practical pathways for broader adoption of responsible AI governance across different types of organizations and global contexts.
Overall Tone:
The discussion maintained a constructive and collaborative tone throughout, characterized by shared commitment to responsible AI development rather than adversarial positioning. Speakers demonstrated mutual respect and acknowledgment of the complexity of the challenges while remaining optimistic about solutions. The tone was professional yet urgent, recognizing both the opportunities and risks of AI development, and concluded with a call to action that emphasized collective responsibility and the need to move from good intentions to concrete implementation.
Speakers
Speakers from the provided list:
– Peggy Hicks – Office of the High Commissioner for Human Rights (OHCHR), involved in the B-Tech project
– Tim Curtis – UNESCO, working on AI ethics and the Readiness Assessment Methodology Reports (RAMS)
– Rein Tammsaar – Ambassador, Permanent Representative of Estonia, Co-chair of the United Nations Global Dialogue on AI Governance
– Ankit Bose – NASSCOM (National Association of Software and Service Companies), working on responsible AI initiatives
– Alex Walden – Google, Lead for Human Rights, working on responsible AI governance and human rights due diligence
– Hector Duroir – Microsoft, Director of Responsible AI Public Policy
– Yuchil Kim – LG AI Research, Vice President, involved in AI ethics and the UNESCO MOOC development
– Namit Agarwal – World Benchmarking Alliance, working on accountability assessment of tech companies and responsible innovation incentives
– Parvati Adani – Shardul Amarchand Mangaldas, provided concluding remarks on AI ethics and governance
Additional speakers:
None – all speakers mentioned in the transcript were included in the provided speakers names list.
Full session report
This comprehensive discussion on responsible AI governance brought together international organisations, technology companies, industry associations, and civil society representatives to explore how global standards, public-private collaboration, and rights-based approaches can enable responsible AI development with meaningful real-world impact. The session highlighted both the progress made and significant challenges remaining in translating AI ethics principles into practical implementation.
International Frameworks and Global Cooperation
The conversation began with an overview of emerging international frameworks designed to guide responsible AI development. Peggy Hicks from the UN Office of the High Commissioner for Human Rights (OHCHR) emphasised that responsible AI governance requires deliberate thought and engagement to avoid potential pitfalls, stressing that companies have fundamental responsibilities to respect human rights and address risks stemming from their products. The B-Tech project at OHCHR aims to facilitate these conversations through convenings, company engagement, and the identification of good practices that can be shared across the industry.
Tim Curtis from UNESCO introduced a significant new initiative: a global massive open online course (MOOC) on AI ethics, developed in partnership with LG AI Research and delivered through Coursera. This course represents a shift from theoretical frameworks to practical application, focusing on “ethics by design” rather than reactive approaches. The MOOC will address fairness, transparency, safety, accountability, and inclusion at the decision-making stage rather than after system deployment. The course development involves experts from over ten countries across five continents, recognising that AI operates within diverse linguistic, cultural, and institutional contexts. Curtis also highlighted UNESCO’s Readiness Assessment Methodology Reports (RAMS) launched in over 80 countries, with India’s report having been released just two days prior to this discussion.
Ambassador Rein Tammsaar of Estonia, co-facilitator with El Salvador of the upcoming Global Dialogue on AI Governance, outlined this member state-driven process, mandated by a UN General Assembly resolution and launching in July. This platform aims to facilitate government and stakeholder exchange of best practices whilst strengthening international cooperation on AI governance. The dialogue will focus on four key priorities: safe, secure, and trustworthy AI systems; closing capacity gaps, particularly for developing countries; governance approaches that work across borders; and ensuring AI remains anchored in human rights and international law.
Corporate Implementation and Governance Structures
The discussion revealed sophisticated internal governance structures within major technology companies. Alex Walden from Google described a multi-layered framework beginning with corporate values around freedom of expression and privacy, built upon human rights commitments aligned with UN Guiding Principles on Business and Human Rights. Google’s approach includes AI principles that provide operational guidance across all teams developing models or applications.
Google’s governance operates at multiple levels: model requirements mandating data validation, testing, and evaluations before market release; application-layer requirements for additional testing and guardrails; executive review processes ensuring leadership understands risks and mitigation strategies; and post-launch monitoring to address novel or residual risks. Their stakeholder engagement includes programmatic dialogue, ad-hoc consultation for specific products, trusted tester programmes providing third-party organisations with pre-launch model access, and research initiatives like the Impact Lab. Their recently launched Amplify Initiative represents an open-source approach allowing community members to participate in language model fine-tuning.
Microsoft’s approach, outlined by Hector Duroir, evolved from 2018 when few regulatory frameworks existed. The company developed AI principles around privacy, reliability, inclusion, fairness, safety, and security, which became the foundation for their Office of Responsible AI established in 2019. Microsoft’s Sensitive Use Case programme conducts triage and challenges potentially problematic applications, escalating concerns to an AI ethics committee with board-level representation. Their collaboration with civil society includes partnerships with NGOs on projects like Samishka in India to build community-led benchmarks, recognising that safety tools must be grounded in specific community contexts rather than simply translating English-language approaches.
Both companies emphasised voluntary commitments made at various AI summits, including commitments at this summit to develop multilingual capabilities and build better evaluations against safety risks beyond English norms. The OECD frontier model reporting framework, emerging from the Hiroshima AI process, provides transparency mechanisms creating feedback loops between deployment experiences and upstream development decisions.
Industry Ecosystem Challenges
Ankit Bose from NASSCOM provided insights into challenges faced across the technology ecosystem, particularly highlighting different needs of various company sizes. NASSCOM’s responsible AI mission, launched in 2021, focuses on developing open assets, building capacity, and supporting adoption across the entire ecosystem from government to startups and SMEs.
Bose identified a critical implementation gap: whilst numerous frameworks exist globally, there remains significant disconnect between concept-heavy approaches and actionable guidance. Developers and technologists often become overwhelmed by framework proliferation without clear direction on practical implementation. This challenge is particularly acute for startups, where founders must simultaneously build businesses, develop technology, assemble teams, secure funding, and implement governance practices. The tendency to treat governance as secondary during early development can create significant problems during scaling phases.
Effective implementation requires collaboration between typically siloed groups including technical teams, business units, legal and risk departments, and finance functions, working together on a use-case-by-use-case basis with high-impact applications receiving proportionally greater investment and focus.
Investment Perspectives and Market Incentives
Namit Agarwal from the World Benchmarking Alliance provided sobering data on corporate AI governance implementation. Their assessment of 200 influential technology companies revealed that whilst approximately 40% have disclosed AI principles, only slightly above 10% meet global expectations for governance implementation, and remarkably, none disclose human rights impact assessments. This progression from 40% to 10% to 0% illustrates the significant gap between stated intentions and actual implementation.
The World Benchmarking Alliance demonstrates how capital markets can incentivise responsible innovation through clear expectations tied to consequences for weak governance. Investors should focus on three key areas: AI governance and board oversight, including clear board-level responsibility for AI risk and executive incentives aligned with long-term human rights risk mitigation; implementation at product and business model levels, moving beyond policy statements to examine how ethical principles translate into product-level strategy; and robust human rights impact assessments with meaningful public summaries and integrated mitigation measures.
Cultural Context and Inclusion Challenges
A recurring theme was the critical importance of moving beyond English-centric AI development toward truly inclusive approaches. The challenge extends beyond simple translation to encompass cultural nuances, informal communication patterns, and contextual understanding that varies significantly across different communities.
LG AI Research, represented by Yuchil Kim, emphasised transparency through annual accountability reports sharing both best practices and implementation struggles. Kim referenced their third edition annual accountability report released the day before this discussion, reflecting the African proverb: “If you want to go fast, go alone. If you want to go far, go together.” Building trustworthy and safe AI ecosystems requires collective long-term commitment rather than individual competitive sprints.
Critical Perspectives on Fundamental Challenges
Parvati Adani from Shardul Amarchand Mangaldas provided a powerful concluding perspective that reframed the technical and policy discussions around fundamental questions of responsibility and inclusion. Her conversation with an AI system about ethical limits revealed profound insights about the philosophical void at the heart of AI decision-making. The system’s response—acknowledging it has “no continuous thread of existence” and “cannot verify” its own ethical understanding—crystallised the fundamental challenge that these systems cannot bear moral responsibility or understand consequences.
Adani’s observation that “any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident. It’s incomplete by design” challenged the notion of universal AI solutions by highlighting how exclusion of marginalised voices represents intentional design choices rather than technical limitations.
Future Directions and Ongoing Challenges
Despite sophisticated frameworks and governance structures, significant implementation challenges remain. The gap between principles adoption and actual governance implementation is particularly pronounced in areas requiring ongoing operational commitment, such as human rights impact assessments and meaningful stakeholder engagement.
The discussion revealed tension between framework proliferation and the need for practical implementation guidance. Different organisational contexts require differentiated approaches—large technology companies can invest in comprehensive governance structures, while startups and smaller companies face resource constraints that make such approaches impractical.
Several unresolved challenges require continued attention: scaling responsible AI practices from large companies to resource-constrained startups; translating competing frameworks into actionable guidance for technical practitioners; ensuring meaningful stakeholder engagement beyond tokenistic consultation; addressing tensions between innovation speed and responsible development; creating effective incentive structures that reward responsible AI practices; and developing robust evaluation methods for AI systems across diverse languages and cultural contexts.
Conclusion
This discussion revealed both significant progress and substantial challenges in implementing responsible AI governance. Whilst sophisticated frameworks exist within major technology companies and international organisations, critical gaps remain in translating principles into practice, particularly for smaller companies and in ensuring truly inclusive development serving diverse global populations.
The conversation demonstrated consensus on fundamental principles—human rights as baseline requirements, the necessity of multi-stakeholder collaboration, the importance of language inclusion, and the value of transparency. However, it also revealed implementation complexity across different organisational contexts and the ongoing challenge of moving from intentions to concrete action.
The philosophical questions raised about AI systems’ inability to bear moral responsibility underscore the fundamental importance of human oversight and governance frameworks. The initiatives discussed—from the Global Dialogue on AI Governance launching in July to the UNESCO MOOC on AI ethics—represent important steps toward more effective and inclusive approaches to responsible AI development that truly serve the diverse global populations these technologies increasingly affect.
Session transcript
These are consequential challenges that have impacts in people’s lives on a day-to-day basis. And our session is going to address how global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact. And we know that these things don’t just happen on their own. It takes deliberation. It takes thought. It takes engagement to make sure that the products and approaches that we’re using in the AI field avoid some of the pitfalls that may be associated with them. And the companies are going to share some of the good practices that they’re engaging in about how that works in the real world. And we know if they don’t engage in that way, that the risks are there and very much present.
And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for. Responsible and effective AI governance, clarity of rules for both companies and governments, and alignment around global norms will help us to get to that point. Companies, of course, have a responsibility to respect human rights and address the risk to people stemming from their products. And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations. But, of course, governments are the ones that also have a responsibility here, too, to create a level playing field, and we talk a lot about that.
We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. Our B-Tech project at OHCHR is aimed at how do we make this conversation happen. So through convenings like this one, through engaging with companies, pulling out their good practices and letting all of you hear about them, and encouraging others to do the same, is what that project is really about. And we are really looking at and working with, of course, tools like the UN Guiding Principles and UNESCO’s AI recommendation on ethics, and figuring out how we weave those into the decisions and work that’s being done now.
And as I said, bringing this conversation to this summit, where a truly global and multi-stakeholder effort is happening to really look at AI innovation and deployment, has been incredibly important. So without further ado on that front, I want to hand over to my colleague and co-sponsor here, Tim, over to you.
Thanks, Peggy. And good morning, everyone. Ambassador Rein Tammsaar from Estonia, co-chair of the AI Dialogue that the United Nations is holding, and of course Peggy and dear panelists, it’s really wonderful to be here with you today to be part of this conversation on responsible practices and industry standards. And as we all know, AI is now moving from something we discuss in theory to something that is really shaping decisions in real time, in real institutions, and of course for real people. I’d like to thank particularly the Office of the High Commissioner for inviting us to join in on this and for working with us on organising this event. It’s been a pleasure.
At UNESCO we often return to a simple idea: that trust is not something technology earns through ambition alone, but really it is earned through design choices, through safeguards and accountability. And that’s why the recommendation on the ethics of AI, we believe, is so important, because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, that promote fairness and support inclusion. So we’ve been translating this global agreement and framework into local realities through what we call the RAMs, the Readiness Assessment Methodology reports, which we’ve now launched in over 80 countries, and just two days ago we launched India’s readiness assessment report.
And these assessments provide a kind of clear-eyed look at how regional landscapes can evolve, inviting us to move beyond theory and towards this responsible, human-centred deployment of AI we hear about. And so by grounding innovation in these evidence-based diagnostics, we hope to ensure that progress remains aligned with those shared values. But, of course, a recommendation only matters if it can be applied by the people who are actually creating and using AI. And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or a MOOC, as it’s more commonly known, on the ethics of artificial intelligence.
And the course will be delivered on Coursera with a very clear goal: to make AI ethics learning accessible to a wide global audience, and to make it practical for day-to-day work. And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design. In simple terms, we don’t wait until something goes wrong to ask these ethical questions; we should build these questions into the process from the beginning. And the course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made, rather than after systems have already been deployed. The course, of course, is really going to be focusing on practical tools, so that we can offer clear ways of thinking and working that can be used in everyday settings.
So it’ll help learners, for example, recognise common risks early, ask better questions during development, document decisions responsibly, and think through the impact of AI systems on different groups of people. We’re moving beyond a one-size-fits-all approach, and we’ve done this by collaborating with experts from over 10 countries and 5 continents, with some of the leading minds from the University of Oxford and the Alan Turing Institute. And this global coalition is really vital, because AI of course doesn’t operate in a vacuum: it’s shaped by the languages, cultural norms and institutional capacities of where it is developed and deployed. So by integrating these diverse perspectives, we’re trying to move, again, from theory to the lived reality. So ultimately this MOOC is a capacity-building effort with a simple purpose: to help more people around the world build and use AI in ways that are responsible, inclusive and worthy of public trust. We look forward, of course, to continued collaboration with governments, with industry, with academia and civil society as we take this forward, and we hope many of you will engage with the course when it launches, not only as learners, but also as partners in building a stronger culture of ethical innovation across the world.
Thank you very much.
Thanks, Tim Curtis, UNESCO. We’re all looking forward to it. Now we have anticipation. We’re very fortunate to have an addition to our program today with Ambassador Tammsaar, the Permanent Representative of Estonia, who’s one of the co-facilitators for the Global Dialogue on AI that will be launched in July, and a big responsibility. And he’s here to tell us a little bit about where it’s heading and how you all can contribute. Please, Ambassador.
Thank you very much. Good morning. I don’t know, is it morning? Yeah, maybe. After three days here in India, I think I’ve lost track of time; I can’t tell whether it’s morning or evening. But thank you, UNESCO and the Office of the High Commissioner for Human Rights, for convening this really important discussion, and of course to all our hosts here in India. And I also thank the partners who contributed to this work. Today I’ll speak on behalf of the two co-chairs of the United Nations Global Dialogue on AI Governance, and the two co-chairs are from El Salvador and Estonia. The first Global Dialogue on AI Governance was mandated by all member states through a General Assembly resolution adopted in August 2025.
So this is a member state-driven process. It belongs to every country, to all member states. And its task is very practical, while its scope is multilateral. The aim is, you know, to come together. It is a platform where governments and stakeholders exchange best practices and experiences, and this, we believe, can strengthen international cooperation on AI governance, ensure human-centric AI supports sustainable development, and reduce, indeed, the digital divides that are already there. So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities. First, they want safe, secure, and trustworthy AI systems, and trust here, of course, is an absolute key word.
Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to participate fully in the AI economy, and inclusivity and equal access are essential here. Third, they want governance approaches that can work across borders and be practical. Fragmentation raises costs and weakens trust, so interoperability is absolutely key. And fourth, and that is, I think, quite topical here, they want AI anchored in human rights and international law. And this includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability. Now, we know human rights are not optional. They are part of a mandate agreed by member states. And today’s focus on responsible practices and industry standards responds directly to these priorities. Standards turn principles into action. They shape risk management, they clarify accountability, they guide human oversight, and they give companies and regulators tools they can apply in real systems. So let me say that the Global Dialogue will not, and I guess it cannot, impose one single model. We will listen, we will identify common ground, we will build on existing initiatives; the ethics of AI was mentioned here, and it is of course one of them. We will avoid, or try to avoid, duplication, and we will focus on practical value. So I encourage you to bring your experience into this process: share what works, share what doesn’t work, help us identify approaches that can scale across regions and levels of capacity. And in the best case, if we succeed, and failure is not an option, safety and trust will be visible in how systems are designed, deployed and governed. They will be reflected in real safeguards and in benefits that reach more people, and this is very important for us. So I thank you and wish you a productive day, with practical exchanges that move our common work forward.
And with this, I give it over to the real experts and panel. Thank you very much.
Thank you, Ambassador. Wonderful to have you with us, and I think we’re all looking forward to having all of you join us in Geneva in July. So with that introduction by the three of us, we’re really, as the Ambassador said, going to turn it over to those who can really inform us about how this work is happening and, I hope, inspire us to give support, emphasis and amplification to the work that you’re doing, and bring more into the fold around responsible business conduct. So with that introduction, I’d really like to start, Ankit Bose, with you from NASSCOM. We had a great conversation yesterday. I’d love for you to inform our audience; NASSCOM represents the leading Indian tech industry, and we want to hear more about your work and what you’re doing to encourage companies and help them to ensure a responsible work environment.
Thank you.
Thank you so much for having me here. And it’s my pleasure to address the audience. So NASSCOM has been there for almost four decades plus, right? We have been helping the tech industry in the country to shape and change the whole agenda for the country. I think that’s what we have been doing, specifically on responsible AI, right? I think the mission for NASSCOM started in 2021. We started from a gap: you know, we were seeing a lot of AI getting developed, but we found that there was some missing element, and that was the responsible element, the trust, the human element. I think that is how the mission started.
From that point in time, our main core objective has been to develop open assets, right? Build capacity, build, you know, adoption, right? And I think help all the different components in the ecosystem, right from the government to the startups and the SMEs, all of them. So we have been trying to help them go up the ladder and really become aware, I think, of not only the gloomy side of AI, but also the bright side: if they adopt responsible AI governance practices right at the early stage, they can have a big upside. I think that’s what we have been doing.
Can I ask, Ankit, how does this work? You mentioned the full range of companies that are involved, and one of the topics we spoke a bit about yesterday is the difficulty sometimes when you have big companies, and we have some of them represented here, but also startups and small and medium enterprises. How do you differentiate? How do you make sure that we’re engaging across that very differentiated group of industry?
Yeah, so I think if I take it, there’s the big techs, right? Then there are the services companies. Then there are the middle-sized, the small, and the startups. I think all of them have different sorts of engagement, right? The big techs, I think, are playing on the front foot. The services companies have to follow their contracts, right? The bigger services companies. The medium-tier companies are really trying to understand how they grow their AI base and at the same time build, you know, that service or product using the right governance principles. But again, I think the bigger support is needed for the, you know, the smaller startups, right? Because they are really, really fighting day to day, right?
I think, and believe me, a startup founder has to first build a business, a tech, a team, right? And also get funding, and in parallel focus on, you know, a lot of things around governance, right? I think in that whole journey that we have seen, they are putting it second, or probably on the back burner, which is something we see as a complete no-no. If you do that, you know, when you’re building a product, you might miss it when you’re scaling. I think that’s what we are seeing.
Great. Thanks very much. I think we’re going to turn to the scale side of it now with Alex, you’re next in line. So, Alex Walden, you’ve been working on these issues within Google, and I think one of the insights that I’ve learned from you over the time we’ve known each other is really how complex it is to bring to product teams and those that are on the technologist side some of these issues of responsible business conduct and human rights, and give us the benefit of your wisdom about how that works and how we can do it better.
Thanks for the question, and I love that you said that, because I do see a very important part of my role as making sure that the stakeholders we work with understand how things are working within companies, because that helps us be better and helps you be better advocates for helping us improve. So, anyway, to your question, because I know I’m going to be fast: I think where it really starts for us is from the values perspective. Obviously, we’re a company that’s founded on values around freedom of expression and privacy and bringing the benefit of our technology to everyone, and so that’s where it begins. But ultimately, it’s the sort of governance inside of the company that permeates throughout the 180,000 people that work at Google to ensure that we are being responsible in the way that we’re developing AI.
So as a baseline for us, responsibility and thinking about what responsibility means has to start with human rights, and then we can build from there. So we have a corporate policy that says that we have a commitment to respect the UN guiding principles on business and human rights. And we’ve also built on that with things like our AI principles that reinforce sort of more of an operational way in which we can manifest those values in all of the teams that are working to develop the various models or applications of the models in, say, Google Cloud or YouTube or Search. Just to maybe hone in a little bit on the types of standards that we’re using, because I think that’s important because there’s so much work being done in our ecosystem.
We use the UN Guiding Principles. We use the work happening at the OECD, the work at UNESCO, engagement with our peers in industry through the B-Tech project, through the Global Network Initiative, and this is just a few, but all of the guidance that comes out of those places and the dialogue that happens there helps us ultimately inform how things are working inside the company. And then just one layer down, then I’ll stop: I think having programs and processes like training and dedicated teams, ultimately that’s how you operationalize this through getting a product to market. And so I can say more, but I think those are kind of the big-picture structures for what’s required for a company to do this at scale.
So, you know, I’m not going to let you off the hook quite that easily. We know that this isn’t always easy, though, right, that there are obstacles to really convincing people it’s worth the time. I’ve been in the room where hand-wringing is invoked, you know, “no more hand-wringing about safety, we need to just move forward”, and I’m sure there are pressures that you face as the lead for human rights within this company trying to get your message heard. Tell me a bit about how you’ve been able to surmount some of those challenges, and about that different perspective on whether these are hurdles or supports for the company to do its mission more effectively.
Well, I mean, I think in general corporations are incentivized to put products on the market that are safe and that are trusted by our consumers. People know Google best through Google Search or Gmail, the varieties of consumer-facing ways they’re engaging with our products. And so we do have an inherent market, business reason to put out products that people trust and that deliver good outcomes. And so we have to have processes inside that make that real. So what we do is we have model requirements at the most granular level. Before any product goes to market, there are model requirements, and those teams are focused on ensuring that they’re validating the data and doing testing and doing evaluations. And that’s at the model level. Then at the application layer, we have requirements for teams to be, again, doing testing, additional evaluations, setting additional guardrails, and focusing on what mitigations are going to be put in place for, again, things like Gemini before that gets launched. And then we have executives review these things. So before anything goes to market, leadership needs to understand what the risks are and how we’re mitigating them, and have a plan in place to address that. So that is an important part of the process for us. And then last, we have post-launch monitoring, because obviously we can do all the testing in the world, but once you’ve launched a product,
there may be novel or new or residual risks that arise. And so we have to have a process for continuing to monitor that, understanding it, getting feedback, and improving.
Great. That’s super helpful, Alex, to understand that multilayered approach that needs to happen within companies, including, I think, that executive level that you mentioned. I mean, the signals from on top will actually inspire all of those other levels to do what we’re hoping they’ll do. And we have another example with us of some of these practices. I want to turn to Hector Duroir, who’s the director of responsible AI public policy at Microsoft. And we want to hear more about what you’re doing to embed responsible policy practices within Microsoft’s approach.
Thank you very much, Peggy, and thanks for having Microsoft here. So, yeah, I want to start with the inception of our responsible AI approach, which was in 2018. At that stage, you didn’t have codes, directives, regulations or frameworks guiding our approach; we were nearly starting from a blank page. And we didn’t talk about foundational models or frontier models at that stage. It was all about specific AI systems and applications, such as facial recognition, for instance, which was very popular. So we forged our AI principles around priorities such as privacy, reliability, inclusion, fairness, safety, and security. And with these high-level principles, the whole challenge was to translate them into practice afterwards. And so it’s really on this basis that we forged the Office of Responsible AI when we created it in 2019, around these principles, which then became our RAI standard, guiding all our actions across our different programs.
One of the programs that I want to reference here is our Sensitive Use Case program. It’s a team within the Office of Responsible AI that is in charge of doing some triage, basically challenging sensitive use cases coming from our different markets on AI systems and models that could actually violate the principles I was referencing. And so this team analyzes these use cases and then, when it’s necessary, brings them to our Aether committee, which is our AI ethics committee. And it involves the Microsoft board, both at the CTO level and the president level. And I think the board’s inclusion is very important in this kind of internal risk management framework. And this work has been informed during the past years by many interesting developments.
So the OECD AI principles, obviously, but also the UNESCO recommendation on AI ethics. And I think all these principled approaches, which evolve and are refined and nuanced as AI capabilities advance, are actually so important and are very useful signals for us to refine our own AI governance program within Microsoft.
Hector, you’ve talked a little bit about how you look at it from an internal perspective. But we wanted to hear a bit about how you look externally: what are the drivers behind how you engage across the sector and with the government side as well?
Yeah, and I think we always navigate this very interesting interplay between best practices and international norms and regulatory standards. And a very good example here is the line of voluntary commitments that have been signed across the AI summits. So if you look at Bletchley Park in the UK, or the South Korea summit that happened afterwards, it really helped us, as Alex was referencing, to ground our model testing approach, especially against public safety and national security risks. So when we talk about cybersecurity, for instance, or loss of control, or CBRN risks, that really grounded some very solid testing approaches, with some concrete operational triggers and concrete high-risk domains that we’re monitoring at the model level.
So that was one. The OECD HAIP reporting framework, which came out of the Hiroshima AI process, is another very good tool that I was involved in and want to reference here. It was launched along the lines of the Paris AI Summit. And actually, it’s a very good way to understand how risk management transparency works in practice, and how real-world deployment experience and transparency experience can guide upstream development. And it’s this kind of feedback loop that it creates that’s very interesting. And because we’re in Delhi, just to reference the voluntary commitments that were signed yesterday: I think that’s another very, very good and positive approach that the Indian government has been taking, especially on one of the commitments, which basically encourages companies to forge multilingual capabilities.
So basically, build better evaluations against safety risks, not only against English norms, but beyond English norms. And I think that speaks to, you know, our principle of inclusion. That’s so important, and I’m very happy that they initiated this work.
I have to say, one of the contrasts I’ve been making, when I look at what’s been talked about here in Delhi as opposed to prior summits, is that issue of inclusion. And the language issue, I think, is so underrepresented in some of the conversations we have. So it’s wonderful that you’ve given that a shout-out. We’re very fortunate, Yuchil, to have you with us as well: Yuchil Kim, who is a vice president at LG AI Research. So we’d really like to hear more about how you’re engaging with these global technical and policy standards. We talked about the UN Guiding Principles on Business and Human Rights, the UNESCO recommendation on AI ethics, and, of course, the MOOC that’s being worked on.
So give us a sense of how these frameworks are being engaged with by LG.
So the essence of the MOOC is for the practitioner. The practitioner is usually struggling with the same question: how do I actually apply this in my day-to-day work? So we are focusing on bridging that gap. We provide the best standards on the risks; we cover a lot of the risks that Timothy mentioned. And we also contribute our own experience. I previously mentioned our process, and we also made our AI-powered data compliance system. And also, as I will mention soon, we have an annual report about our AI ethics activities. So I hope the MOOC can be a good practice for everyone. It will launch in this half of the year. The last thing I want to talk about is transparency. We have a lot of activities around AI, responsible AI, inclusive AI, and we publish our annual accountability report on AI. Yesterday we released the third edition.
So here is some record of that, which I will hand out after our session. So please refer to my documents.
Well, yeah. Wonderful. No, I think it’s super interesting to understand both how you’ve been looking at that learning process within the company, but also how that more global approach working with UNESCO is going to be very helpful. And I think it’s one of those areas where we all know so much more needs to happen. But we’ve heard the company perspective here, and we’re very fortunate to have with us, from the World Benchmarking Alliance, Namit Agarwal. And Namit, you know, I think one of the things that we’ve talked about is how we incentivize the race to the top amongst all of these actors in this space. And you’re going to, I hope, give us some insights, based on the work that the World Benchmarking Alliance is doing, about how capital and investment can be used to make sure that innovation is being approached in a responsible way.
Over to you, Namit.
Thanks for having me here. And I’m not representing investors, but we do work with several stakeholders, including investors, civil society, governments, and companies. So we are a nonprofit, and we try to strengthen the accountability of the world’s most influential companies so that their impact on people and planet can be sustainable. We also assess the world’s most influential tech companies on whether they are advancing a trustworthy, rights-respecting, and inclusive digital future, using standards such as the UN Guiding Principles, but also others that were mentioned by my fellow panelists here. Our role is to provide comparable, credible, and standardized data that our stakeholders can use, given the challenges that we face. It’s an ecosystem approach, so the question is how they can work together in doing that.
So capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. We published our latest assessments of 2,000 companies at Davos last month, and particularly from the tech side, what we found is that close to 40% of the companies have disclosures on AI principles, but just above 10% meet global expectations on the governance aspect of it, and none of the 200 tech companies that we assess disclose their reports on human rights impact assessments. And I think that clearly shows that while there is a lot of intent, and some work is happening, governance and accountability are not really there, so we need a lot of work to happen there. And we believe responsible innovation requires incentives for long-term risk management, clear expectations that are tied to capital, and consequences for weak governance, because it has to be consequential for companies to move in that direction.
And I think that is where investors have a very catalytic role to play. We convene a coalition of investors and civil society organizations
Now, I mean, I think it’s so interesting that we work in a sector that is incredibly based on data, but yet we don’t necessarily bring it into this conversation in the ways that we need to, and that idea of both incentivizing the right practices and leverage within companies, but also, you know, too many conversations sort of focus on the tech industry as a whole and sort of group everybody together as if they’re all engaging in the same way. And so the work that you’re doing really helps us to understand those nuances more. Could you go a bit deeper and look a bit at some of the examples and concrete suggestions coming out of your work as to how to push that discussion forward more?
Absolutely. So I think the first thing is engagement and dialogue, and I think that is a very important way. And we have been fortunate to have good engagement with both Google and Microsoft on this panel. But again, it’s important to build on engagement, because it’s a continuous process. It’s important for investors to engage with some of the leaders, but also to engage with companies who are fence-sitters, to bring them along faster; the laggards will definitely catch up and come on board. But for investors, and if you want capital and finance to incentivize responsible innovation and responsible AI, there are three things that we believe investors should definitely do. First is on AI governance and board oversight: investors should ask whether there is clear board-level responsibility on AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain. Second is on implementation at the product and business model level, and we heard some examples just now: investors need to move beyond policy statements and ask companies how ethical principles are translated into product-level strategy, how high-risk use cases are identified, and whether there are internal mechanisms and controls to, you know, identify harms as they emerge.
And third is robust human rights impact assessments, and asking whether companies conduct AI-specific impact assessments. Are they publishing meaningful summaries? Are mitigation measures integrated into product cycles? And I think this is an area where we have seen a lot of gaps.
Great. Thanks, Namit. I wonder if we could actually take that one step further and get some input from the other members of the panel on what that looks like in practice. Because, of course, this is a panel that’s focused sort of on the company perspective. I think we have some of our real partners here on the civil society side. And as much as they understand that that conversation needs to happen, I think they sometimes find it difficult to make sure that the way those risks are assessed really brings in the voices and the experiences of people, and particularly people in the different contexts and different environments in which the companies’ products are being rolled out.
So those issues of stakeholder engagement, access, dialogue with the civil society side, it would be great to hear a little bit more about some of the lessons that you’ve all learned there. And I see you shaking your head. Please tell us from the NASSCOM perspective how you look at it.
Well, I think from an enterprise lens, right, when they are trying to implement responsible AI or trustworthy AI, I think the biggest issue is that there are different groups internally, right: the tech group, the business group, the legal and risk group, the finance group. And then all of them are working in silos, is what we feel, because the business wants the best, you know, for the business, the tech wants to put in the best technology, right, the risk group is very, you know, conservative, right, and finance always has an upper limit on what they want to spend. So that’s the issue. I think what helps is if all of them build a collaboration, which can be taken use case by use case.
I mean, the high-impact use case can have more investment, more focus, versus a low-risk one, right? I think that’s the first thing. The second thing, I think, from what we at NASSCOM are seeing, is that there are a lot of frameworks getting developed, right? Every country, every place you go, there’s a new framework. But the move from framework-heavy or concept-heavy to action is not happening. I think that’s a big gap. So if a technologist is trying to implement responsible governance, if a developer is trying to implement it, he will be lost in the frameworks. He doesn’t know what’s actionable, what he should do. So I think that’s one big need.
I think that’s what we are also driving. We are trying to drive a multi-organization-led approach, where we have organizations of all different sizes, and where we come together and start discussing, collaborating, implementing, right? I think that’s the second nugget. These are the two points. I know time is up.
No, that’s great. I mean, I think it shows that that collaborative effort is going to be super important, rather than a siloed approach, for so many practical reasons as well: companies can only respond to so many different frameworks, and what they need is the simple guidance and support required to actually implement at this stage. Hector, do you want to share quick comments from the company side about how you’re facing those challenges?
Yeah, two very quick examples on how we involve civil society and academia in this process. So our work really sits at the intersection of policy, research, and engineering groups. And to inform product development with our responsible AI principles, we regularly publish some internal policies. And it’s an iterative process with our research teams and with our product teams. As part of this process, we actually include academics who have specific domain expertise, or think tanks and civil society organizations which have been thinking very deeply about the deployment of one AI system, one AI model, in certain contexts. And so that really informs the product that we build from the inception. And I think the second example, which was raised as the big topic and governance challenge that we face, is the importance of refining AI evaluations.
That’s the constant thing. And in India, for instance, we’ve been working with some NGOs around a project named Samishka to build some community-led benchmarks, which is basically a safety tool that we include afterwards in the system construction, to really get datasets that are grounded in a community, with specific cultural aspects, specific contextual aspects. Because if you just translate safety tools from English to another language, you lose all the context for which the safety tool was built. And so that’s another example of an area where we need more cooperation between civil society and governments and companies: it’s really about how we build these safety tools beyond English norms, such as in India.
That’s great, and it takes work to do that. And the more we can spread it, since you’ve done some of the work and you know how to do some of it, and diffuse it amongst other companies that could learn from it, the better. That’s part of what we’re trying to do with B-Tech, but I think there’s a lot more to be done. Yuchil, do you want to come in?
Yes, I agree with his comment. On safety, we should work together. So that’s the reason why we make our annual report: because sharing our best practices and also sharing our struggles, what we had struggled with, we think that’s a very important thing. And, as my colleague mentioned, there’s an African proverb that says: if you want to go fast, go alone; if you want to go far, go together. Building a trustworthy and safe ecosystem is not a sprint. It’s a long journey, so we can go together.
It’s a long journey with a lot of sprints happening day to day, as far as I can tell, some of them here at the summit. But over to you, Alex.
So much sprinting. I think maybe just to pick up specifically on the stakeholder engagement piece. So a few things. One, I think it’s important for companies to have a programmatic approach to stakeholder engagement, so we need to have ways in which we’re regularly engaging with stakeholders in general, not just on a specific product question. So I would say first a programmatic approach, and then second, something that is more ad hoc: when we need to consult specifically on a product, we need to have a process and way to do that. The other thing is we have programs internally, like trusted tester programs, where we are working with third-party organizations to make sure that they have early access or pre-launch access to models or to a product in order to test it, so that we can identify potential risks or errors ahead of time and address them before we launch a product.
And then last, just to highlight something we do that is similar to others: our research team, called Impact Lab, which is part of the overall human rights programmatic work at the company, engages directly with communities in doing research to inform how we improve our products and what we develop. So that work is also happening through the research team specifically. And they recently launched something called the Amplify Initiative, an open-source app focused specifically on language inclusion that allows members of the public and communities to engage in the fine-tuning work around our language models. There’s a wealth of information and expertise out there that we should all be benefiting from, and because it’s open source we can also share it with others in industry.
That’s great to hear, and I’m sure more needs to be done on that front, but the amplification effect is so crucial. Look, we could probably go on talking all day, but I see the clock is ticking down. Fortunately, rather than having us try to draw the conclusions ourselves, we’ve welcomed another speaker to give us some concluding remarks and pull these pieces together. I’m very happy to invite Parvati Adani from Shardul Amarchand Mangaldas to help us think through some of these issues. Please.
Thank you for that. I think it’s partly easy and partly tough because there was a lot to absorb, but I think we can take a lot back from this conversation. But firstly, thank you: you’ve held the conversation beautifully. And thank you to UNESCO, the United Nations, NASSCOM, and everybody in this room who brought their knowledge and their conscience to this conversation. I want to talk a little bit about a conversation with a machine. As we were thinking about this topic and engaging on this issue, I wanted to share something that I feel resonates with a lot of what you’ve talked about. In preparation, I decided to ask the tools we’re talking about here a question that we avoid asking ourselves.
I asked the tool: do you have ethical limits? Do you understand the difference between what you can do and what you should do? And I’m going to quote verbatim. At a conscious level, the answer was: I don’t know, and neither does anybody else. The gap is a philosophical, uncomfortable position: think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me. I don’t have any consequences to bear. What came back, though unexpectedly thoughtful, showed us something about restraint, values, and what the system appears to have internalized. It acknowledged the difference between instruction and conscience, which is a lot of what we’ve talked about today.
And so, when we talk about this, we said human rights are not optional. We cannot ignore the impact on people and planet. We have to create incentives for good governance, so when a tool cannot understand this for itself, we have to do that job. What we have chosen in India is deliberate; the location of this conversation is not ceremonial. We have chosen innovation over restraint, and we have to think about whether that is the right choice: we allow innovation to happen in a safe place without feeling the weight of regulation. And I think we have a lot to learn from all of you who have been doing this for so long: the privacy, the safety, the impact on children and vulnerable groups. The question is whether the people we are talking about are going to be the subjects of this transformation or just its audience, just its object. An AI system that cannot understand a language, or a Hindi-speaking woman asking legal questions, is serving a narrow slice of what it calls a universal solution.
So any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident. It’s incomplete by design. I think the idea of an interoperable, flexible system is a forward-looking and inclusive one. A lot of what you mentioned, Alex, about governance inside the company is wonderful. And the voluntary commitments reflected in this summit are also fantastic. So now we come to the harder work. The ambition is real. The infrastructure exists. But we must ensure that we don’t leave with just good intentions and good ideas, but with action. Thank you.
Thank you, Parvati. It’s wonderful to hear those perspectives. We’re coming to the close of this session, so just a few parting words to all of you. I think we’ve done enough in this short conversation to really give a sense of how complex some of these issues are: the dynamics within companies, externally, and then globally across different geographies, and the challenges that are faced. But the reality is that all of us have a responsibility to engage on these issues, and we each have different roles. We’ve heard a bit about what some of the companies are doing, and a little about how we can challenge them and incentivize the actions they take in this space.
There are good practices, but they’re not universally applied, and they’re not available to some companies. There are companies that may want to engage in this, and we can help them to do so. NASSCOM and I have been discussing how we can simplify things and bring more actors into the fold of this conversation. And, of course, we’re in an environment where governments are looking at what they need to do to create responsible business practices and incentivize them as well. So I hope everybody walks out of the room thinking: what can I do to continue this conversation? How can I differentiate between companies that are thinking about these issues in a way that will deliver for myself, for my children, and for my future, in the ways that we want to see?
AI innovation will work if there’s trust, and if the companies delivering it actually invest in products that embody those values and uphold human dignity going forward. So thank you all so much for joining us. Thank you for fitting this into your schedule today, and enjoy the rest of the summit. Thank you.
Peggy Hicks
Speech speed
169 words per minute
Speech length
2469 words
Speech time
876 seconds
Practical safeguards and human‑rights due diligence
Explanation
Hicks calls for concrete safeguards that make AI work for all people, not just advanced economies, and stresses human‑rights due diligence as a pragmatic way to embed responsibility into corporate operations. She links these measures to the broader goal of responsible AI governance.
Evidence
“And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for.” [1]. “And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations.” [9].
Major discussion point
Major discussion point 1: Global standards and governance frameworks for responsible AI
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Incentives for responsible AI engagement
Explanation
Hicks argues that companies need incentives so that those engaging responsibly are rewarded, linking financial motivation to ethical behavior in AI development.
Evidence
“We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well.” [12].
Major discussion point
Major discussion point 1: Global standards and governance frameworks for responsible AI
Topics
Financial mechanisms | Human rights and the ethical dimensions of the information society
Tim Curtis
Speech speed
158 words per minute
Speech length
740 words
Speech time
280 seconds
UNESCO MOOC on ethics by design
Explanation
Curtis presents a UNESCO‑led massive open online course that embeds ethics by design into AI education, aiming to make AI ethics learning accessible worldwide and to translate principles into practical day‑to‑day work.
Evidence
“And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or a MOOC, as more commonly known, on the ethics of artificial intelligence.” [27]. “And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design.” [29]. “And the course will be delivered on Coursera with a very clear goal, to make AI ethics learning accessible to a wide global audience, and to make a practical… for day-to-day work.” [30].
Major discussion point
Major discussion point 2: Capacity building, education and operational tools
Topics
Capacity development | Artificial intelligence | Human rights and the ethical dimensions of the information society
Trust earned through design choices and safeguards
Explanation
Curtis emphasizes that trust in AI is built not by ambition but through concrete design choices, safeguards, and accountability, reinforcing the need for trustworthy AI systems.
Evidence
“At UNESCO we often return to a simple idea that trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability and that’s why the recommendation on the ethics of AI we believe is so important because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, that promote fairness and support inclusion.” [22].
Major discussion point
Major discussion point 1: Global standards and governance frameworks for responsible AI
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Rein Tammsaar
Speech speed
126 words per minute
Speech length
576 words
Speech time
273 seconds
Four priority pillars for responsible AI
Explanation
Tammsaar outlines four pillars that member states prioritize: trustworthy AI, closing capacity gaps, interoperability, and anchoring AI in human rights and international law. These pillars guide the development of global standards and cross‑border cooperation.
Evidence
“So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities.” [16]. “So first, they want safe, secure, and trustworthy AI systems, and the trust here, of course, is an absolute key word.” [18]. “Second, they want to close capacity gaps.” [19]. “So interoperability is absolutely key.” [21]. “And fourth, and that is, I think, quite actual here, they want AI anchored in human rights and international law.” [6].
Major discussion point
Major discussion point 1: Global standards and governance frameworks for responsible AI
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Standards turn principles into action
Explanation
Tammsaar stresses that standards are the mechanism that converts high‑level AI principles into concrete operational practices.
Evidence
“And standards turn principles into action.” [60].
Major discussion point
Major discussion point 1: Global standards and governance frameworks for responsible AI
Topics
Artificial intelligence | The enabling environment for digital development
Ankit Bose
Speech speed
179 words per minute
Speech length
758 words
Speech time
253 seconds
Breaking internal silos through multi‑organization collaboration
Explanation
Bose describes NASSCOM’s effort to move away from siloed approaches by fostering a multi‑organization, cross‑size collaboration that discusses, collaborates and implements responsible AI frameworks across the industry.
Evidence
“We are trying to drive a multi, you know, multi‑different organization‑led approach where we have all different sizes of organizations, where we come together and start discussing, collaborating, implementing, right?” [70]. “And then all of them are working in silos, what we feel like, because the business want the best, you know, for the business, the tech wants to put the best technology, right.” [72]. “The medium tier companies, they are really trying to understand how they grow their AI base at the same time build, you know, that services or product using right governance principles.” [55].
Major discussion point
Major discussion point 3: Corporate internal practices and accountability mechanisms
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Alex Walden
Speech speed
184 words per minute
Speech length
1023 words
Speech time
332 seconds
Google’s multilayered AI governance model
Explanation
Walden outlines Google’s three‑tier governance: model‑level requirements, application‑level guardrails with testing and evaluations, and post‑launch monitoring, complemented by trusted‑tester programs for early risk identification.
Evidence
“And then at the application layer, we have requirements for teams to be, again, doing testing, additional evaluations, setting additional guardrails, and focusing on what mitigations are going to be put in place for, again, things like Gemini before that gets launched.” [45]. “And then last, we have post-launch monitoring, because obviously we can do all the testing in the world, but once you’ve launched a product, you have to be able to monitor it.” [46]. “Before any product goes to market, there are model requirements.” [47]. “And that’s at the model level.” [48]. “The other thing is we have programs internally like trusted tester programs where we are working with third-party organizations to make sure that they have early access or pre-launch access to models or a product in order to test it so that we can identify potential risks or errors ahead of time and address them before we launch a product.” [54].
Major discussion point
Major discussion point 3: Corporate internal practices and accountability mechanisms
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Hector Duroir
Speech speed
150 words per minute
Speech length
891 words
Speech time
356 seconds
Microsoft Sensitive Use Case program and board oversight
Explanation
Duroir highlights Microsoft’s Sensitive Use Case program and stresses board inclusion as essential for internal risk‑management, translating high‑level AI principles into concrete governance actions.
Evidence
“One of the programs that I want to reference here is our Sensitive Use Case program.” [57]. “And I think all these principled approaches that evolve and refine, nuanced with AI capabilities, are actually so important and very useful signals for us to refine our own AI governance program within Microsoft.” [58]. “And I think the board inclusion is very important in this kind of internal risk management framework.” [63].
Major discussion point
Major discussion point 3: Corporate internal practices and accountability mechanisms
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Financial mechanisms
Community‑led multilingual safety benchmarks
Explanation
Duroir describes a community‑driven benchmark project in India that creates safety tools grounded in local cultural and linguistic contexts, preventing loss of meaning when safety tools are merely translated.
Evidence
“And in India, for instance, we’ve been working with some NGOs around a project named Samishka to build community-led benchmarks, which is basically a safety tool that we then include in the system construction, to get datasets that are grounded in a community’s specific cultural and contextual aspects, because if you just translate safety tools from English to another language, you lose all the context for which those safety tools were built.” [76].
Major discussion point
Major discussion point 4: Multi‑stakeholder engagement, inclusion and cultural context
Topics
Closing all digital divides | Human rights and the ethical dimensions of the information society | Artificial intelligence
Yuchil Kim
Speech speed
146 words per minute
Speech length
272 words
Speech time
111 seconds
LG AI‑powered data compliance system and annual accountability report
Explanation
Kim explains that LG has built an AI‑powered data‑compliance system and publishes an annual accountability report on AI ethics activities, providing concrete guidance and transparency for practitioners.
Evidence
“And we also made our AI-powered data compliance system. And also, as I will mention soon, we have an annual report about AI ethics activities … we published our annual accountability report on AI. So yesterday we released the third edition.” [23].
Major discussion point
Major discussion point 3: Corporate internal practices and accountability mechanisms
Topics
Data governance | Artificial intelligence | Capacity development
Namit Agarwal
Speech speed
175 words per minute
Speech length
681 words
Speech time
233 seconds
Investor‑driven board oversight and incentives for responsible AI
Explanation
Agarwal calls for investors to demand clear board‑level AI risk responsibility, aligned executive incentives, and governance across the AI value chain, linking capital to long‑term risk‑management incentives and robust human‑rights impact assessments.
Evidence
“investors should ask whether there is clear board-level responsibility on AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain” [7]. “And we believe responsible innovation requires incentives for long-term risk management, clear expectations that are tied to capital.” [8]. “And third is robust human rights impact assessments and asking whether companies conduct AI-specific impact assessments.” [2].
Major discussion point
Major discussion point 5: Incentivising responsible innovation through capital and benchmarks
Topics
Financial mechanisms | Human rights and the ethical dimensions of the information society | Artificial intelligence
Parvati Adani
Speech speed
133 words per minute
Speech length
564 words
Speech time
253 seconds
AI must recognise language, gender and informality; frameworks incomplete otherwise
Explanation
Adani stresses that any safe and trusted AI framework that fails to understand informality, language, and gender is fundamentally incomplete, warning that such gaps limit AI’s universal applicability.
Evidence
“So any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident.” [78]. “It’s incomplete by design.” [80]. “An AI system that cannot understand a language, or a Hindi-speaking woman asking legal questions, is serving a narrow slice of what it calls a universal solution.” [81].
Major discussion point
Major discussion point 4: Multi‑stakeholder engagement, inclusion and cultural context
Topics
Closing all digital divides | Human rights and the ethical dimensions of the information society | Artificial intelligence
Agreements
Agreement points
Human rights must be the foundational baseline for responsible AI development
Speakers
– Peggy Hicks
– Alex Walden
– Rein Tammsaar
Arguments
Global standards, collaborative public-private solutions, and rights-based approaches are essential for enabling responsible AI with meaningful real-world impact
Corporate responsibility must start with human rights as a baseline, building on UN Guiding Principles and operational AI principles
AI must be anchored in human rights and international law, including protecting vulnerable groups and ensuring oversight and accountability
Summary
All three speakers emphasize that human rights cannot be optional and must serve as the fundamental foundation for any responsible AI governance framework, with specific reference to UN Guiding Principles
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Multi-stakeholder collaboration and engagement is essential for effective AI governance
Speakers
– Peggy Hicks
– Tim Curtis
– Ankit Bose
– Alex Walden
– Hector Duroir
Arguments
Global standards, collaborative public-private solutions, and rights-based approaches are essential for enabling responsible AI with meaningful real-world impact
The course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made rather than after systems have already been deployed
Multi-stakeholder collaboration is needed to move from framework-heavy concepts to actionable implementation
Companies need programmatic stakeholder engagement approaches, including trusted tester programs and community research initiatives
Working with civil society and academia helps inform product development and build community-led benchmarks for safety
Summary
Speakers consistently emphasize the need for collaborative approaches involving governments, companies, civil society, and academia to effectively implement responsible AI practices
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Language inclusion and cultural context are critical for truly inclusive AI systems
Speakers
– Hector Duroir
– Parvati Adani
Arguments
AI systems must work beyond English norms and address multilingual capabilities to ensure true inclusion
AI systems that cannot understand diverse languages serve only a narrow slice of what they claim as universal solutions
Summary
Both speakers highlight that AI systems designed primarily for English-speaking contexts fail to serve diverse global populations and that true inclusion requires addressing multilingual and cultural considerations
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Transparency and knowledge sharing are vital for advancing responsible AI practices
Speakers
– Yuchil Kim
– Hector Duroir
– Alex Walden
Arguments
LG focuses on sharing best practices and struggles through annual reports, emphasizing collective progress over individual speed
Working with civil society and academia helps inform product development and build community-led benchmarks for safety
Companies need programmatic stakeholder engagement approaches, including trusted tester programs and community research initiatives
Summary
All three company representatives emphasize the importance of sharing experiences, both successes and challenges, to advance the entire ecosystem’s understanding and implementation of responsible AI
Topics
Artificial intelligence | Capacity development
Similar viewpoints
Both Google and Microsoft have developed comprehensive, multi-layered governance structures that include technical safeguards, executive oversight, and ongoing monitoring processes
Speakers
– Alex Walden
– Hector Duroir
Arguments
Multi-layered approach is needed including model requirements, application testing, executive review, and post-launch monitoring
Microsoft’s Office of Responsible AI includes a Sensitive Use Case program and AI ethics committee involving board-level oversight
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers identify a significant gap between having AI principles or frameworks and actually implementing them effectively in practice
Speakers
– Ankit Bose
– Namit Agarwal
Arguments
Multi-stakeholder collaboration is needed to move from framework-heavy concepts to actionable implementation
While 40% of companies have AI principles, only 10% meet governance expectations and none disclose human rights impact assessments
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both emphasize the importance of proactive, educational approaches to building AI ethics capacity and sharing knowledge for collective advancement
Speakers
– Tim Curtis
– Yuchil Kim
Arguments
The MOOC on AI ethics will make practical ethics learning accessible globally, focusing on ethics by design rather than waiting for problems to emerge
LG focuses on sharing best practices and struggles through annual reports, emphasizing collective progress over individual speed
Topics
Artificial intelligence | Capacity development
Unexpected consensus
The critical importance of addressing different company sizes and contexts
Speakers
– Ankit Bose
– Alex Walden
– Hector Duroir
Arguments
Different company sizes require different engagement approaches, with startups needing more support to avoid putting governance on the back burner
Companies need programmatic stakeholder engagement approaches, including trusted tester programs and community research initiatives
Responsible AI requires translating high-level principles into practice through dedicated governance structures and processes
Explanation
While representing different perspectives (industry association vs. large tech companies), there was unexpected consensus that one-size-fits-all approaches don’t work and that different organizational contexts require tailored governance approaches
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
The need for community-led and culturally appropriate AI safety measures
Speakers
– Hector Duroir
– Parvati Adani
– Alex Walden
Arguments
Safety tools need to be built with community-led benchmarks that reflect specific cultural and contextual aspects
Any framework that doesn’t understand informality, language, and gender is incomplete by design, not accident
Companies need programmatic stakeholder engagement approaches, including trusted tester programs and community research initiatives
Explanation
There was unexpected alignment between corporate representatives and civil society perspective on the critical need for community involvement and cultural sensitivity in AI development, moving beyond technical solutions to community-centered approaches
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The discussion revealed strong consensus on fundamental principles including human rights as baseline, need for multi-stakeholder collaboration, importance of language inclusion, and value of transparency. There was also alignment on practical challenges like the gap between frameworks and implementation, and the need for differentiated approaches for different organizational contexts.
Consensus level
High level of consensus on core principles and challenges, with speakers from different sectors (government, companies, civil society, international organizations) showing remarkable alignment on both philosophical foundations and practical implementation needs. This suggests a maturing field where stakeholders have moved beyond basic awareness to shared understanding of complex implementation challenges.
Differences
Different viewpoints
Approach to framework implementation – top-down vs. bottom-up
Speakers
– Ankit Bose
– Tim Curtis
– Rein Tammsaar
Arguments
Multi-stakeholder collaboration is needed to move from framework-heavy concepts to actionable implementation
The MOOC on AI ethics will make practical ethics learning accessible globally, focusing on ethics by design rather than waiting for problems to emerge
The Global Dialogue on AI Governance aims to create a platform for governments and stakeholders to exchange best practices and strengthen international cooperation
Summary
Bose emphasizes that there are too many frameworks being developed globally with a significant gap between concepts and actionable implementation, while Curtis and Tammsaar focus on creating additional global frameworks and educational initiatives
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Priority focus for different company sizes
Speakers
– Ankit Bose
– Alex Walden
– Hector Duroir
Arguments
Different company sizes require different engagement approaches, with startups needing more support to avoid putting governance on the back burner
Corporate responsibility must start with human rights as a baseline, building on UN Guiding Principles and operational AI principles
Responsible AI requires translating high-level principles into practice through dedicated governance structures and processes
Summary
Bose emphasizes the need for differentiated approaches based on company size, particularly supporting startups, while Walden and Duroir focus on comprehensive governance structures that may be more suitable for larger companies
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Unexpected differences
Framework proliferation vs. practical implementation
Speakers
– Ankit Bose
– Tim Curtis
– Rein Tammsaar
Arguments
Multi-stakeholder collaboration is needed to move from framework-heavy concepts to actionable implementation
The MOOC on AI ethics will make practical ethics learning accessible globally, focusing on ethics by design rather than waiting for problems to emerge
The Global Dialogue on AI Governance aims to create a platform for governments and stakeholders to exchange best practices and strengthen international cooperation
Explanation
Unexpectedly, while international organizations are launching new frameworks and initiatives, industry representatives are expressing frustration with framework proliferation and calling for more practical implementation guidance
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Overall assessment
Summary
The discussion revealed relatively low levels of direct disagreement, with most speakers aligned on fundamental principles of responsible AI development, human rights protection, and the need for stakeholder engagement. The main tensions emerged around implementation approaches rather than core objectives.
Disagreement level
Low to moderate disagreement level. The implications suggest a need for better coordination between framework development and practical implementation, as well as recognition that different stakeholders (large companies vs. startups, international organizations vs. industry associations) may require different approaches to achieve the same responsible AI objectives.
Partial agreements
Partial agreements
All speakers agree on the importance of stakeholder engagement and transparency, but differ on implementation methods – Walden focuses on trusted tester programs and research initiatives, Duroir emphasizes community-led benchmarks and cultural context, while Kim prioritizes transparency through annual reporting
Speakers
– Alex Walden
– Hector Duroir
– Yuchil Kim
Arguments
Companies need programmatic stakeholder engagement approaches, including trusted tester programs and community research initiatives
Working with civil society and academia helps inform product development and build community-led benchmarks for safety
LG focuses on sharing best practices and struggles through annual reports, emphasizing collective progress over individual speed
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Both speakers agree on the critical importance of language inclusion and moving beyond English-centric AI systems, but Adani takes a more critical stance arguing that exclusion is intentional design choice while Duroir focuses on technical solutions and partnerships
Speakers
– Hector Duroir
– Parvati Adani
Arguments
AI systems must work beyond English norms and address multilingual capabilities to ensure true inclusion
AI systems that cannot understand diverse languages serve only a narrow slice of what they claim as universal solutions
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
All speakers agree on the need for robust governance and accountability mechanisms, but differ on enforcement approaches – Agarwal emphasizes external pressure through capital and investor engagement, while Walden and Duroir focus on internal company processes and structures
Speakers
– Namit Agarwal
– Alex Walden
– Hector Duroir
Arguments
Capital can incentivize responsible innovation but requires clear expectations tied to consequences for weak governance
Multi-layered approach is needed including model requirements, application testing, executive review, and post-launch monitoring
Microsoft’s Office of Responsible AI includes a Sensitive Use Case program and AI ethics committee involving board-level oversight
Topics
Artificial intelligence | Financial mechanisms | Human rights and the ethical dimensions of the information society
Similar viewpoints
Both Google and Microsoft have developed comprehensive, multi-layered governance structures that include technical safeguards, executive oversight, and ongoing monitoring processes
Speakers
– Alex Walden
– Hector Duroir
Arguments
Multi-layered approach is needed including model requirements, application testing, executive review, and post-launch monitoring
Microsoft’s Office of Responsible AI includes a Sensitive Use Case program and AI ethics committee involving board-level oversight
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers identify a significant gap between having AI principles or frameworks and actually implementing them effectively in practice
Speakers
– Ankit Bose
– Namit Agarwal
Arguments
Multi-stakeholder collaboration is needed to move from framework-heavy concepts to actionable implementation
While 40% of companies have AI principles, only 10% meet governance expectations and none disclose human rights impact assessments
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both emphasize the importance of proactive, educational approaches to building AI ethics capacity and sharing knowledge for collective advancement
Speakers
– Tim Curtis
– Yuchil Kim
Arguments
The MOOC on AI ethics will make practical ethics learning accessible globally, focusing on ethics by design rather than waiting for problems to emerge
LG focuses on sharing best practices and struggles through annual reports, emphasizing collective progress over individual speed
Topics
Artificial intelligence | Capacity development
Takeaways
Key takeaways
Responsible AI requires a multi-layered approach combining global standards, corporate governance, stakeholder engagement, and accountability mechanisms
Human rights must serve as the foundational baseline for AI development, with companies implementing UN Guiding Principles and conducting human rights impact assessments
There is a significant gap between AI principles adoption (40% of companies) and actual governance implementation (only 10% meet expectations)
Different company sizes require differentiated approaches – startups need more support while large companies need robust internal processes including board-level oversight
True AI inclusion requires moving beyond English-centric models to embrace multilingual capabilities and community-led benchmarks that reflect cultural contexts
Collaboration across internal company silos (tech, business, legal, risk, finance) and external stakeholders (civil society, academia, communities) is essential for effective implementation
The transition from framework-heavy concepts to actionable implementation remains a critical challenge requiring practical tools and guidance
Trust in AI systems depends on design choices, safeguards, and accountability measures rather than technological ambition alone
Resolutions and action items
UNESCO and LG AI Research will launch a global MOOC on AI ethics on Coursera in the second half of the year to make practical ethics learning accessible worldwide
The Global Dialogue on AI Governance will convene in Geneva in July 2026 as a member state-driven process for exchanging best practices
Companies committed to publishing transparency reports and sharing both successes and struggles through annual accountability reports
NASSCOM will continue developing open assets and building capacity across different company sizes in the Indian tech ecosystem
Participants encouraged to engage with the Global Dialogue process and contribute experiences on what works and what doesn’t
Companies to implement multi-layered governance including model requirements, application testing, executive review, and post-launch monitoring
Unresolved issues
How to effectively scale responsible AI practices from large tech companies to startups and SMEs that lack resources
The challenge of translating multiple competing frameworks into simple, actionable guidance for developers and technologists
Ensuring meaningful stakeholder engagement beyond tokenistic consultation, particularly with marginalized communities
Addressing the fundamental tension between innovation speed and responsible development practices
Creating effective incentive structures that reward responsible AI practices in competitive markets
Developing robust evaluation methods for AI systems across diverse languages and cultural contexts
Bridging the gap between voluntary commitments and enforceable accountability mechanisms
Ensuring AI governance frameworks address informality, diverse languages, and gender considerations by design rather than as afterthoughts
Suggested compromises
Risk-based approach where high-impact use cases receive more investment and focus while low-risk applications have lighter governance requirements
Balancing innovation with restraint by allowing innovation in safe spaces without the full weight of regulation while maintaining essential safeguards
Interoperable and flexible governance systems that can work across borders while being practical for implementation
Collaborative approach between governments, industry, academia and civil society rather than siloed regulatory or self-regulatory approaches
Programmatic stakeholder engagement combined with ad-hoc consultation for specific products to balance ongoing dialogue with targeted input
Open-source sharing of safety tools and best practices to reduce duplication while allowing companies to maintain competitive advantages in other areas
Thought provoking comments
I don’t have any consequences to bear… The gap is a philosophical, uncomfortable position: think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me.
Speaker
Parvati Adani (quoting an AI system)
Reason
This quote from an AI system about its own ethical limitations is profoundly insightful because it crystallizes the fundamental challenge of AI governance – that the systems themselves cannot bear moral responsibility or understand consequences. It highlights the philosophical void at the heart of AI decision-making and emphasizes why human oversight and governance frameworks are essential.
Impact
This comment served as a powerful capstone to the entire discussion, reframing all the technical and policy conversations that preceded it around the fundamental question of moral agency. It shifted the conversation from ‘how do we make AI ethical’ to ‘how do we take responsibility for systems that cannot take responsibility for themselves.’
An AI system that cannot understand a language, or a Hindi-speaking woman asking legal questions, is serving a narrow slice of what it calls a universal solution. So any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident. It’s incomplete by design.
Speaker
Parvati Adani
Reason
This comment is deeply thought-provoking because it challenges the notion of ‘universal’ AI solutions by highlighting how exclusion of marginalized voices isn’t an oversight but a design choice. It connects technical limitations directly to social justice issues and questions the very premise of universal applicability in AI systems.
Impact
This observation elevated the discussion from technical implementation challenges to fundamental questions about whose voices are centered in AI development. It provided a critical lens through which to evaluate all the corporate practices discussed earlier, asking whether they truly serve diverse global populations.
But from the framework-heavy or the concept-heavy to action is not happening. I think that’s a big gap, right? So if a technologist is trying to implement responsible governance, right? A developer is trying to implement, right? He will be lost in the framework. He doesn’t know what’s actionable.
Speaker
Ankit Bose
Reason
This comment cuts to the heart of a critical implementation challenge – the proliferation of frameworks without practical guidance. It’s insightful because it identifies why well-intentioned efforts often fail: the gap between high-level principles and day-to-day technical decisions that developers must make.
Impact
This observation shifted the conversation from celebrating the existence of various frameworks to critically examining their practical utility. It prompted other panelists to provide more concrete examples of how they translate principles into actionable processes, making the discussion more grounded and practical.
Close to 40% of the companies have disclosures on AI principles, but just above 10% meet the expectations, global expectations on the governance aspect of it, and none of these 200 companies that we assess disclose their reports on human rights impact assessment.
Speaker
Namit Agarwal
Reason
These stark statistics are thought-provoking because they reveal the massive gap between stated intentions and actual implementation in corporate AI governance. The progression from 40% to 10% to 0% creates a powerful narrative about the difficulty of translating principles into practice.
Impact
This data-driven reality check fundamentally changed the tone of the discussion from celebrating progress to acknowledging the scale of work still needed. It provided concrete evidence for the implementation gaps that other speakers had alluded to and established the need for stronger accountability mechanisms.
I think a startup founder has to first build a business, a tech, a team, right? And also get funding and apparently focus on, you know, a lot of things around governance, right? I think in that whole journey that we have seen, they are putting it at a second or probably the side burner, which is something which we see is a complete no-no. If you do that, you know, when you’re building a product, right, you might miss when you’re scaling.
Speaker
Ankit Bose
Reason
This comment is insightful because it acknowledges the real-world pressures faced by startups while maintaining that responsible AI cannot be an afterthought. It highlights the tension between survival and responsibility that smaller companies face, which is often overlooked in discussions dominated by large tech companies.
Impact
This observation broadened the discussion beyond large corporations to include the challenges faced by smaller players in the ecosystem. It prompted consideration of how responsible AI frameworks need to be differentiated and accessible to companies with varying resources and capabilities.
Overall assessment
These key comments fundamentally shaped the discussion by moving it from abstract policy discussions to concrete implementation challenges and philosophical questions about responsibility and inclusion. Parvati Adani’s concluding remarks, particularly the AI’s own words about lacking consequences, provided a profound philosophical anchor that recontextualized all the technical and policy discussions. The data from Namit Agarwal served as a crucial reality check, while Ankit Bose’s observations about framework proliferation and startup challenges grounded the conversation in practical implementation difficulties. Together, these comments created a more nuanced understanding of AI governance that encompasses not just what should be done, but the real barriers to doing it and the fundamental questions about who bears responsibility when the systems themselves cannot.
Follow-up questions
How can frameworks be made more actionable for developers and technologists implementing responsible AI governance?
Speaker
Ankit Bose
Explanation
There’s a gap between framework-heavy concepts and practical implementation – developers get lost in frameworks and don’t know what actionable steps to take
How can AI evaluations and safety tools be refined beyond English norms to include cultural and contextual aspects of different communities?
Speaker
Hector Duroir
Explanation
Simply translating safety tools from English to other languages loses important cultural and contextual elements that are crucial for effective AI governance
How can companies conduct meaningful AI-specific human rights impact assessments and publish actionable summaries?
Speaker
Namit Agarwal
Explanation
Current research shows none of the 200 assessed companies disclose human rights impact assessments, indicating a significant gap in accountability and transparency
How can stakeholder engagement be improved to better include civil society voices and community perspectives in AI development?
Speaker
Peggy Hicks
Explanation
There’s a need to ensure risk assessments truly incorporate voices and experiences of people in different contexts where AI products are deployed
How can governance approaches work across borders while remaining practical and avoiding fragmentation?
Speaker
Rein Tammsaar
Explanation
Fragmentation raises costs and weakens trust, so interoperability is essential for effective global AI governance
How can capacity gaps be closed to help developing countries participate fully in the AI economy?
Speaker
Rein Tammsaar
Explanation
Many developing countries need infrastructure, skills, and compute resources to participate meaningfully in AI development and governance
How can AI frameworks ensure they are not incomplete by design when it comes to informality, language, and gender considerations?
Speaker
Parvati Adani
Explanation
AI systems that cannot understand diverse languages or serve marginalized groups are serving only a narrow slice of what should be universal solutions
How can startups and SMEs be better supported to implement responsible AI practices from the beginning rather than treating governance as secondary?
Speaker
Ankit Bose
Explanation
Startup founders are fighting day-to-day challenges and often put governance on the back burner, which can cause problems when scaling
How can good practices in responsible AI be more universally applied and made available to companies that want to engage but lack resources?
Speaker
Peggy Hicks
Explanation
There are good practices existing but they’re not universally applied, and there’s a need to help more companies access and implement these practices
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.