Multistakeholder Partnerships for Thriving AI Ecosystems
20 Feb 2026 15:00h - 16:00h
Summary
The panel examined how artificial intelligence can accelerate sustainable development but must be deployed responsibly to avoid deepening existing inequities [1-5]. UNDP warned that AI’s rapid evolution is currently inequitable and, without firm responsible-use commitments, the AI equity gap could worsen [3-5]. To confront this, the Hamburg Sustainability Conference introduced the Hamburg Declaration on AI for Sustainable Development, establishing principles and seeking concrete commitments from governments, businesses and civil society [10-12].
Bärbel Kofler emphasized that governments need legal frameworks and global coordination so AI benefits all citizens, highlighting a “power gap” evident in the concentration of venture capital (≈17 % reaching the Global South) and data-center capacity (≈0.1 % located in the Global South) [38-44][49-58][61-63]. She called for investment in open-source tools, vocational training and university curricula to bring small- and medium-sized enterprises into AI development [75-80]. Arundhati Bhattacharya argued that technology must be democratized, describing Salesforce’s 1 % profit, product and time pledge and its large-scale skilling programme that has created 3.9 million certified “Trailblazers” in India [92-99][100-103]. She cited India’s financial-inclusion journey (UIDAI’s biometric IDs and the UPI system) as examples of digital tools that channel subsidies directly to citizens and lower borrowing costs [108-118][119-126].
Nakul Jain positioned Wadwani AI Global as a convener that bridges technology providers with governments, illustrating a successful oral-reading-fluency pilot in Gujarat schools and a tuberculosis-diagnosis project in health, both of which required early-stage policy and evaluation partnerships [146-152][155-168][170-176]. Under the Hamburg Declaration, Germany’s ministry reported exceeding its target of training 160 000 people (reaching 190 000), releasing 12 AI building blocks (now 15), delivering 30 AI diagnostics (now 55) and supporting projects in Kenya, Cambodia and India on satellite data, cervical-cancer detection and multilingual datasets [236-244][245-254]. Tata Consultancy Services added that India’s responsible-AI task force has spawned AI Centres of Excellence, a “Trusty Platform” for responsible-AI evaluation, and initiatives to green AI through resource-aware computing and a quantum-valley partnership [199-207][210-218][270-274].
Looking ahead, panelists identified gaps such as a global repository or marketplace for AI solutions and regional evaluation hubs, while urging private firms to adopt self-regulation and expand skilling missions to sustain collaboration [292-300][280-288][276-279]. All participants agreed that AI progress cannot be achieved in silos; multistakeholder collective action, anchored by the Hamburg Declaration, is essential to translate high-level principles into measurable impact [376-379].
Keypoints
Major discussion points
– Responsible AI governance is essential to avoid widening equity gaps.
The UNDP opener stresses AI’s development is “not equitable” and warns of an “AI equity gap” that could worsen inequality if not responsibly managed [1-5]. Bärbel Kofler expands on this, describing a “power gap” manifested in venture-capital flows (≈ 17 % to the Global South) and data-center capacity (≈ 0.1 % in the Global South) that must be closed through government-led frameworks, energy policies, and open-source investment [49-60][61-63].
– Multi-stakeholder partnerships are the backbone of inclusive AI ecosystems.
Kofler emphasizes that AI “depends on multistakeholder engagement” and must involve civil society, governments, and the private sector [38-40]. Nakul Jain notes that technology is the easy part; the real challenge is “institutional mechanisms” and embedding solutions from day one through government, technical partners, and field-level capacity-building [146-152][158-166]. The moderator’s question to Arundhati also frames the issue as “how do we distribute responsibility between different stakeholders?” [84-87].
– The Hamburg AI Declaration provides concrete, measurable commitments.
Since its adoption, the German ministry has trained 190 000 people against a target of 160 000, and has exceeded delivery targets for AI building blocks (12 pledged, 15 released) and AI diagnostics (30 pledged, 55 delivered), alongside open datasets [238-246]. Ongoing projects illustrate impact: satellite-data services for Kenyan farmers, cervical-cancer detection in Cambodia, and multilingual datasets for India [250-257].
– Private-sector actors contribute by democratizing technology, skilling, and scaling solutions.
Arundhati describes Salesforce’s “1 % profit, 1 % product, 1 % time” model, massive up-skilling of 3.9 million “Trailblazers,” and the role of digital ID and UPI in India’s financial-inclusion success [92-99][110-118]. Wadwani AI positions itself as a “convener of technology for social good,” bridging startups, governments, and NGOs to move pilots into field deployment [173-175].
– Future challenges and opportunities: shared repositories, AI assurance, greening AI, and appropriate model choices.
Jain calls for a global marketplace of solutions and regional evaluation hubs to standardise “AI assurance” across countries [292-301]. Dr. Sachin Loda (TCS) highlights the need for sensing infrastructure, compute equity, and “greening of AI” through resource-aware designs [210-218]. In low-resource settings, traditional ML often outperforms large language models, prompting a shift toward small, task-specific models [363-365].
Overall purpose / goal
The panel convened to examine how AI can be harnessed responsibly for sustainable development, identify governance gaps, showcase concrete actions (e.g., the Hamburg Declaration), and chart collaborative pathways that involve governments, international bodies, the private sector, academia, and civil society.
Overall tone
The discussion begins with a formal, cautionary tone about risks and inequities, moves into a collaborative and solution-focused mood as participants share concrete examples and successes, and ends on a pragmatic, forward-looking note emphasizing collective action, concrete commitments, and the need for new partnership mechanisms. Throughout, the tone remains constructive and optimistic, shifting from problem-identification to actionable recommendations.
Speakers
– Robert Opp
– Role/Title: Representative and Chief Digital Officer, United Nations Development Programme (UNDP)[S10][S11]
– Area of Expertise: AI for sustainable development, digital transformation, multistakeholder partnerships
– Bärbel Kofler
– Role/Title: Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development; Member of the Bundestag[S13][S14][S15]
– Area of Expertise: Government policy, international development, AI governance, sustainable development
– Arundhati Bhattacharya
– Role/Title: Chairperson and CEO, Salesforce South Asia; Former Chairperson, State Bank of India[S16][S17]
– Area of Expertise: Digital transformation, financial inclusion, AI ethics, technology for inclusive growth
– Nakul Jain
– Role/Title: CEO and Managing Director, Wadwani AI Global[S18][S19]
– Area of Expertise: AI solutions for underserved communities, impact-focused AI deployment, multi-stakeholder partnerships
– Dr. Sachin Loda (listed as Speaker 1)
– Role/Title: Chief Scientist and Head of Research, Tata Consultancy Services (TCS)
– Area of Expertise: Responsible AI, AI evaluation platforms, greening AI, industry-academia collaborations
– Audience Member 1
– Role/Title: Founder, Corral Inc[S4]
– Area of Expertise: AI/ML technologies (inferred from question on LLM vs. traditional ML)
– Audience Member 2
– Role/Title: Audience participant (part of a German group; specific affiliation not specified)[S1][S2][S3]
– Area of Expertise: (not specified)
– Audience Member 3
– Role/Title: Undergraduate student, Economics, University of Delhi (as stated in transcript)
– Area of Expertise: Economics, interest in AI’s impact on business models
Additional speakers:
– (None – all speakers identified in the provided list appear in the transcript.)
The session opened with a stark warning from the UN Development Programme that, while artificial intelligence (AI) can be a powerful catalyst for sustainable development, its current trajectory is “not equitable” and risks widening an “AI equity gap” that could exacerbate existing inequalities if responsible-use commitments are not put in place [1-5]. This framing set the tone for a discussion centred on how to harness AI’s benefits without deepening social and economic divides.
The moderator, Robert Opp, introduced the panel and linked the conversation to the Hamburg Sustainability Conference, hosted annually, which produced the Hamburg Declaration on AI for Sustainable Development – a set of principles and measurable commitments endorsed by governments, private firms and civil-society actors [1-5]. The declaration was presented as a concrete mechanism to move from high-level rhetoric to actionable steps.
Bärbel Kofler, Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, stressed that AI’s future depends on “multistakeholder engagement” involving governments, civil society and the private sector [38-40]. She identified a “power gap” rather than an “innovation gap”, noting that only about 17 % of venture-capital flows go to the Global South despite it housing over 90 % of the world’s population, and that data-centre capacity in the South is a mere 0.1 % of global capacity [55-58]. To close this gap, Kofler called for legal frameworks, coordinated global policies, investment in energy-efficient infrastructure, open-source tools and large-scale vocational and university training programmes that bring small- and medium-sized enterprises into AI development [61-63][75-80].
Arundhati Bhattacharya, Chairperson and CEO of Salesforce South Asia, argued that technology must be “democratised” to have impact [92-93]. Salesforce’s “1 % profit, 1 % product, 1 % time” pledge directs resources to the non-profit sector, while its “Trailblazer” programme has up-skilled 3.9 million users in India [96-104]. She illustrated the transformative power of digital tools through India’s financial-inclusion journey: biometric IDs from UIDAI and the Unified Payments Interface (UPI) enabled direct subsidy transfers, turning dormant bank accounts into cash-flow assets that reduced borrowing costs for low-income borrowers [108-118][121-126]. Bhattacharya stressed that policymakers must create ethical infrastructure, protect data privacy and ensure that technology is offered “in an ethical manner” [124-129]; she also emphasized that large corporates should self-regulate to avoid heavy-handed external regulation, a stance that contrasts with Kofler’s government-led approach [280-283].
Nakul Jain, CEO of Wadwani AI Global, positioned his organisation as a “convener of technology for social good”, bridging startups, governments and NGOs to ensure AI solutions move from labs to the field [173-175]. He argued that building the technology is “the easiest part” and that success hinges on institutional mechanisms that embed solutions from day one within existing government programmes [146-152][158-166]. Jain cited two pilots: an oral-reading-fluency tool for Gujarat schools co-developed with the state government and technical partners, and a tuberculosis-diagnosis project that involved the Indian Council of Medical Research to define evaluation criteria from the outset [153-168][170-176]. He further noted that in low-resource settings traditional machine-learning models often outperform large language models (LLMs) because of bandwidth and device constraints [350-354].
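The point about small, task-specific models can be illustrated with a minimal sketch. Everything in it is hypothetical: the intent labels and training phrases are invented for illustration and are not taken from the panel. A few dozen lines of plain Python implement a multinomial Naive Bayes classifier that runs offline on modest hardware, the kind of “traditional ML” the panelists contrast with bandwidth-hungry LLMs:

```python
from collections import Counter, defaultdict
import math

# Toy training data (hypothetical): short advisory queries labeled by
# intent -- a narrow task where a tiny on-device model can suffice.
TRAIN = [
    ("crop has yellow leaves", "pest"),
    ("white insects on cotton", "pest"),
    ("when to sow wheat", "calendar"),
    ("best month for rice planting", "calendar"),
]

def train_nb(examples):
    """Fit a multinomial Naive Bayes model (word counts per label)."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of examples
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Pick the label with the highest log-probability,
    using add-one smoothing for unseen words."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, lc in label_counts.items():
        lp = math.log(lc / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(TRAIN)
```

The model is a handful of dictionaries rather than billions of parameters, so inference needs no GPU, no network round-trip, and no special runtime, which is the practical argument the panel makes for low-resource deployments.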
Dr. Sachin Loda, Chief Scientist and Head of Research at Tata Consultancy Services, expanded the discussion to data infrastructure and compute equity. He highlighted the need for extensive sensing networks, such as low-cost air-pollution sensors, to generate high-quality, high-velocity data for AI analytics [192-196][197-199]. He noted India’s responsible-AI task force, AI Centres of Excellence (e.g., the collaboration with IIT Kanpur), and initiatives to “green” AI through resource-aware model design and a “quantum valley” partnership with IBM and the Andhra government to explore quantum-hardware solutions [200-207][210-218][270-274].
The panel then returned to concrete progress under the Hamburg Declaration. Kofler reported that Germany’s ministry exceeded its training target, delivering 190 000 AI-related trainings against a commitment of 160 000 [236-241]. The ministry also released 12 AI building blocks for climate action (now 15) [242-246] and 30 AI diagnostics (now 55) [247-250], along with open datasets [251-254], with pilots supporting satellite-data services for Kenyan farmers, cervical-cancer detection in Cambodia, and multilingual datasets for India [242-246][247-250][251-254].
Areas of consensus emerged across the discussion. All speakers affirmed that responsible AI governance is essential to prevent widening equity gaps [1-5][49-58][124-129][149-164][200-203]; that multistakeholder partnerships, linking government, private firms, academia and civil society, are the backbone of inclusive AI ecosystems [38-40][146-152][280-288][250-257]; and that large-scale capacity-building programmes are critical, as evidenced by Germany’s training numbers, Salesforce’s Trailblazer community and the teacher-training components of Wadwani’s pilots [236-241][96-104][165-166][75-80]. Participants also agreed on the importance of open-source data and tools to democratise AI [80-81][192-199][95-99].
The panel highlighted several nuanced disagreements. Kofler emphasised a government-led approach-legal frameworks, open-source investment and state-funded training-to close the power gap [55-58][61-63][75-80]. Bhattacharya advocated for corporate self-regulation, arguing that large firms must proactively adopt ethical standards to avoid heavy-handed external regulation [280-283]. Jain positioned his organisation as a neutral convener, proposing a global repository of AI solutions and a marketplace of shared playbooks, governance frameworks and talent pools to facilitate cross-border deployment [292-298]; he also called for regional AI-assurance hubs to standardise evaluation criteria across countries [298-301].
During the Q&A, an audience member asked how to bring together fragmented, competing initiatives. Kofler and Jain responded that coordinated frameworks, shared repositories and “hand-holding” mechanisms are needed to avoid duplication and to align disparate efforts around a common purpose [300-306]. This echoed the earlier call for a global marketplace and highlighted the urgency of consolidating initiatives.
Looking forward, the panel identified further opportunities. Jain reiterated the need for a global repository and marketplace, as well as regional evaluation hubs, to enable startups to scale tools from India to Ethiopia [292-298]. Bhattacharya highlighted the need for expanded skilling missions through industry bodies such as NASSCOM and AICTE [280-288]. Dr. Loda stressed the importance of building affordable sensing infrastructure, repurposing legacy hardware and developing quantum-valley capabilities to narrow the compute gap between the Global North and South [210-218]. All agreed that “technology is the easiest part” and that lasting impact depends on embedding AI solutions within existing institutional frameworks and providing ongoing support for end-users [146-152][158-166].
In closing, Opp reminded the audience that the Hamburg Declaration is a living instrument that invites further endorsement via a QR-code link [376-380], and he summarised the overarching message: AI can accelerate sustainable development, but only through coordinated, multistakeholder action, robust governance, and measurable commitments can the risk of deepening inequities be avoided [376-380]. The discussion outlines a pragmatic roadmap for responsible AI that balances optimism with concrete, accountable actions.
From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development and adoption of AI in that there’s no question and we are very convinced that AI can have such a powerful impact on sustainable development in a positive way. It can help us really close some of those gaps that we see in persistent development challenges. However, we know that the way that it is evolving now is not equitable. And if we do not have a kind of commitment to responsible use of AI, we fear that the AI equity gap could actually get worse or inequality could get worse. And more so, it also can be harmful in some cases if we’re not applying AI responsibly.
So we want to get into then the point about… How do we actually address some of those challenges? How do we get some of the… measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well. In terms of what are our commitments to making sure that we are deploying and using AI in a responsible way and building AI ecosystems. So this does tie in with the process that has been a feature of previous AI summits as well as the Hamburg Sustainability Conference that is hosted every year in the city of Hamburg and sponsored by the government of Germany with other partners like UNDP.
And as part of that process, we have introduced a declaration on AI for sustainable development, the Hamburg Declaration. And so we’ll talk about that a little bit in the context of this conversation after exploring some of the kind of opening thoughts around multistakeholder partnerships in general. So, I’m going to start off and introduce our panelists and then we’ll go through a couple of rounds of questions. Be thinking about your questions as well because I think that we’ll have time to go to you as well for a Q&A and so happy to get your thoughts and get some interaction with the panel as part of this session. So, with that, I have a very distinguished panel to introduce to you and it’s a real pleasure to have them here today.
I would like to introduce, sitting on my left, Dr. Bärbel Kofler, who is the Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development. She’s a long-standing champion of human rights and development, a member of the Bundestag since 2004, and she plays a key role in shaping Germany’s global development policy. We are also honored to have Ms. Arundhati Bhattacharya, who’s the chairperson and CEO of Salesforce South Asia, a transformative leader with over four decades of experience. She made history as the first woman chairperson of the State Bank of India, where she led one of the country’s most significant digital transformation journeys. She oversees a huge part of Salesforce business in this region and beyond, shaping how technology partnerships drive innovation, skills and inclusive growth.
We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission-driven technology leader. He works to advance AI solutions for underserved communities across the Global South. And his team has done 40 deployments of AI reaching more than 150 million people in use cases such as healthcare, agriculture, and education. And we’re also joined by Dr. Sachin Loda, who’s the chief scientist and head of research at Tata Consultancy Services. He’s a leader in cybersecurity and privacy research. He heads Tata Consultancy Services’ work on trustworthy AI, quantum resilience, cloud de-risking, and privacy by design. And that is all about translating cutting-edge research into real-world, award-winning innovation.
Please join me in welcoming our panelists. Okay, so let us start. Dr. Kofler, I’d like to start with you, please. You know, you represent the government perspective here on this panel. What do you think are the distinct roles of government, but also some of the other players like… private sector, civil society and international organizations: the roles that they play in building AI ecosystems that are both innovative and inclusive?
I used this one in the meantime. Thank you for the question and thank you for having me also as a representative of the government. But I’m very happy to be on a panel with scientists, with academia and with private sector because at the end of the day, artificial intelligence and how we shape really the future with that is depending on multistakeholder engagement. There has to be an engagement from all parts of society and also including civil society because it’s a broad outreach in our societies we have to undertake. So that’s why I think it’s very important to discuss it. You were pointing out challenges and advantages of artificial intelligence. And I think there is the role.
I think there is the role of governments coming in. because, yes, there are tremendous advantages of artificial intelligence. In my sphere of politics, we discuss about how we can use that for detect diseases better through AI, cancer in various topics. We discuss about climate change and how we can make prediction applicable for everybody on the ground. We discuss about administration and the advantages in administration. AI could offer. But at the end of the day, all those hopes will come into reality if we shape that in a framework, in a legal framework, in a framework we coordinate with partners around the globe and within society, which makes all those advantages accessible for everybody, for all parts of society, for any citizen to address its government through AI conversation in its own language, for example, to smallholder farmers if they get the chance to make use of it, to doctors in remote areas who can really then detect with the new technology diseases or whatever.
So we have to close the gap. And I would say it’s not an innovation gap, it’s a power gap. Because innovative people are existing around the globe. Ideas are created around the globe in all spheres of society. So I’ve been here on a panel or at a session, this brilliant young Indian startup, yeah, how would I call it, people, I don’t know, I forgot the English word, sorry. This brilliant people who are creating startups. And that is possible all over the globe, but the environment has to be there. And if you look how venture capital, for example, is distributed, and we know that, I think, it’s 17 % of venture capital only is in those parts of the world who are presenting more than 90 % of the people.
So we feel a power gap at the end of the day. If you look about where you have data center resources, it’s even worse. Global North, it’s almost all the data centers available capacity and only 0.1 % is somewhere in the Global South. So there is a big gap and we have to overcome that gap. We have to close that gap. And I think that’s something governments have to do also and where they utilize to create an environment. If it’s coming to energy consumption, if it’s coming to an enabling environment for research, if it’s coming to skill training for everybody. I learned in one of the sessions that we should change our mindset and everybody should get a job.
And I think that’s something that we should do. But to do so, we have to invest in the mindset and in the skills of all users. We have to work on vocational training and university training to make research, education, and the needs of even small and medium-sized enterprises heard and linked together to bring small and medium-sized enterprises into the position to participate also from that new technology.
Not only the big players. So all those things need framework and need governance. And we have to make sure that the outcome, the research, the results, the data sets are available for everybody. That’s why we are also investing in open source. It’s something we very much are aligned with the Indian ideas because if we don’t do so, the first thing, the advantages are
Okay, now it’s working. So framework and governance, important factors to have in the multi -stakeholder partnerships. Arundhati, maybe let me turn to you with a very similar question, but taking this from the private sector perspective. How do you think the responsibility should be distributed between the different stakeholders? And what does industry or private sector companies, tech companies in this case, uniquely bring to the table? And where is partnership essential?
Thank you and good morning, every one of you. So I have been very fortunate in the fact that I have worked with two organizations. The one that I worked with initially was the State Bank of India, which is the largest bank in India and also the bank that really and truly spearheaded the financial inclusion program of the current government, which is the PM Jan Dhan Yojana. And while doing that, I realized that any improvement in technology, if it is not really democratized, then it doesn’t really have an impact. If you want it to have an impact, you need to democratize technology. That’s the first thing. The second thing is currently I work at Salesforce. And Salesforce, again, it was set up with the intention that business is a platform for change.
We have what we call a one-by-one-by-one mission, which is we contribute 1 % of our profits or equity, 1 % of our products and 1 % of our time to the community, to the non-profit community. Of course, in India, it is 2 % of the profits because that’s what the law demands. And while doing that, we ensure that the non-profit sector not only has access to our products but also knows how to use them and use them to the best of their abilities. Along with that, we also do a lot of work in skilling various people. So we call people who basically are trained in the Salesforce technology as trailblazers. And India has 3.9 million of them, the second largest after the US.
And this is a community that has been literally nurtured by us. So it’s not only a question of ensuring that we are getting our products out to the community, to the non-profit sector, to the various enterprises where we obviously market our products to them and then help them implement it in their organizations. But we are doing a lot of work on the skilling front as well because we feel that is something that’s very, very essential. Without that, India will not really be able to take the advantage of all that technology brings to it. But one thing is very clear in our heads. And that is that a populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play.
We tried this financial inclusion initiative for 15 long years before we actually were successful in 2014. And the reason for that was technology. Because by that time, the unique identification authority had given us our biometric marker to enable us to get the unique identification number, and that helped us do the KYC. And second, the spread of the mobile networks enabled us to actually approach the 600,000 villages which is India; before this we never had that connectivity. Now, it was because of this that these people then got accounts. 97 % of these accounts were without any balances, and a lot of people asked us, was this merely a ploy, was this merely some kind of action without any meaning? But very soon we found that those accounts started getting money, because the government could then target the subsidies they were giving to below-poverty-line people directly to the citizens, without having to go through any middlemen. In fact, some politician at some point of time had said that only 13 paise out of every one rupee of subsidy actually goes to the person who needs it. Here, when the money started flowing directly into the account, the whole of the rupee was going to the person who needed the subsidy.
Now, over and above that, we then had the UPI, which is the Unified Payments Interface. And what did that do? That enabled even small customers and small shopkeepers to start actually taking money in a digital form. And because they were taking money in the digital form, these accounts of theirs, which had been opened, started showing a cash flow. And the moment an account has cash flow, bankers then become interested in lending you money because they know that there is a cash flow to sort of back it up, to enable you to repay it. So here is a person who was taking a loan of, say, 2,000 rupees, paying 10 % a month, suddenly becomes eligible for that 2,000 rupees at 7 % a year.
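The borrowing-cost arithmetic behind this anecdote can be sketched as follows. This is a minimal illustration using the speaker's round figures (2,000 rupees, 10 % per month, 7 % per year); whether informal interest compounds monthly is an assumption, so both treatments are shown:

```python
# Annual interest on a 2,000-rupee loan: informal lender at 10% per
# month versus a bank at 7% per year (figures as quoted on the panel).
principal = 2000.0

informal_simple = principal * 0.10 * 12                  # flat 10%/month
informal_compound = principal * ((1 + 0.10) ** 12 - 1)   # if it compounds
bank_interest = principal * 0.07                         # 7%/year

print(round(informal_simple))   # 2400 rupees of interest per year
print(round(bank_interest))     # 140 rupees of interest per year
```

Even without compounding, the informal loan costs more per year in interest than the principal itself, while the bank loan costs 140 rupees, which is the order-of-magnitude difference the speaker is pointing at.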
Imagine the difference it can make in the life of that person. And this is all on account of technology. AI is not going to be anything different. AI is going to solve a lot of problems which otherwise in a populous nation like ours cannot be solved. And in order to solve it, we therefore need to understand people are eager to take it. They will take the technology because it improves their lives. But in order for them to take it, it’s up to the policy makers to make those interventions to ensure that they are not being taken for a ride. That whatever is being offered to them is being offered in an ethical manner. That we are creating the right kind of infrastructure to enable them to actually be able to access it.
That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are there, who are conducting all of this, they have a far bigger responsibility. Because adoption is not something which is a problem. Adoption is not a problem in India. Adoption will happen as soon as people realize that it’s helping them in their regular lives. Adoption is not going to be a problem; there’s not going to be any pessimism on adoption. And if you go to the floor where the expo is, you’ll actually see how interested people are in finding out how it’s going to improve their lives. But it is people who make the policies, people who make the infrastructure available, people who actually make the initiatives available, who need to take the responsibilities. And companies like ours, we need to take the responsibility as to how we skill the people, enable them to understand what is good for them and what is not good for them, make them understand that there is both good and bad and they need to choose the better; they should not be taken for a ride. So all of us have a role to play, and I think we need to be aware of those roles and we need to play them well. Thank you. Thank you so much.
So actually those two answers complement each other so nicely because, as you were saying, Dr. Kofler, there’s the governance and the framework, and this is key to the Indian experience as well. Government sets down the framework, the rails, but then private sector has such an important role in scaling, in innovating, in helping people be successful and interact with that. So I’m going to turn now to Nakul. And a question now because, you know, your Wadwani AI Global has done a lot of work in the space of multi-stakeholder partnerships and the rollout of AI for specific use cases. From your experience, where have multi-stakeholder partnerships already demonstrated tangible impact in terms of advancing responsible AI for development?
And what are the conditions that helped ensure these collaborations translated into sustained impact, rather than just remaining isolated?
Thank you for that question, Rob. Also, thank you for having me here. I hope this is working. All right, okay. Thank you, everyone, for being here. My answer will be from my experience of deploying some of these solutions in the field: what has worked for us and what has not. What we have realized is that building technology, at least for an application organization like ours, is the easiest part of the entire spectrum. It is everything around it that becomes tedious to get done, which essentially means there is a need for a multi-stakeholder ecosystem that can bring in expertise to take care of everything that sits around the technology.
The ecosystem that works is the one that ensures from day one that we are able to do everything we need to do. What will be the institutional mechanism for getting something done? How will the solution, the use case, be institutionalized in the ecosystem? How will it be embedded within the framework we are speaking about, within the department or ministry we are talking about, within the problem area and among the people who will be using it? That cannot be an afterthought; it has to come in from day one. And I will give you a few examples of solutions we were able to deploy at scale and what helped them work.
So we built a solution in education around oral reading fluency, essentially to assess students on their reading abilities. The technology part, like I said, was not very difficult to do. But this is not something that Wadwani AI Global could have done alone. This required collaboration with government, ownership from government, and, as Arundhati also mentioned, it required policy makers to own this problem. We, as an organization, came in as a facilitator. But the government of Gujarat was the one who led this entire initiative, who made sure data was available, who helped us find the right partners who could then be leveraged to annotate some of this data. The other thing: rather than trying to create this solution in silos, we also started thinking about where it would get embedded.
Tomorrow, if this has to be used, and there are tons and tons of applications already being pitched to government for education, pitched to teachers, how do you ensure that they actually get adopted? One way to do that was to understand how government is currently running its programs. So we started working with the government and their technical partner on this together. Rather than thinking of it as a solution that we created and handing over an app, we started figuring out how whatever we develop can plug into the existing system in classrooms, wherever they were, and then how to establish a monitoring mechanism so that the government is fully capable not just of following up on adoption, but also of figuring out whether there is a genuine impact from the technology.
All of this was only possible because we had this collaboration between the government, the technical partner the government was working with, us as the AI partner, and the partners working with the government at school level to help them programmatically, to help teachers build capacity, to help teachers understand it, and also to counsel teachers, because there is a constant fear of AI replacing jobs. So there is handholding required at the field level, and as a tech organization we could not have done that alone. You need that partner as well. Moving to a slightly different example, around healthcare: we have done a lot of product work in tuberculosis. Some things remain common: the government partnership, data collection through them, the programmatic partnership. But the additional support we got was from ICMR: how can we start thinking about evaluation from day one? Health being a much more sensitive area, directly impacting lives, when our models give decisions, how do you ensure that evaluation and the success criteria are not an afterthought? So you don't just work with the government; you also work with the agency that will eventually evaluate it, so that from the first day you know what parameters and what results you should be optimizing for. These are some of the examples, Robert, where technology, government and ecosystem partners played a role, came together, and delivered something.
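The oral-reading-fluency use case described above reduces, at its core, to comparing a speech-recognition transcript of the child's reading against the reference passage. A minimal sketch of the standard words-correct-per-minute (WCPM) metric such assessment tools typically compute; this is illustrative only, not Wadwani AI Global's actual pipeline, and it assumes the ASR step happens upstream:

```python
import difflib

def wcpm(reference: str, asr_transcript: str, duration_seconds: float) -> float:
    """Words-correct-per-minute: count reference words the child read
    correctly (via longest-matching-subsequence alignment, which tolerates
    skipped or inserted words) and scale to one minute."""
    ref = reference.lower().split()
    hyp = asr_transcript.lower().split()
    matcher = difflib.SequenceMatcher(a=ref, b=hyp)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct * 60.0 / duration_seconds

# Child skipped one word of an 8-word passage, read in 12 seconds:
print(wcpm("the cat sat on the mat at home",
           "the cat sat on mat at home", 12.0))  # -> 35.0
```

A real system would add ASR confidence handling and per-grade benchmarks, but the alignment-then-normalize shape stays the same.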
Thank you, Nakul. And just a quick follow-up on that: how would you describe the space that Wadwani AI Global occupies? You're not government, you're not exactly private sector; you're kind of an integrator, right?
Yeah, absolutely. What we would like to call ourselves is a convener of technology, especially for social good. Our role essentially is to ensure there is impact, leveraging artificial intelligence. Who all need to participate is something we work out with partners. We work with governments, whether that requires capacity building, advisory to the government, or actual product development; we help government with everything. Because what we have also realized is that while government is well-intentioned, there is a need for a lot of hand-holding. They have priorities, and artificial intelligence, while it has now become a buzzword, still needs a lot of hand-holding to make sure that some of these solutions don't just end up living in labs and in rooms like this, but actually go to the field.
So our role becomes even more important: to make sure that everyone who needs to participate comes along, participates, and ensures that there is an actual impact in the field. Thank you.
And the reason I asked that is because, going over to Sachin: there are the roles of private-sector innovation, and there is the government role. International organizations like UNDP also bring a multilateral and global perspective on some of these things. But companies like Tata Consultancy Services also serve kind of in the middle there as well. And I'd love your perspective on that same question: where are you seeing multi-stakeholder partnerships resulting in tangible results?
…To contextualize: it's not just true for health; it's true for any domain you look at. And yes, data is fragmented and siloed; that's true even within an enterprise, and of course at a bigger scale. I personally believe that bringing this data together is important to create an open data ecosystem, but I believe the problem lies somewhere else: in the sensing infrastructure that you need to put together to have high-quality, high-volume, high-velocity data. Take the example of air pollution monitoring. You could probably do a much better job if you had a sufficient number of sensors deployed across the region of interest, and they would probably be large in number.
You will have to think about how to make them cheaper, faster, better. And having done that, it will also depend on how you analyze, process and derive insights; that's where AI comes in. So I think government can play a very big role in catalyzing this: putting together a great sensing infrastructure could be thought of as part of the digital public infrastructure. On top of that, private and public entities could innovate; academics and the inventor community could innovate. So what are we doing here? The Indian government actually has been very proactive on the overall responsible AI mission.
I was part of the task force on responsible AI set up by the principal scientific advisor to the Indian government, Professor K. VijayRaghavan. This happened about eight years back, and we tried to study the implications of AI, and its responsible facet, in the Indian context. Following that, a lot of mission projects were launched, and as part of that there are now different AI Centres of Excellence supported by government. We are part of the AI CoE for sustainability in collaboration with IIT Kanpur, and it's going live. In fact, some of you might have attended a session by the IIT Kanpur team yesterday on this very topic, where we are looking at a lot of interesting problems in the overall sustainability domain.
So that's just data. And if you could just mention your other two points super quickly? Yeah, yeah. I'll just mention compute, which is, again, a very important facet. There is a big gap between what is available in the Global North versus the Global South. I have just two ideas to share. One, we should think about how to repurpose legacy hardware; a lot of high-tech companies have innovated along those lines. It's not necessary to have the latest hardware; you can do a lot of innovation around what exists. Two, of course, we should explore the new hardware that is becoming available. TCS, IBM, and the Andhra Pradesh government have come together to create a quantum valley in India, which could open ways for that.
So I think these are some quick remarks I thought I’d make.
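The sensing-infrastructure point above, many cheap sensors feeding an analysis layer, can be illustrated minimally. The sketch below pools noisy readings per district and flags hotspots; the district names, readings, and the 60 µg/m³ threshold are invented for the example, not from any real monitoring programme:

```python
from collections import defaultdict
from statistics import median

def hotspots(readings, threshold=60.0):
    """readings: list of (district, pm25) tuples from field sensors.
    Using the per-district median makes the result robust to the
    occasional faulty cheap sensor."""
    by_district = defaultdict(list)
    for district, pm25 in readings:
        by_district[district].append(pm25)
    return sorted(d for d, vals in by_district.items()
                  if median(vals) > threshold)

data = [("north", 72), ("north", 68), ("north", 300),  # one faulty sensor
        ("south", 35), ("south", 41)]
print(hotspots(data))  # -> ['north']
```

The design choice here mirrors the panel's point: the AI/analytics layer is simple once a dense, redundant sensing layer exists to feed it.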
Great. I mean, the depth is incredible, right? You're talking about these very concrete collaborations that are making this a possibility. Okay, I think we'll go straight into a quicker second round, and then we'll take some comments from you. I want to bridge this now: we've been talking about multi-stakeholder partnerships and how different stakeholders have different roles to play. But what's clear is that we also need collective action. So I wanted to think about how we move towards overall collective action in these spaces for responsible use of AI, something that gives it that global framing.
And as I mentioned at the beginning, one of the things we have been doing as part of the Hamburg Sustainability Conference, with the Hamburg Declaration on responsible AI for sustainable development, is really looking at this space of collective action. A couple of years ago, we did the first Hamburg Declaration. About 50 organizations have endorsed it, and it has a number of principles around responsible use of AI. Dr. Kofler, I'm going to turn back to you with a question about that process: since the adoption of the Hamburg AI Declaration, what tangible progress have we seen, and what concrete actions do you think are now required to move from high-level principles to sustained and scaled impact?
Well, thank you for mentioning concrete action, because that's actually what it is all about. We came up with that idea at the Hamburg Sustainability Conference, and with that paper, not because we wanted to create another paper or invite people to another conference. We really wanted to come up with commitments every stakeholder has to undertake when they sign that memorandum. My ministry also signed it, and we took on concrete, tangible, measurable commitments we have to fulfill. I would love to give you a few examples; for the sake of time, I'm trying to be very brief.
For example, one of the commitments was training people. We committed to train 160,000 people within one year. We have fulfilled that already, and it makes me very proud that we have already trained 190,000. That is the first step we committed to. We committed to open up 12 AI building blocks, especially for using AI for climate action; we are now at 15. And we committed to 30 AI diagnostics; we have achieved 55. That's what it's all about. We are not signing something we don't want to do and then spending another year applauding ourselves for having signed it. We want to come up with really concrete steps, and that's what my ministry has done so far.
There are examples all over the world where we are working with partners. In Kenya, for example, it's about analyzing satellite data and making it available for farmers. In Cambodia, it's about medicine, detecting cervical cancer. And we are working with the Indian government; your government is doing a fantastic project on the languages of the subcontinent, analyzing, as we were just discussing, data sets for the many languages that have to be included in the overall AI framework. We are supporting Bhashini with nine data sets; we supported the collection. That's what it's all about at the end of the day. And that will be my last sentence.
I invite everybody in the room, and maybe you can spread the thought, to join the Hamburg Declaration, because we started it and it is continuing. Government should be part of it, the private sector should be part, civil society should be part, academia should be part, and they should come up with concrete, measurable, tangible commitments which can then be fulfilled.
I'm glad you mentioned that, because the QR code you see on the screen here gives you more information. If you would like to endorse the Hamburg Declaration, just go to the website there. We're getting short on time for audience interaction, but Sachin, really quickly: TCS endorsed the Hamburg Declaration last year. What progress have you seen since then?
Yeah, so I'll just quickly point to some activities we have been doing since then. As I said, we are part of responsible AI deliberations in India and elsewhere, and we have launched a big program around it, where Carnegie Mellon is now our academic partner; we are also talking to some Indian academics. As you rightly said, these are very hard problems, and they require significant collaboration across sectors. We are also building our own technology to evaluate, calibrate, and help AI engineers build more responsible AI; it's called the Trusty Platform, and we can share more details offline. We are also very keen on the greening of AI, which requires a lot of resource-aware AI work. So this is again very significant, and that's something we are on.
Fantastic. And you just mentioned greening AI; that's one of the key principles of the declaration. Arundhati and Nakul, I'll turn back to you now for some quick thoughts before we go to Q&A. As we look ahead, we've talked about the Hamburg Declaration as one platform, but where do you see the strongest opportunities for new types of partnerships going forward, things that maybe don't exist yet but should? What do you think is the next wave of this?
Well, you know, as an organization, Salesforce has had an office of humane and ethical use of technology since 2014. I think most large corporates have to have a self-regulatory feature like this. It's important because unless you self-regulate, you will have regulators coming down with a very heavy hand at some point, and that is not something you want if you want to keep innovating. So that's something all of us should look at. Having said that, the other way we can take part with stakeholders is to continue to co-innovate with many of our customers, those who are interested, and to take forward our participation in many of the forums that exist.
For instance, the National Skilling Mission, the skilling mission undertaken by NASSCOM, the IT industry body. We also do internships in partnership with AICTE, the All India Council for Technical Education. There are very many apex bodies willing to give us space for innovation, for the private sector to come in and be part of the initiative. The advantage of going through them is that they have far better reach to the colleges and communities, which enables us to bring our products directly to them. So if we take all of these together, that really makes the story much better and much stronger.
Thank you very much. Nakul, same question to you.
So in the interest of time, I'll try to be quick. There are two things we feel are missing in the ecosystem. One is a global repository of solutions, which we are trying to work on, and this is not just a DPG or DPI platform; it can include the private sector too. If I am a startup in India and I have built a good tool, how do I sell it and deploy it in Ethiopia? It is very difficult right now. So is there an opportunity to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance and ecosystem frameworks, and a talent pool of people who worked on these solutions that could be leveraged?
So I think that's one thing that is currently missing. The other is around AI assurance. Is there an opportunity to create regional evaluation hubs for a given region? I see a lot of organizations struggle within a country to get solutions evaluated. Now imagine a situation where we are trying to take a solution from one country to another, but there are no clear guidelines on how the evaluation yardsticks differ between countries. So that could be another area of collaboration among the various organizations who could set this up.
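The regional-evaluation-hub proposal amounts to scoring the same model checkpoint against region-specific acceptance criteria before deployment. A hypothetical sketch of that gate; the metric names, regions, and thresholds below are invented for illustration, not taken from any existing hub:

```python
# Region-specific acceptance criteria a hub might maintain (invented values).
REGIONAL_CRITERIA = {
    "india":    {"sensitivity": 0.90, "specificity": 0.80},
    "ethiopia": {"sensitivity": 0.85, "specificity": 0.75},
}

def approve(region, metrics):
    """Return (approved, failures): which criteria the model's measured
    metrics fall below for the target region."""
    failures = [name for name, floor in REGIONAL_CRITERIA[region].items()
                if metrics.get(name, 0.0) < floor]
    return (not failures, failures)

# A model that passes in Ethiopia may still fail India's stricter bar:
print(approve("india", {"sensitivity": 0.92, "specificity": 0.78}))
# -> (False, ['specificity'])
```

The point of the sketch is that "evaluation yardsticks differ from one country to another" becomes an explicit, auditable data structure rather than an ad hoc judgment.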
Great ideas from you both on very concrete things. Okay, we have time for a couple of questions. All right, we've got a lot of hands. We're going to take a couple of questions at the same time, and then we'll go back for one more round from the panelists for answers. I think we'll start here, and then I saw another hand here. We'll try to do three quick ones. Okay: one, two, three.
My question is to Mr. Robert and Nakul. What I want to ask is: in comparison to LLMs, your ChatGPTs and Geminis, what is the value being added versus traditional ML, I mean your classical improvements over functions, etc.?
Okay. Then there was a question over here; there are two questions there. Yes, the gentleman in blue, and then beside him.
My question is to Mr. Wadwani. You talked about bringing people together. There are fragmented initiatives, there are fragmented enabling services, but they are competing. So how do we bring them together for a shared purpose and to create a bigger impact?
Okay. Hi, sir. Hi, ma'am. I'm a current undergraduate student of economics at the University of Delhi. My question is to you, Ms. Bhattacharya. Having served in both the private and the public sector, how do you see all this debate that SaaS will be dead as a result of all this, and that the context window, the context you have, will actually be the most beneficial? One might say that the valuations of the companies in this space will double. So in that context, how do you create value for small enterprises?
For TCS, for example, the valuation is around $100 billion. Cursor, which is a very new company, has a valuation of $30 billion. So an employee count of 300 is creating that much value, while 600,000 employees create the other. In that context, will the democratization of software and tech actually happen, or is it just big tech that benefits? There will obviously be benefits for people here and there, but in the larger context?
Okay, so some very technical questions here. Arundhati, do you want to start with that one first, about how to create value for small companies?
Yeah, so over here, I don't think we are talking merely about the value something creates on a particular day. And by the way, those valuations keep sliding up and down. Nobody really knows who the winner is until the race is over, and the race is far from over. To that extent, I would ask you not to be too pressured or too influenced by things like "SaaS is dead" and "the entire valuation will come to only these few companies." At the end of the day, if there are no users, what valuation will a company have? You need users. Now, who creates users?
Users are created by many, many processes and methods, and just having an LLM is not going to give you all of that. When a company actually works, there are workflows, there are governance rules, there are auditability requirements, there is observability, there is everything in there. Therefore, to take one particular development by some particular company and make that kind of statement is, I think, a little ahead of its time at this point. Things will shake themselves out. Having said that, it is also true that whether it be TCS, Salesforce, State Bank of India or any other company, the ways in which we do our work are going to change.
The ways in which people take in technology, and how that influences their lives, will change. Companies that are amenable to the change, companies that take advantage of the change, are the ones that are going to sustain and grow in value. But my sincere request to you, because you're a youngster: don't look at market cap while trying to create your company. Do the right things and the market cap will follow. It's not market cap that makes you want to work; it's your work and your satisfaction, and being able to build something that really and truly helps people improve their standard of living and the way they live in this world. That is what should drive you, not the market cap numbers.
And you wanted to add quickly to that?
Yeah, I just want to add to that. Think of an LLM as something that is world-wise. Okay? And that's a great advancement of AI. But you need something that is industry-wise, company-wise, context-wise, task-wise, right? We are far from that, and there are lots of challenges. Just before this, Arundhati and I were discussing the kind of data silos that exist even within an enterprise. So we are looking at AI, and agentic AI in particular, as something that is going to bring more connectedness within the enterprise. There is a significant amount of work to be done. But imagine this combined intelligence, where different agents coordinating and working together will probably help us get more.
So the whole being more than the sum of its parts is a very likely scenario. There are challenges to getting it done, and the nature of work may change, but there is plenty of work coming at us. LLMs have solved just part of the problem.
Nakul, there were a couple of questions: how do we bring people together for a shared purpose, and the specific one on LLMs versus ML. Do you want to take those? Sorry, we've got just a few minutes before we have to close the session.
So, LLMs versus traditional ML: in our experience, in a lot of places, specifically because we work in the impact sector, a lot of the work we do is for ground-level users, and resources are a big challenge. You're talking about situations where mobile phones are very basic, internet is a challenge, and adoption is an issue. In those scenarios, we have often found that traditional ML models have worked much better to serve the purpose, and large language models have not; in a lot of cases they are not even deployable. So there are genuine technical issues, because of infrastructure and low-resource settings, which have not allowed LLMs to reach where we would want them to reach.
Second, in such cases, small language models, or small AI, is what we have been moving towards, to serve a very specific purpose rather than trying to give general intelligence to these users in the field. That's the very quick answer; I could have gone on. To answer your question, sir: I don't think we are trying to eliminate competition. The way to think about this is that there are different ways organizations can collaborate. Some could collaborate based on geographical synergies, some based on expertise synergies, and some based on which part of the life cycle of the entire AI deployment they can collaborate on.
And absolutely, there will be consortia that are created, and they will still continue to compete with each other. That's a good thing, to ensure there is also quality in the… but the idea is to foster collaboration where those synergies are possible; at least we start with that.
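The "small AI over LLMs in low-resource settings" point above can be made concrete: a task-specific classifier a few kilobytes in size can run offline on a basic phone, with no connectivity or GPU. The sketch below is illustrative only; the training examples, labels and the classification task are invented, and this is not Wadwani AI's actual model:

```python
import math
from collections import Counter, defaultdict

class TinyNB:
    """A tiny Naive Bayes text classifier: pure Python, no network,
    footprint measured in kilobytes rather than gigabytes."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def log_score(label):
            counts = self.word_counts[label]
            total = sum(counts.values()) + len(self.vocab)
            s = math.log(self.label_counts[label])
            for w in text.lower().split():
                s += math.log((counts[w] + 1) / total)  # Laplace smoothing
            return s
        return max(self.label_counts, key=log_score)

# Invented example: routing farmer queries to the right helpline topic.
clf = TinyNB().fit(
    ["crop leaves turning yellow", "pest on cotton boll",
     "yellow spots on leaf", "loan for seeds", "subsidy payment delayed"],
    ["agri", "agri", "agri", "finance", "finance"])
print(clf.predict("yellow leaf disease"))  # -> agri
```

The trade-off is exactly the one described in the answer: no general intelligence, but a narrow task served reliably within the infrastructure the users actually have.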
Thank you. Sorry, sir, we're going to have to close the panel because we have literally ten seconds left on the counter. I just want to say thank you very much to our panelists. We have gone from the highest-level government frameworks to questions about the competitive landscape and how small companies can prosper, but the through line to all of this is that it doesn't happen in silos. It cannot happen with only one stakeholder of society. This has to be multistakeholder, and we have to commit to collective action. So I again refer you to the QR code if you're interested in the Hamburg Declaration; it has more information on how to endorse it. Please join me in thanking our panelists for the excellent insights.
And thanks to all of you, and a good rest of the summit. Thank you.
“Robert Opp acted as the moderator of the panel and is identified as a representative from the UN Development Programme.”
The knowledge base lists Robert Opp as the moderator and UNDP representative for the discussion [S15] and records his opening remarks on multistakeholder involvement [S96].
“Bärbel Kofler, Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, emphasized multistakeholder engagement for AI.”
Her role and focus on responsible AI are confirmed by entries describing her as Parliamentary State Secretary and a stakeholder in AI governance [S13] and [S14].
“Only about 17 % of venture‑capital flows go to the Global South, highlighting a financing gap for AI development.”
The knowledge base notes a financing gap with less than 1 % of venture-capital funding directed to Africa, providing related context but a different figure [S102].
“The UN Development Programme warned that AI’s current trajectory is “not equitable” and could widen an “AI equity gap”.”
General statements in the knowledge base acknowledge that AI development carries significant risks, supporting the concern about inequitable outcomes [S90].
“Arundhati Bhattacharya, Chairperson and CEO of Salesforce South Asia, advocated for the democratization of technology and inclusive AI.”
Her involvement in building inclusive AI societies is recorded in the knowledge base, confirming her leadership role and focus on democratization [S105].
“Calls for large‑scale vocational and university training programmes to up‑skill SMEs for AI development.”
The knowledge base discusses skilling and education in AI, highlighting opportunities and challenges in India, which adds nuance to the reported training initiatives [S95].
The panel displayed strong consensus on four pillars: (1) the necessity of robust, multi‑stakeholder governance frameworks to prevent AI‑driven inequities; (2) the centrality of collaborative partnerships across government, industry, academia and civil society; (3) the urgency of large‑scale training and capacity‑building programmes; and (4) the importance of open‑source data and shared resources to democratize AI. These convergences suggest a solid foundation for collective action and signal that future policy and programme design can build on shared commitments such as the Hamburg Declaration.
High consensus on core themes – most speakers reiterated the same points with concrete examples, indicating a broadly unified stance that can drive coordinated implementation of responsible AI for sustainable development.
The panel shows substantial consensus that AI can drive sustainable development and that multi‑stakeholder collaboration is essential. However, there are clear disagreements on who should lead the governance effort, the preferred mechanisms for closing the AI equity gap, the technical suitability of LLMs in low‑resource settings, and whether open‑source or proprietary approaches best achieve democratisation.
Moderate overall – while all participants share the overarching goal of responsible, inclusive AI, they diverge on strategic pathways and technical choices. These differences could affect policy design, funding allocations and partnership models, potentially slowing coordinated action unless reconciled.
The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract optimism about AI to a nuanced examination of systemic inequities, concrete implementation challenges, and actionable solutions. Bärbel Kofler’s framing of the ‘power gap’ set the problem definition; Arundhati Bhattacharya’s financial-inclusion story illustrated the transformative potential when policy and technology align; Nakul Jain’s emphasis on ecosystem design and his marketplace proposal provided a roadmap for scaling impact; and the Hamburg Declaration updates supplied proof that collective commitments can translate into measurable outcomes. Together, these comments reshaped the conversation, prompting participants to focus on governance, infrastructure, and cross-sector collaboration rather than technology alone, and they anchored the panel’s recommendations in real-world examples and actionable next steps.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.