Multistakeholder Partnerships for Thriving AI Ecosystems

20 Feb 2026 15:00h - 16:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence can accelerate sustainable development but must be deployed responsibly to avoid deepening existing inequities [1-5]. UNDP warned that AI’s rapid evolution is currently inequitable and, without firm responsible-use commitments, the AI equity gap could worsen [3-5]. To confront this, the Hamburg Sustainability Conference introduced the Hamburg Declaration on AI for Sustainable Development, establishing principles and seeking concrete commitments from governments, businesses and civil society [10-12].


Bärbel Kofler emphasized that governments need legal frameworks and global coordination so AI benefits all citizens, highlighting a “power gap” evident in the concentration of venture capital (≈17 % in the Global South) and data-center capacity (≈0.1 % in the Global South) [38-44][49-58][61-63]. She called for investment in open-source tools, vocational training and university curricula to bring small and medium-sized enterprises into AI development [75-80]. Arundhati Bhattacharya argued that technology must be democratized, describing Salesforce’s 1 % profit, product and time pledge and its large-scale skilling programme that has created 3.9 million certified “Trailblazers” in India [92-99][100-103]. She cited India’s financial-inclusion journey (UIDAI’s biometric IDs and the UPI system) as an example of how digital tools can channel subsidies directly to citizens and lower borrowing costs [108-118][119-126].


Nakul Jain positioned Wadwani AI Global as a convener that bridges technology providers with governments, citing a successful oral-reading-fluency pilot in Gujarat schools and a health-focused tuberculosis project, both of which required early-stage policy and evaluation partnerships [146-152][155-168][170-176]. Under the Hamburg Declaration, Germany’s ministry reported exceeding its target of training 160 000 people (reaching 190 000), releasing 12 AI building blocks (now 15), delivering 30 AI diagnostics (now 55) and supporting projects in Kenya, Cambodia and India on satellite data, cervical-cancer detection and multilingual datasets [236-244][245-254]. Tata Consultancy Services added that India’s responsible-AI task force has spawned AI Centres of Excellence, a “Trusty Platform” for responsible-AI evaluation, and initiatives to green AI through resource-aware computing and a quantum-valley partnership [199-207][210-218][270-274].


Looking ahead, panelists identified gaps such as a global repository or marketplace for AI solutions and regional evaluation hubs, while urging private firms to adopt self-regulation and expand skilling missions to sustain collaboration [292-300][280-288][276-279]. All participants agreed that AI progress cannot be achieved in silos; multistakeholder collective action, anchored by the Hamburg Declaration, is essential to translate high-level principles into measurable impact [376-379].


Keypoints


Major discussion points


Responsible AI governance is essential to avoid widening equity gaps.


The UNDP opener stresses AI’s development is “not equitable” and warns of an “AI equity gap” that could worsen inequality if not responsibly managed [1-5]. Bärbel Kofler expands on this, describing a “power gap” manifested in venture-capital flows (≈ 17 % to the Global South) and data-center capacity (≈ 0.1 % in the Global South) that must be closed through government-led frameworks, energy policies, and open-source investment [49-60][61-63].


Multi-stakeholder partnerships are the backbone of inclusive AI ecosystems.


Kofler emphasizes that AI “depends on multistakeholder engagement” and must involve civil society, governments, and the private sector [38-40]. Nakul Jain notes that technology is the easy part; the real challenge is “institutional mechanisms” and embedding solutions from day one through government, technical partners, and field-level capacity-building [146-152][158-166]. The moderator’s question to Arundhati also frames the issue as “how do we distribute responsibility between different stakeholders?” [84-87].


The Hamburg AI Declaration provides concrete, measurable commitments.


Since its adoption, the German ministry has exceeded its commitment to train 160 000 people (delivering 190 000 trainings) and surpassed its targets for AI building blocks (12 committed, 15 delivered) and AI diagnostics (30 committed, 55 delivered) [238-246]. Ongoing projects illustrate impact: satellite-data services for Kenyan farmers, cervical-cancer detection in Cambodia, and multilingual datasets for India [250-257].


Private-sector actors contribute by democratizing technology, skilling, and scaling solutions.


Arundhati describes Salesforce’s “1 % profit, 1 % product, 1 % time” model, massive up-skilling of 3.9 million “Trailblazers,” and the role of digital ID and UPI in India’s financial-inclusion success [92-99][110-118]. Wadwani AI positions itself as a “convener of technology for social good,” bridging startups, governments, and NGOs to move pilots into field deployment [173-175].


Future challenges and opportunities: shared repositories, AI assurance, greening AI, and appropriate model choices.


Jain calls for a global marketplace of solutions and regional evaluation hubs to standardise “AI assurance” across countries [292-301]. Dr. Sachin Loda (TCS) highlights the need for sensing infrastructure, compute equity, and “greening of AI” through resource-aware designs [210-218]. In low-resource settings, traditional ML often outperforms large language models, prompting a shift toward small, task-specific models [363-365].


Overall purpose / goal


The panel convened to examine how AI can be harnessed responsibly for sustainable development, identify governance gaps, showcase concrete actions (e.g., the Hamburg Declaration), and chart collaborative pathways that involve governments, international bodies, the private sector, academia, and civil society.


Overall tone


The discussion begins with a formal, cautionary tone about risks and inequities, moves into a collaborative and solution-focused mood as participants share concrete examples and successes, and ends on a pragmatic, forward-looking note emphasizing collective action, concrete commitments, and the need for new partnership mechanisms. Throughout, the tone remains constructive and optimistic, shifting from problem-identification to actionable recommendations.


Speakers

Robert Opp


– Role/Title: Representative and Chief Digital Officer, United Nations Development Programme (UNDP)[S10][S11]


– Area of Expertise: AI for sustainable development, digital transformation, multistakeholder partnerships


Bärbel Kofler


– Role/Title: Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development; Member of the Bundestag[S13][S14][S15]


– Area of Expertise: Government policy, international development, AI governance, sustainable development


Arundhati Bhattacharya


– Role/Title: Chairperson and CEO, Salesforce South Asia; Former Chairperson, State Bank of India[S16][S17]


– Area of Expertise: Digital transformation, financial inclusion, AI ethics, technology for inclusive growth


Nakul Jain


– Role/Title: CEO and Managing Director, Wadwani AI Global[S18][S19]


– Area of Expertise: AI solutions for underserved communities, impact-focused AI deployment, multi-stakeholder partnerships


Speaker 1


– Role/Title: Dr. Sachin Loda, Chief Scientist and Head of Research, Tata Consultancy Services (TCS) (named in the session introduction)


– Area of Expertise: Responsible AI, AI evaluation platforms, greening AI, industry-academia collaborations


Audience Member 1


– Role/Title: Founder, Corral Inc[S4]


– Area of Expertise: AI/ML technologies (inferred from question on LLM vs. traditional ML)


Audience Member 2


– Role/Title: Audience participant (part of a German group; specific affiliation not specified)[S1][S2][S3]


– Area of Expertise: (not specified)


Audience Member 3


– Role/Title: Undergraduate student, Economics, University of Delhi (as stated in transcript)


– Area of Expertise: Economics, interest in AI’s impact on business models


Additional speakers:


(None – all speakers identified in the provided list appear in the transcript.)


Full session report: comprehensive analysis and detailed insights

The session opened with a stark warning from the UN Development Programme that, while artificial intelligence (AI) can be a powerful catalyst for sustainable development, its current trajectory is “not equitable” and risks widening an “AI equity gap” that could exacerbate existing inequalities if responsible-use commitments are not put in place [1-5]. This framing set the tone for a discussion centred on how to harness AI’s benefits without deepening social and economic divides.


The moderator, Robert Opp, introduced the panel and linked the conversation to the Hamburg Sustainability Conference, hosted annually in Hamburg, which produced the Hamburg Declaration on AI for Sustainable Development – a set of principles and measurable commitments endorsed by governments, private firms and civil-society actors [1-5]. The declaration was presented as a concrete mechanism to move from high-level rhetoric to actionable steps.


Bärbel Kofler, Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, stressed that AI’s future depends on “multistakeholder engagement” involving governments, civil society and the private sector [38-40]. She identified a “power gap” rather than an “innovation gap”, noting that only about 17 % of venture-capital flows go to the Global South despite it housing over 90 % of the world’s population, and that data-centre capacity in the South is a mere 0.1 % of global capacity [55-58]. To close this gap, Kofler called for legal frameworks, coordinated global policies, investment in energy-efficient infrastructure, open-source tools and large-scale vocational and university training programmes that bring small- and medium-sized enterprises into AI development [61-63][75-80].


Arundhati Bhattacharya, Chairperson and CEO of Salesforce South Asia, argued that technology must be “democratised” to have impact [92-93]. Salesforce’s “1 % profit, 1 % product, 1 % time” pledge directs resources to the non-profit sector, while its “Trailblazer” programme has up-skilled 3.9 million users in India [96-104]. She illustrated the transformative power of digital tools through India’s financial-inclusion journey: biometric IDs from UIDAI and the Universal Payment Interface (UPI) enabled direct subsidy transfers, turning dormant bank accounts into cash-flow assets that reduced borrowing costs for low-income borrowers [108-118][121-126]. Bhattacharya stressed that policymakers must create ethical infrastructure, protect data privacy and ensure that technology is offered “in an ethical manner” [124-129]; she also emphasized that large corporates should self-regulate to avoid heavy-handed external regulation, a stance that contrasts with Kofler’s government-led approach [280-283].
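The arithmetic behind that borrowing-cost shift is worth making explicit. The sketch below is illustrative only: the 2,000-rupee principal and the 10 %-per-month and 7 %-per-year rates come from the panel, while the monthly-compounding assumption is ours.

```python
# Compare the effective annual cost of informal credit at 10% per month
# (assumed compounded monthly) with a formal bank loan at 7% per year,
# using the 2,000-rupee example cited on the panel.
principal = 2_000  # rupees

monthly_rate = 0.10
# Effective annual rate under monthly compounding: (1 + r)^12 - 1
effective_annual_informal = (1 + monthly_rate) ** 12 - 1  # ~2.14, i.e. ~214% a year

annual_rate_formal = 0.07

interest_informal = principal * effective_annual_informal
interest_formal = principal * annual_rate_formal

print(f"Informal credit: ~{effective_annual_informal:.0%} effective annual rate, "
      f"~{interest_informal:,.0f} rupees interest on {principal:,} rupees")
print(f"Formal loan: {annual_rate_formal:.0%} annual rate, "
      f"{interest_formal:,.0f} rupees interest")
```

Even without compounding, 10 % a month is 120 % a year against 7 %, so the point about formal credit transforming a small borrower's position holds under any reasonable assumption.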


Nakul Jain, CEO of Wadwani AI Global, positioned his organisation as a “convener of technology for social good”, bridging startups, governments and NGOs to ensure AI solutions move from labs to the field [173-175]. He argued that building the technology is “the easiest part” and that success hinges on institutional mechanisms that embed solutions from day one within existing government programmes [146-152][158-166]. Jain cited two pilots: an oral-reading-fluency tool for Gujarat schools co-developed with the state government and technical partners, and a tuberculosis-diagnosis project that involved the Indian Council of Medical Research to define evaluation criteria from the outset [153-168][170-176]. He further noted that in low-resource settings traditional machine-learning models often outperform large language models (LLMs) because of bandwidth and device constraints [350-354].


Dr. Sachin Loda, Chief Scientist and Head of Research at Tata Consultancy Services, expanded the discussion to data infrastructure and compute equity. He highlighted the need for extensive sensing networks, such as low-cost air-pollution sensors, to generate high-quality, high-velocity data for AI analytics [192-196][197-199]. He noted India’s responsible-AI task force, AI Centres of Excellence (e.g., the collaboration with IIT Kanpur), and initiatives to “green” AI through resource-aware model design and a “quantum valley” partnership with IBM and the Andhra government to explore quantum-hardware solutions [200-207][210-218][270-274].


The panel then returned to concrete progress under the Hamburg Declaration. Kofler reported that Germany’s ministry exceeded its training target, delivering 190 000 AI-related trainings against a commitment of 160 000 [236-241]. The ministry also released 12 AI building blocks for climate action (now 15) [242-246] and 30 AI diagnostics (now 55) [247-250], alongside 55 open datasets [251-254], with pilots supporting satellite-data services for Kenyan farmers, cervical-cancer detection in Cambodia, and multilingual datasets for India [242-246][247-250][251-254].


Areas of consensus emerged across the discussion. All speakers affirmed that responsible AI governance is essential to prevent widening equity gaps [1-5][49-58][124-129][149-164][200-203]; that multistakeholder partnerships linking government, private firms, academia and civil society are the backbone of inclusive AI ecosystems [38-40][146-152][280-288][250-257]; and that large-scale capacity-building programmes are critical, as evidenced by Germany’s training numbers, Salesforce’s Trailblazer community and the teacher-training components of Wadwani’s pilots [236-241][96-104][165-166][75-80]. Participants also agreed on the importance of open-source data and tools to democratise AI [80-81][192-199][95-99].


The panel highlighted several nuanced disagreements. Kofler emphasised a government-led approach (legal frameworks, open-source investment and state-funded training) to close the power gap [55-58][61-63][75-80]. Bhattacharya advocated for corporate self-regulation, arguing that large firms must proactively adopt ethical standards to avoid heavy-handed external regulation [280-283]. Jain positioned his organisation as a neutral convener, proposing a global repository of AI solutions and a marketplace of shared playbooks, governance frameworks and talent pools to facilitate cross-border deployment [292-298]; he also called for regional AI-assurance hubs to standardise evaluation criteria across countries [298-301].


During the Q&A, an audience member asked how to bring together fragmented, competing initiatives. Kofler and Jain responded that coordinated frameworks, shared repositories and “hand-holding” mechanisms are needed to avoid duplication and to align disparate efforts around a common purpose [300-306]. This echoed the earlier call for a global marketplace and highlighted the urgency of consolidating initiatives.


Looking forward, the panel identified further opportunities. Jain reiterated the need for a global repository and marketplace, as well as regional evaluation hubs, to enable startups to scale tools from India to Ethiopia [292-298]. Bhattacharya highlighted the need for expanded skilling missions through industry bodies such as NASSCOM and AICTE [280-288]. Dr. Loda stressed the importance of building affordable sensing infrastructure, repurposing legacy hardware and developing quantum-valley capabilities to narrow the compute gap between the Global North and South [210-218][213-218]. All agreed that “technology is the easiest part” and that lasting impact depends on embedding AI solutions within existing institutional frameworks and providing ongoing support for end-users [146-152][158-166].


In closing, Opp reminded the audience that the Hamburg Declaration is a living instrument that invites further endorsement via a QR-code link [376-380], and he summarised the overarching message: AI can accelerate sustainable development, but only through coordinated, multistakeholder action, robust governance, and measurable commitments can the risk of deepening inequities be avoided [376-380]. The discussion outlines a pragmatic roadmap for responsible AI that balances optimism with concrete, accountable actions.


Session transcript: complete transcript of the session
Robert Opp

From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development and adoption of AI in that there’s no question and we are very convinced that AI can have such a powerful impact on sustainable development in a positive way. It can help us really close some of those gaps that we see in persistent development challenges. However, we know that the way that it is evolving now is not equitable. And if we do not have a kind of commitment to responsible use of AI, we fear that the AI equity gap could actually get worse or inequality could get worse. And more so, it also can be harmful in some cases if we’re not applying AI responsibly.

So we want to get into then the point about… How do we actually address some of those challenges? How do we get some of the… measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well. In terms of what are our commitments to making sure that we are deploying and using AI in a responsible way and building AI ecosystems. So this does tie in with the process that has been a feature of previous AI summits as well as the Hamburg Sustainability Conference that is hosted every year in the city of Hamburg and sponsored by the government of Germany with other partners like UNDP.

And as part of that process, we have introduced a declaration on AI for sustainable development, the Hamburg Declaration. And so we’ll talk about that a little bit in the context of this conversation after exploring some of the kind of opening thoughts around multistakeholder partnerships in general. So, I’m going to start off and introduce our panelists and then we’ll go through a couple of rounds of questions. Be thinking about your questions as well because I think that we’ll have time to go to you as well for a Q&A and so happy to get your thoughts and get some interaction with the panel as part of this session. So, with that, I have a very distinguished panel to introduce to you and it’s a real pleasure to have them here today.

I would like to introduce, sitting on my left, Dr. Bärbel Kofler, who is the Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development. She’s a long-standing champion of human rights and development, a member of the Bundestag since 2004, and she plays a key role in shaping Germany’s global development policy. Thank you. Yes, exactly. We are also honored to have Ms. Arundhati Bhattacharya, who’s the chairperson and CEO of Salesforce South Asia, a transformative leader with over four decades of experience. She made history as the first woman chairperson of the State Bank of India, where she led one of the country’s most significant digital transformation journeys. She oversees a huge part of Salesforce business in this region and beyond, shaping how technology partnerships drive innovation, skills and inclusive growth.

We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission-driven technology leader. He works to advance AI solutions for underserved communities across the global south. And his team has done 40 deployments of AI, reaching more than 150 million people in use cases such as healthcare, agriculture, and education. And we’re also joined by Dr. Sachin Loda, who’s the chief scientist and head of research at Tata Consultancy Services. He’s a leader in cybersecurity and privacy research. He heads Tata Consultancy Services’ work on trustworthy AI, quantum resilience, cloud de-risking, and privacy by design. And that is all about translating cutting-edge research into real-world, award-winning innovation.

Please join me in welcoming our panelists. Okay, so let us start. Dr. Kofler, I’d like to start with you, please. You know, you represent the government perspective here on this panel. What do you think are the difficulties with the distinct roles of government, but also some of the other players like… private sector, civil society and international organizations, the roles that they play in building AI ecosystems that are both innovative and inclusive.

Bärbel Kofler

I used this one in the meantime. Thank you for the question and thank you for having me also as a representative of the government. But I’m very happy to be on a panel with scientists, with academia and with private sector because at the end of the day, artificial intelligence and how we shape really the future with that is depending on multistakeholder engagement. There has to be an engagement from all parts of society and also including civil society because it’s a broad outreach in our societies we have to undertake. So that’s why I think it’s very important to discuss it. You were pointing out challenges and advantages of artificial intelligence. And I think there is the role.

I think there is the role of governments coming in. because, yes, there are tremendous advantages of artificial intelligence. In my sphere of politics, we discuss about how we can use that for detect diseases better through AI, cancer in various topics. We discuss about climate change and how we can make prediction applicable for everybody on the ground. We discuss about administration and the advantages in administration. AI could offer. But at the end of the day, all those hopes will come into reality if we shape that in a framework, in a legal framework, in a framework we coordinate with partners around the globe and within society, which makes all those advantages accessible for everybody, for all parts of society, for any citizen to address its government through AI conversation in its own language, for example, to smallholder farmers if they get the chance to make use of it, to doctors in remote areas who can really then detect with the new technology diseases or whatever.

So we have to close the gap. And I would say it’s not an innovation gap, it’s a power gap. Because innovative people are existing around the globe. Ideas are created around the globe in all spheres of society. So I’ve been here on a panel or at a session, this brilliant young Indian startup, yeah, how would I call it, people, I don’t know, I forgot the English word, sorry. This brilliant people who are creating startups. And that is possible all over the globe, but the environment has to be there. And if you look how venture capital, for example, is distributed, and we know that, I think, it’s 17 % of venture capital only is in those parts of the world who are presenting more than 90 % of the people.

So we feel a power gap at the end of the day. If you look about where you have data center resources, it’s even worse. Global North, it’s almost all the data centers available capacity and only 0.1 % is somewhere in the Global South. So there is a big gap and we have to overcome that gap. We have to close that gap. And I think that’s something governments have to do also and where they utilize to create an environment. If it’s coming to energy consumption, if it’s coming to an enabling environment for research, if it’s coming to skill training for everybody. I learned in one of the sessions that we should change our mindset and everybody should get a job.

And I think that’s something that we should do. But to do so, we have to invest in the mindset and in the skills of all users. We have to work on vocational training and university training to make research, education, and the needs of even small and medium-sized enterprises heard and linked together, to bring small and medium-sized enterprises into the position to also benefit from that new technology.

Not only the big players. So all those things need framework and need governance. And we have to make sure that the outcome, the research, the results, the data sets are available for everybody. That’s why we are also investing in open source. It’s something we very much are aligned with the Indian ideas because if we don’t do so, the first thing, the advantages are

Robert Opp

Okay, now it’s working. So framework and governance, important factors to have in the multi-stakeholder partnerships. Arundhati, maybe let me turn to you with a very similar question, but taking this from the private sector perspective. How do you think the responsibility should be distributed between the different stakeholders? And what does industry or private sector companies, tech companies in this case, uniquely bring to the table? And where is partnership essential?

Arundhati Bhattacharya

Thank you and good morning, every one of you. So I have been very fortunate in the fact that I have worked with two organizations. The one that I worked with initially was the State Bank of India, which is the largest bank in India and also the bank that really and truly spearheaded the financial inclusion program of the current government, which is the PM Jan Dhan Yojana. And while doing that, I realized that any improvement in technology, if it is not really democratized, then it doesn’t really have an impact. If you want it to have an impact, you need to democratize technology. That’s the first thing. The second thing is currently I work at Salesforce. And Salesforce, again, it was set up with the intention that business is a platform for change.

We have what we call a one-by-one-by-one mission, which is we contribute 1 % of our profits or equity, 1 % of our products and 1 % of our time to the community, to the non-profit community. Of course, in India, it is 2 % of the profits because that’s what the law demands. And while doing that, we ensure that the non-profit sector not only has access to our products but also knows how to use them and use them to the best of their abilities. Along with that, we also do a lot of work in skilling various people. So we call people who basically are trained in the Salesforce technology as trailblazers. And India has 3.9 million of them, the second largest after the US.

And this is a community that has been literally nurtured by us. So it’s not only a question of ensuring that we are getting our products out to the community, to the non-profit sector, to the various enterprises where we obviously market our products to them and then help them implement it in their organizations. But we are doing a lot of work on the skilling front as well because we feel that is something that’s very, very essential. Without that, India will not really be able to take the advantage of all that technology brings to it. But one thing is very clear in our heads. And that is that a populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play.

We tried this financial inclusion initiative for 15 long years before we actually were successful in 2014. And the reason for that was technology. Because by that time, the Unique Identification Authority, which gave us our biometric, you know, marker to enable us to get the unique identification number, that helped us do the KYC. And second, the spread of the mobile networks enabled us to actually approach the 600,000 villages which is India; before this we never had that connectivity. Now, it was because of this that these people then got accounts. 97 % of these accounts were without any balances, and a lot of people asked us, was this merely a ploy, was this merely, you know, some kind of action without any meaning? But very soon we found that those accounts started getting money, because the government could then target the subsidies they were giving to below-poverty-line people directly to the citizens, without having to go through any middlemen. In fact, some politician at some point of time had said that only 13 paise for every one rupee of subsidy actually goes to the person who needs it. Here, when the money started flowing directly into the account, the whole of the rupee was going to the person who needed the subsidy.

Now, over and above that, we then had the UPI, which is the Universal Payment Interface. And what did that do? That enabled even small customers and small shopkeepers to start actually taking money in a digital form. And because they were taking money in the digital form, these accounts of theirs, which had been opened, started showing a cash flow. And the moment an account has cash flow, bankers then become interested in lending you money because they know that there is a cash flow to sort of back it up, to enable you to repay it. So here is a person who was taking a loan of, say, 2,000 rupees, paying 10 % a month, suddenly becomes eligible for that 2,000 rupees at 7 % a year.

Imagine the difference it can make in the life of that person. And this is all on account of technology. AI is not going to be anything different. AI is going to solve a lot of problems which otherwise in a populous nation like ours cannot be solved. And in order to solve it, we therefore need to understand people are eager to take it. They will take the technology because it improves their lives. But in order for them to take it, it’s up to the policy makers to make those interventions to ensure that they are not being taken for a ride. That whatever is being offered to them is being offered in an ethical manner. That we are doing the right, we are creating the right kind of infrastructure to enable them to actually be able to access it.

That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are there, who are conducting all of this, they have a far bigger responsibility. Because adoption is not something which is a problem. Adoption is not a problem in India. Adoption will happen as soon as people realize that it’s helping them in their regular lives. Adoption is not going to be a problem; there’s not going to be any pessimism on adoption. And if you go to the floor where the expo is, you’ll actually see people, how interested they are in finding out how it’s going to improve their lives. But it is people who make the policies, people who make the infrastructure available, people who actually make the initiatives available, who need to take the responsibilities. And companies like ours, we need to take the responsibility as to how we skill the people, enable them to understand what is good for them and what is not good for them, make them understand that there is both good and bad and they need to choose the better. They should not be taken for a ride. So all of us have a role to play, and I think we need to be aware of those roles and we need to play them well. Thank you. Thank you so much.

Robert Opp

So actually, those two answers complement each other so nicely because, as you were saying, Dr. Kofler, there’s the governance and the framework, and this is key to the Indian experience as well. Government sets down the framework, the rails, but then private sector has such an important role in scaling, in innovating, in helping people be successful and interact with that. So I’m going to turn now to Nakul. And a question now because, you know, your Wadwani AI Global has done a lot of work in the space of multi-stakeholder partnerships and the rollout of AI for specific use cases. From your experience, where have multi-stakeholder partnerships already demonstrated tangible impact in terms of advancing responsible AI for development?

And what are the conditions that helped ensure these collaborations translated into sustained impact rather than… just remaining stalled?

Nakul Jain

Thank you for that question, Rob. Also, thank you for having me here. I hope this is working. All right, okay. Yeah, no, thank you, everyone, for being here. So, you know, my answer will be from my experience of deploying some of these solutions in the field, what has worked for us and what has not. Now, what we have realized is that building technology, at least for an application organization like ours, is the easiest part of the entire spectrum. It's everything around it that becomes, you know, tedious for us to get done, which essentially means that there is a need for a multi-stakeholder ecosystem that can bring in its expertise and take care of everything that sits around the technology.

The ecosystem that works is the one that ensures from day one that we are able to do everything that we need to do. What will be the institutional mechanism of getting something done? How will the solution, the use case, be institutionalized in the ecosystem? How will it be embedded within the framework we are speaking about, within the department or ministry we are talking about, within the problem area and among the people who will be using it? That cannot be an afterthought; that has to come in from day one. And I'll give you a few examples of some of the solutions that we were able to deploy at scale and what helped them work.

So we built a solution in education around oral reading fluency, essentially to assess students on their reading abilities. The technology part, like I said, was not very difficult to do. But this is not something that Wadwani AI Global could have done alone. This required collaboration with government, ownership from government, and, as Arundhati also mentioned, it required policy makers to own this problem. We as an organization came in as a facilitator, but the government of Gujarat was the one who led this entire initiative, who made sure that data was available, who helped us find the right partners who could then be leveraged to annotate some of this data. The other thing: rather than trying to create this solution in silos, we also started thinking about where it will get embedded, right?

Tomorrow, if this has to be used, and there are tons and tons of applications that are already being pitched to government for education, pitched to teachers, how do you ensure that they actually get adopted? One way to do that was to understand how government is currently running its programs. So we started working with the government and their technical partner on this together. Rather than thinking about this as a solution we have created and handing over an app, we started figuring out how whatever we develop can plug into the existing system in classrooms, wherever they were, and then how to ensure there is a monitoring mechanism established so that the government is fully capable of not just following up on adoption, but also figuring out if there is a genuine impact from the technology.

All of this was only possible because we had this collaboration between the government, the technical partner the government was working with, us as the AI partner, and the partners who were working with the government at school level to help them programmatically, to help teachers build capacity, to help teachers understand it, and also to counsel teachers, because there is a constant fear about AI trying to replace jobs. So there is also that handholding required at field level, and as a tech organization we could not have done that. So you need that partner as well. Moving to a slightly different example, around healthcare: we have done a lot of work in tuberculosis, and some of the things remain common, your government partnership, data collection through them, and your programmatic partnership. But the additional support we got was from ICMR: how can we start thinking about evaluation from day one? Health being a much more sensitive area, directly impacting lives, when some of our models give those decisions, how do you ensure that evaluation and the success criteria are not an afterthought? So you don't just work with the government; you also work with the agency who will eventually evaluate it, so that from the first day itself you are following what parameters and what results you should be optimizing for. So these are some of the examples, Robert, I can think of where technology, government, and ecosystem partners played a role, came together, and delivered something.

Robert Opp

Thank you, Nakul. And just a quick follow-up on that. So Wadwani AI Global, how would you describe the space that you occupy in that? You're not government, you're not exactly private sector; you're kind of an integrator, right?

Nakul Jain

Yeah, no, absolutely. I think what we would like to call ourselves is a convener of technology, especially for social good. So our role essentially is to ensure there is an impact leveraging artificial intelligence, and who all need to participate is something we work out along with partners. We work with governments, whether it requires capacity building, whether it requires advisory to the government, whether it requires actual product development; we help government with everything. Because what we have also realized is that while government is well-intentioned, there is a need for a lot of hand-holding, because they have other priorities, and artificial intelligence, while it has now become a buzzword, still needs a lot of hand-holding to make sure that some of these solutions don't just end up living in labs and in rooms like this, but actually go into the field.

So our role becomes even more important to make sure that everyone who needs to participate comes along, participates, and ensures that there is an actual impact on field. Thank you.

Robert Opp

And the reason I asked that is because, also to go over to Sachin, there are, you know, the roles of private sector innovation, and there's the government role. International organizations like UNDP also bring a kind of multilateral and global perspective on some of these things. But companies like Tata Consultancy also serve kind of in the middle there as well. And I'd love your perspective on that same question, which is: where are you seeing multi-stakeholder partnerships resulting in tangible results?


Speaker 1

To contextualize: it's not just true for health, it's true for any domain you look at. And yes, data is fragmented and siloed; that's true even within an enterprise, and of course true at a bigger scale. I personally believe that bringing this data together is important to create an open data ecosystem, but I believe the problem lies somewhere else: the problem lies in the sensing infrastructure that you need to put together to be able to have high-quality, high-volume, high-velocity data. Take the example of air pollution monitoring. Just think of that: you probably could do a much better job if you had a sufficient number of sensors deployed across the region of interest, and probably they are going to be large in number.

You will have to think of, you know, how you make them cheaper, faster, better. And having done that, it will also depend on how you can analyze, process, and derive insights; that's where AI will come in, right? So I think government can play a very big role in catalyzing this, and enabling a great sensing infrastructure could be thought of as part of the digital public infrastructure. On top of that, private and public entities could innovate; academicians and the inventor community could innovate. So what are we doing here? The Indian government actually has been very proactive on the overall responsible AI mission.

I was part of the task force that was set up by the principal scientific advisor to the Indian government, Professor K. VijayRaghavan, on responsible AI. This happened about eight years back, and we tried to study the implications of AI and the responsible facet of AI in the Indian context then. Following that, a lot of mission projects were launched. As part of that, I have seen there are now different AI Centres of Excellence (AI CoEs) supported by government. We are part of the AI CoE for sustainability in collaboration with IIT Kanpur, and it's going live. In fact, some of you might have attended a session by the IIT Kanpur team yesterday on this very same topic, where we are looking at a lot of interesting problems in the overall sustainability domain.

Robert Opp

So that's just data. If you could just mention your other two points super quickly.

Speaker 1

Yeah, yeah. So I'll just mention compute, which is, again, a very important facet. There is a big gap between what is available in the Global North versus the Global South. I have just two ideas to share. One, we should think of how we could repurpose legacy hardware, and you would see that a lot of high-tech companies have innovated along those lines. It's not necessary to have the current latest hardware; you could do a lot of innovation around it. Two, of course, you should explore the new hardware that is becoming available. So TCS, IBM, and the Andhra government have come together to create a quantum valley in India, which could open ways for that.

So I think these are some quick remarks I thought I’d make.

Robert Opp

Great. No, I mean, the depth is incredible, right? You're talking about very concrete collaborations that are making this, you know, a possibility. Okay, I think we'll go straight into a quicker second round, and then we're going to go for some comments from you. And I want to bridge this now. We've been talking about multi-stakeholder partnerships and how different parts of society and different stakeholders have different roles to play. But what's clear is that we also need collective action. So I wanted to think about how we move toward getting overall collective action in these spaces for responsible use of AI, something that gives that sort of global framing around it.

And as I mentioned at the beginning, one of the things that we have been doing, as part of the Hamburg Sustainability Conference and the Hamburg Declaration on responsible AI for sustainable development, is really looking at this space of collective action. So a couple of years ago, we did the first Hamburg Declaration. We have about 50 organizations that have endorsed the declaration. It has a number of principles related to responsible use of AI. And Dr. Kofler, I'm going to turn back to you to ask you a question especially about that process, which is: since the adoption of the Hamburg AI Declaration, what tangible progress have we seen, and what are the concrete actions that you think are now required to move from the higher-level principles into sustained and scaled impact?

Bärbel Kofler

Well, thank you for mentioning concrete action, because that's actually what it is all about. We came up with that idea at the Hamburg Sustainability Conference, and with that paper, not because we wanted to create another paper or to invite people to another conference. We really wanted to come up with commitments every stakeholder has to undertake when they sign that memorandum. And so my ministry also signed it, and we had concrete, tangible, measurable commitments we have to fulfill. I would love to give you a few examples. For the sake of time, I'm trying to be very short and very brief.

For example, one of the commitments was training people. We committed to train 160,000 people within one year. We have fulfilled that already, and that makes me very proud, because we have already trained 190,000. That was the first step we committed to. We committed to open up 12 AI building blocks, especially for using AI for climate action; we are now at a number of 15. And we also committed to 30 AI diagnostics; we have now achieved 55. So that's what it's all about. We are not signing something we don't want to do and then spending another year applauding ourselves for having signed it. We really want to come up with concrete steps, and that's what my ministry has done until now.

There are examples all over the world where we are working with partners. We are working, for example, in Kenya on analyzing satellite data and making it available for farmers. We are working with Cambodia, where it's about medicine to detect cervical cancer, for example. We are working with the Indian government; your government is doing a fantastic project including languages of the subcontinent, analyzing, we were just talking about it, the data sets for the many languages which have to be included in the overall AI framework. So we are supporting Bhashini with nine data sets; we supported the collection. So that's what it's all about at the end of the day, and that will be my last sentence.

I invite everybody in the room, and maybe you spread the thought, to join the Hamburg Declaration, because we started that, and it's continuing. Government should be part of it, the private sector should be part, civil society should be part, academia should be part, and come up with concrete, measurable, tangible commitments which can then be fulfilled.

Robert Opp

I'm glad you mentioned that, because the QR code that you see on the screen here gives you more information. If you would like to endorse the Hamburg Declaration, just go to the website there. We're getting short on time for audience interaction here, but Sachin, really quickly: TCS endorsed the Hamburg Declaration last year. What progress have you seen since then?

Speaker 1

Yeah, so I'll just quickly point to some activities we have been doing since then. So as I said, we are part of responsible AI deliberations in India and elsewhere. And we have launched a big program around it, where Carnegie Mellon is now our academic partner, and we are also talking to some Indian academics. As you rightly said, these are very hard problems, and this requires significant collaboration across sectors. We are also building our own technology to be able to evaluate, calibrate, and help AI engineers build more responsible AI; it's called the Trusty Platform. More details we can share offline. We are also very keen on the greening of AI, and that requires a lot of resource-aware AI work. So this is again very significant, and that's something we are on.

Robert Opp

Fantastic. And you just mentioned greening AI; that's one of the key principles of the declaration. Arundhati and Nakul, I'll turn back to you now for some quick thoughts before we go for some Q&A. As we look ahead, we've talked about the Hamburg Declaration as one sort of platform, but where do you see the strongest opportunities for new types of partnerships going forward? Things that maybe don't exist yet but that should? What do you think is the next wave of this?

Arundhati Bhattacharya

Well, you know, as an organization, Salesforce has had an office of ethical and humane use of technology since 2014. So this is something that I think most large corporates have to have: a self-regulatory feature. This is important because unless you self-regulate, you will have regulators coming down with a very heavy hand at some point of time or the other, and that is not something you want if you want to keep innovating. So that's something I think all of us should look at. Having said that, the other way in which we can take part with the stakeholders is to continue to co-innovate with many of our customers, those who are interested, and to take forward our participation in many of the forums that are there.

For instance, the National Skilling Mission, the skilling mission undertaken by NASSCOM, which is the IT industry body. We also do internships in partnership with AICTE, the All India Council for Technical Education. So there are very many apex bodies that are willing to give us space for innovation, for the private sector to come in and be part of the initiative. And the advantage of going through them is that they have far better reach to the colleges and to the communities, which enables us to bring our products directly to them. So I think if we look at all of these taken together, that really makes the story much better and much stronger.

Robert Opp

Thank you very much. Nakul, same question to you.

Nakul Jain

So in the interest of time I'll try to be quick. Two things that we feel are missing in the ecosystem. One is a global repository of solutions, which we are trying to work on, and this is not just a DPG platform or a DPI platform; it can even include the private sector. If I am a startup in India and I have built a good tool, how do I sell it and deploy it in Ethiopia? It becomes very difficult right now. So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance and ecosystem frameworks, and a talent pool that could be leveraged, people who worked on these solutions?

So I think that's one thing that is currently missing. The other is around AI assurance. Is there an opportunity for us to create regional evaluation hubs for a given region? Because I see a lot of organizations struggle within a country to get solutions evaluated. Now imagine a situation where we are trying to take a solution from one country to another, but there are no clear guidelines around how the evaluation yardsticks would differ from one country to another. So that could be another area of collaboration between various organizations who can set this up.

Robert Opp

Great ideas from you both on very concrete things. Okay, so we have time for a couple of questions from the audience. All right, we've got a lot of hands. We're going to take a couple of questions at the same time, and then we'll go back for one more round from the panelists for answers. So I think we'll start here, and then I saw another hand here. We'll try to do three quick ones. Okay, so one, two, three.

Audience Member 1

My question is to Mr. Robert and Nakul. What I want to ask is: in comparison to LLMs, your ChatGPTs and Geminis, what is the value being added versus traditional ML, I mean your classical improvements over functions, et cetera?

Robert Opp

Okay. Then there was a question over here. There are two questions there. Yes, gentleman in the blue, and then beside him.

Audience Member 2

My question is to Mr. Wadwani. You talked about bringing together people. There are fragmented initiatives, there are fragmented enabling services, but they are competing. So how do we bring them together for a shared purpose, and to create a bigger impact?

Audience Member 3

Okay. Hi, sir. Hi, ma'am. I'm a current undergraduate student of economics at the University of Delhi. My question to you, ma'am Bhattacharya: first of all, I would like to understand from your perspective, having served in both the private and the public sector, how do you see all this debate that SaaS, software as a service, will be dead as a result of all this, but that the context window, the context you have, will actually be the most beneficial? One might say that the valuation will be doubled by all these companies who are actually in this space. So in that context, if I may ask: how do you create value for small enterprises?

For TCS, for example, the valuation is around $100 billion. Cursor, which is a very new company, has a valuation of $30 billion. So an employee count of 300 is creating so much value, while a 600,000 employee count is creating that value. So in that context, will the democratization of software and tech actually happen, or is it just big tech that benefits out of it? There will obviously be benefits for people here and there, but in the larger context?

Robert Opp

Okay, so some very technical questions here. Arundhati, do you want to start with that one first about how to create value for small companies?

Arundhati Bhattacharya

Yeah, so you know, over here, I don't think we are talking merely about the value something creates on a particular day. And by the way, those valuations keep sliding up and down. Nobody really knows who's the winner in the race till the race is over, and the race is far from over. So to that extent, I would ask you not to be too pressured or too influenced by things like "SaaS is dead" and "the entire valuation will come to only these few companies" and things like that. At the end of the day, if there are no users, what valuation will this company have? You need users. Now who creates users?

Users are created by many, many processes and methods, and just having an LLM is not going to give you all of that. When a company actually works, there are workflows, there are governance rules, there are auditability requirements, there is observability, there is everything in there. Therefore, to base that kind of statement on one particular development by some particular company is, I think, at this point of time, a little too ahead of its time. Things will shake themselves out. Having said that, it's also true that whether it be TCS, whether it be Salesforce, whether it be State Bank of India or any other company, the ways in which we do our work are going to change.

The ways in which people take in technology, and how that influences their lives, will change. Companies that are amenable to the change, companies that take advantage of the change, are the ones that are going to sustain, the ones that are going to grow in value. But my sincere request to you, because you're a youngster, is: don't try to look at market cap while trying to create your company. Do the right things; the market cap will follow. Okay? It's not market cap that makes you want to work; it's your work and your satisfaction, being able to create something that really and truly helps people improve their standard of living, improve the way they actually stay in this world. That is what should drive you, not the market cap numbers.

Robert Opp

And you wanted to add quickly to that?

Speaker 1

Yeah, I just want to add to that: think of the LLM as something that is world-wise, okay? And that's a great advancement of AI. But you need something that is industry-wise, something that is company-wise, something that is context-wise, task-wise, right? We are far from that, and there are lots of challenges. I mean, just before this, Arundhati and I were discussing the kind of data silos that exist even within an enterprise, right? So we are looking at AI, and agentic AI in particular, as something that is going to bring more connectedness within the enterprise. There is a significant amount of work to be done. But imagine this combined intelligence, right, where you have different agents coordinating and working together; it will probably help us get more.

So the whole being more than the sum of its parts is a very likely scenario. There are challenges to get it done, okay? But while the nature of work may change, there is plenty of work coming at us. The LLM has just solved part of the problem.

Robert Opp

Nakul, there were a couple of questions for you: how do we bring people together for a shared purpose, and a specific question, I think it was LLM versus ML, is that right? Do you want to take these? Sorry, we've got just a few minutes before we have to close the session.

Nakul Jain

So, LLM versus traditional ML. In our experience, in a lot of places, specifically because we work in the impact sector, a lot of the work we do is for ground-level users, where resources are a big challenge, right? You're talking about situations where the mobile phones are very basic, internet is a challenge, and adoption is an issue. In those scenarios, we have often found that traditional ML models have worked much better to serve the purpose, and large language models have not been able to, and in a lot of cases are not even deployable. So there are genuine technical issues, because of infrastructure and because of low-resource settings, which have not allowed LLMs to reach where we would want them to reach, right?

Second, in such cases, small language models, or small AI, is what we've been moving towards, to serve a very specific purpose rather than trying to give general intelligence to some of these users in the field. So that's a very quick answer; I could have gone on. To answer your question, sir: I don't think we are trying to eliminate competition. The way we should be thinking about this is that there are different ways organizations can collaborate. Some of them could collaborate based on geographical synergies, some based on expertise synergies, and some based on what part of the life cycle of the entire AI deployment they can collaborate on.

And absolutely, there will be consortia that get created. They will still continue to compete with each other, and that's a good thing to ensure that there's also quality in the… But the idea is to foster collaboration where those synergies are possible; at least we start with that.

Robert Opp

Thank you. Sorry, sir, we're going to have to close the panel because we have literally 10 seconds left on the counter. I just want to say thank you very much to our panelists. We have gone from the highest-level government frameworks to questions about the competitive landscape and how small companies can exist and prosper, but the through line to all of this is that it doesn't happen in silos. It cannot happen with only one stakeholder of society. This has to be multistakeholder, and we have to commit to collective action. So I again refer you to the QR code if you're interested in the Hamburg Declaration; it has more information on how to endorse it. Please join me in thanking our panelists for their excellent insights.

And thanks to all of you, and a good rest of the summit. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (19)
Factual NotesClaims verified against the Diplo knowledge base (6)
Confirmedhigh

“Robert Opp acted as the moderator of the panel and is identified as a representative from the UN Development Programme.”

The knowledge base lists Robert Opp as the moderator and UNDP representative for the discussion [S15] and records his opening remarks on multistakeholder involvement [S96].

Confirmedhigh

“Bärbel Kofler, Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, emphasized multistakeholder engagement for AI.”

Her role and focus on responsible AI are confirmed by entries describing her as Parliamentary State Secretary and a stakeholder in AI governance [S13] and [S14].

Additional Contextmedium

“Only about 17 % of venture‑capital flows go to the Global South, highlighting a financing gap for AI development.”

The knowledge base notes a financing gap with less than 1 % of venture-capital funding directed to Africa, providing related context but a different figure [S102].

Additional Contextmedium

“The UN Development Programme warned that AI’s current trajectory is “not equitable” and could widen an “AI equity gap”.”

General statements in the knowledge base acknowledge that AI development carries significant risks, supporting the concern about inequitable outcomes [S90].

Confirmedhigh

“Arundhati Bhattacharya, Chairperson and CEO of Salesforce South Asia, advocated for the democratization of technology and inclusive AI.”

Her involvement in building inclusive AI societies is recorded in the knowledge base, confirming her leadership role and focus on democratization [S105].

Additional Contextmedium

“Calls for large‑scale vocational and university training programmes to up‑skill SMEs for AI development.”

The knowledge base discusses skilling and education in AI, highlighting opportunities and challenges in India, which adds nuance to the reported training initiatives [S95].

External Sources (105)
S1
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S2
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S3
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S4
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S5
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S7
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S8
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
High-Level Session 4: From Summit of the Future to WSIS+ 20 — – Robert Opp: Representative from UNDP Robert Opp’s comment broadened the discussion on environmental sustainability: “…
S11
WS #278 Digital Solidarity & Rights-Based Capacity Building — Robert Opp: Okay. Thank you. Well, it’s a pleasure to be here. As Jennifer said, I’m Robert Opp. I come from the Unite…
S12
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — – CLAIRE: No role/title mentioned ROBERT OPP: Okay. Hello, everyone. This is a strange way of doing a workshop with e…
S13
Responsible AI for Shared Prosperity — -Barbel Kofler- Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development of German…
S14
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Dr. Bärbel Kofler- Title: Parliamentary State Secretary to the Federal Ministry of Economic Cooperation and Development…
S15
Multistakeholder Partnerships for Thriving AI Ecosystems — -Bärbel Kofler- Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, me…
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S17
S18
https://dig.watch/event/india-ai-impact-summit-2026/multistakeholder-partnerships-for-thriving-ai-ecosystems — We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission-driven te…
S19
Multistakeholder Partnerships for Thriving AI Ecosystems — – Arundhati Bhattacharya – Nakul Jain
S20
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S21
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S23
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony, Armando Guio-Español, Ojoma Ochai, Robert Opp, Anshul Sonak Yu Ping Chan: Good afternoon, everyone, and …
S24
War crimes and gross human rights violations: e-evidence | IGF 2023 WS #535 — Additionally, the need for a clear and applicable legal framework that incorporates open source intelligence and data fr…
S25
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — By implementing these recommendations, governments can utilize open data to improve decision-making, foster innovation, …
S26
Thinking Big on Digital Inclusion — In conclusion, while mobile penetration is high in Sub-Saharan Africa, access to advanced technology remains limited. Pr…
S27
Responsible AI in India Leadership Ethics & Global Impact part1_2 — And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at tra…
S28
Advancing Scientific AI with Safety Ethics and Responsibility — And also create more awareness about the main fundamental thing is that they will be expected to document whatever testi…
S29
Keynote-Ankur Vora — Policymakers must build inclusive governance, safeguards, and infrastructure to ensure everyone benefits
S30
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Awesome. Thanks so much, Maru. And as one of the PAI folks, thanks for being here, everyone. It’s great to see you all. …
S31
WS #123 Responsible AI in Security Governance Risks and Innovation — Michael Karimian: Thank you Yasmin, it’s a pleasure to join you all and thank you Yasmin not just for facilitating today…
S32
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S33
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S35
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S36
WSIS Action Line C7: E-Agriculture — Henry van Burgsteden: Thank you very much. Yes, I’m here. Dear colleagues, partners and friends, on behalf of Vincent Ma…
S37
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The analysis emphasises the significance of multi-stakeholder engagement in policy processes, specifically in the contex…
S38
Open Forum #30 High Level Review of AI Governance Including the Discussion — Several concrete commitments emerged from the discussion:
S39
Creating Eco-friendly Policy System for Emerging Technology — In an era distinguished by rapid technological growth, Mwikali underscores the necessity for intergenerational dialogue …
S40
Open Forum #66 the Ecosystem for Digital Cooperation in Development — All speakers emphasised the importance of collaboration between government, private sector, civil society, and academia….
S41
Donor roundtable: Enabling impact at scale in supporting inclusive and sustainable digital economies — Additionally, the analysis emphasizes the importance of public-private partnerships between donors and the private secto…
S42
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion acknowledged environmental and social challenges, including impacts from increased electricity generation…
S43
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S44
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development. The moderator emphasized the paradoxical nature of AI technology…
S45
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — For closing the context gap, Patel proposed three interconnected solutions. First, organisations must connect proprietar…
S46
Powering AI Global Leaders Session AI Impact Summit India — The rapid acceleration of AI technology expands the gap between those who can effectively use it and those who cannot. C…
S47
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Infrastructure as Foundation ## Key Challenges and Opportunities ## Regulatory App…
S48
Multistakeholder Partnerships for Thriving AI Ecosystems — So we feel a power gap at the end of the day. If you look about where you have data center resources, it’s even worse. G…
S49
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S50
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Initiatives by UNIDO and potential changes to financial structures reflect the international community’s acknowledgment …
S51
Keynotes — Both speakers, despite representing different institutional perspectives (government vs. human rights oversight), agree …
S52
Setting the Rules_ Global AI Standards for Growth and Governance — Very high level of consensus with no significant disagreements identified. This strong alignment across industry, govern…
S53
WS #219 Generative AI Llms in Content Moderation Rights Risks — Given the technical limitations of LLMs with low-resource languages, these contexts require a greater reliance on human …
S54
How Small AI Solutions Are Creating Big Social Change — But what very few people actually know is that the actual performance of what we do at the moment is not 99.999 % So mo…
S55
Open Forum #3 Cyberdefense and AI in Developing Economies — Philipp Grabensee: you know, follows up on the session you had in Riyadh and I think we all agreed that the bottleneck i…
S56
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ammari highlighted META’s open-source approach to large language models, explaining, “META has adopted an open source me…
S57
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Another concern addressed was the inherent biases and limitations of large language models trained on skewed web data. T…
S58
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S59
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S60
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S61
Laying the foundations for AI governance — – The potential for science-based approaches to provide common ground. Artemis Seaford: So the greatest obstacle, in my …
S62
AI Meets Agriculture Building Food Security and Climate Resilien — The collaborative approach involving multiple stakeholders allows solutions to be deployed with confidence across differ…
S64
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S65
The Foundation of AI Democratizing Compute Data Infrastructure — Availability of top-performing open models is a necessary but insufficient condition. Open models, talent development, and…
S66
Open Forum #17 AI Regulation Insights From Parliaments — AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This …
S67
10 Digital Governance and Diplomacy Trends for 2022 — Open standards, data, and software have shaped the growth of the internet for decades from open internet standards (TCP/…
S68
Host Country Open Stage — Nordhaug argues that digital public goods provide governments and organizations with greater control and sovereignty com…
S69
Waves of infrastructure Open Systems Open Source Open Cloud — While both support India’s technology development, there’s an unexpected disagreement on approach – Renu emphasizes open…
S70
WS #152 a Competition Rights Approach to Digital Markets — Government offices should use open source by default and transmit this to public procurement requirements. Governments c…
S71
WS #123 Responsible AI in Security Governance Risks and Innovation — Michael Karimian: Thank you Yasmin, it’s a pleasure to join you all and thank you Yasmin not just for facilitating today…
S72
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S73
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S74
Open Forum #30 High Level Review of AI Governance Including the Discussion — AI governance can learn from Internet governance frameworks and mechanisms, but requires more extensive global partnersh…
S75
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S76
WSIS Action Line C7: E-Agriculture — Henry van Burgsteden: Thank you very much. Yes, I’m here. Dear colleagues, partners and friends, on behalf of Vincent Ma…
S77
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment challenges the tech-centric view that dominates many AI discussions by revealing that technology developmen…
S78
Open Forum #33 Building an International AI Cooperation Ecosystem — – **Multi-stakeholder Approach and Inclusive Development**: Drawing parallels to internet governance, speakers stressed …
S80
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — Implementation and Accountability: Opp emphasizes the need for concrete, implementable commitments in the Hamburg Declar…
S81
GermanAsian AI Partnerships Driving Talent Innovation the Future — Dr. Kofler referenced the Hamburg Sustainability Conference as an example of concrete commitments being made, emphasisin…
S82
Prosperity Through Data Infrastructure — The private sector’s role in producing innovations and democratizing technology is emphasized. The private industry help…
S83
UNSC meeting: Artificial intelligence, peace and security — Jack Clark: Thank you very much. I come here today to offer a brief overview of why AI has become a subject of concern fo…
S84
[Webinar summary] What is the role of the private sector towards a peaceful cyberspace? — Regarding the private sector as monolithic is a mistake from a policy-making and governance perspective, Dion pointed ou…
S85
Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135 — Private sectors have the technological solutions
S86
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development. The moderator emphasized the paradoxical nature of AI technology…
S87
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful con…
S88
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing soc…
S89
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Costa Rica has chosen to lead by example. Together with the OECD, we’re leading the development of the OECD AI Policy To…
S90
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S91
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Yes. So obviously there have been around four important resolutions this year regarding AI. One was pr…
S92
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Security Council, the discussions centered around the concept of accidental risks associa…
S93
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S94
AI for Social Good Using Technology to Create Real-World Impact — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for AI…
S95
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S96
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Robert Opp: Okay. Hello, okay. No, thanks for that. I think all of these interventions so far have drawn attention to som…
S97
WS #103 Aligning strategies, protecting critical infrastructure — – The need to move from high-level discussions to concrete, actionable measures
S98
Protection of Subsea Communication Cables — The discussion produced several concrete commitments and initiatives. The ITU Advisory Body for Submarine Cable Resilien…
S99
The Declaration for the Future of the Internet: Principles to Action — A fervent supporter of converting principles into tangible actions, Donohoe stresses the critical necessity of transform…
S100
Closing Session  — And I think the fact that the advisory body consists of governments and industry experts show that these are true. Truly…
S101
Agents of inclusion: Community networks & media meet-up | IGF 2023 — She implies there is a gap in the use of technology in South America.
S102
Open Forum #40 Building a Child Rights Respecting Inclusive Digital Future — Speakers identified significant financing gaps, noting that less than 1% of venture capital funding goes to Africa, and …
S103
HIGH LEVEL LEADERS SESSION I — The argument made was for the establishment of a global framework or mechanism to address this gap. The sentiment regard…
S104
Leaders TalkX: Building inclusive and knowledge-driven digital societies — Many communities lack reliable internet connectivity and stable electricity, which are foundational elements for develop…
S105
Building Inclusive Societies with AI — Arundhati Bhattacharya shows a video which will give you context of what the informal sector is, what are some of the in…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Robert Opp
1 argument · 138 words per minute · 1956 words · 845 seconds
Argument 1
AI equity gap and need for responsible use (Robert Opp)
EXPLANATION
Robert warns that while AI can drive sustainable development, its current trajectory is inequitable and could widen existing gaps. He stresses the necessity of a commitment to responsible AI to prevent worsening inequality and potential harms.
EVIDENCE
He notes that AI has powerful potential for development but its evolution is not equitable, warning that without responsible use the AI equity gap could worsen and cause inequality and harm [1-5]. He later emphasizes the importance of frameworks and governance in multi-stakeholder partnerships [83-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robert Opp’s warning about an AI equity gap and the need for responsible AI is reflected in UNDP remarks and multistakeholder partnership discussions that stress power gaps, governance frameworks and responsible use of AI [S10][S12][S15].
MAJOR DISCUSSION POINT
AI equity gap
AGREED WITH
Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain, Speaker 1
Bärbel Kofler
4 arguments · 160 words per minute · 1225 words · 458 seconds
Argument 1
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
EXPLANATION
Kofler highlights a global power gap where innovation exists worldwide but resources such as venture capital and data centers are concentrated in the Global North. She calls for legal frameworks, open‑source resources, and government action to close this gap.
EVIDENCE
She describes the power gap, noting that only 17 % of venture capital goes to regions where 90 % of people live, and that 0.1 % of data-center capacity is in the Global South [55-58]. She stresses the need for legal frameworks and open-source investment to make AI benefits accessible to all [47-53][80-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session notes quote Kofler’s statement “it’s not an innovation gap, it’s a power gap” and call for legal frameworks, open-source investment and governance to close the gap [S15][S13][S14].
MAJOR DISCUSSION POINT
Power gap and open‑source
AGREED WITH
Robert Opp, Arundhati Bhattacharya, Nakul Jain, Speaker 1
DISAGREED WITH
Arundhati Bhattacharya, Nakul Jain
Argument 2
Government must create enabling environment, training, and data access (Bärbel Kofler)
EXPLANATION
Kofler argues that governments need to provide an enabling environment that includes energy, research infrastructure, and skill development, as well as open data, to ensure AI benefits are widely shared.
EVIDENCE
She mentions the need to address energy consumption, provide research environments, and train all users, emphasizing vocational and university training and making datasets publicly available [61-63][75-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources stress that governments need to provide energy, research infrastructure, skill development and open data to enable AI benefits, aligning with Kofler’s points on an enabling environment [S26][S25][S15].
MAJOR DISCUSSION POINT
Enabling environment and capacity building
Argument 3
Hamburg Declaration commitments: training, AI building blocks, datasets, country projects (Bärbel Kofler)
EXPLANATION
Kofler outlines concrete actions taken under the Hamburg Declaration, including exceeding training targets, delivering AI building blocks, diagnostics, datasets, and partnering on projects in Kenya, Cambodia, and India.
EVIDENCE
She reports that the commitment to train 160 000 people was exceeded with 190 000 trained, that the 12 AI building blocks were expanded to 15 and the 30 AI diagnostics to 55, and that datasets were delivered alongside projects in Kenya (satellite data for farmers), Cambodia (cervical-cancer detection), and India (language datasets) [238-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Hamburg Declaration is highlighted as a collective action platform that set training targets and delivered AI building blocks and datasets across Kenya, Cambodia and India [S15][S13].
MAJOR DISCUSSION POINT
Hamburg Declaration tangible outcomes
Argument 4
Trained 190 000 people, delivered 12 AI building blocks (now 15), 30 diagnostics (now 55), and datasets; projects in Kenya, Cambodia, India (Bärbel Kofler)
EXPLANATION
Kofler reiterates the specific quantitative achievements of the Hamburg Declaration, emphasizing the scale of training and the breadth of AI tools and datasets produced for diverse regions.
EVIDENCE
She cites the numbers: 190 000 people trained, 12 AI building blocks (now 15), and 30 diagnostics (now 55), alongside the datasets delivered and collaborations in Kenya, Cambodia, and India for satellite data, cervical-cancer detection, and language resources [238-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same Hamburg Declaration summary reports the quantitative achievements Kofler cites – 190 000 trainees, expanded AI building blocks, diagnostics and datasets for the listed countries [S15].
MAJOR DISCUSSION POINT
Quantified progress of Hamburg Declaration
Arundhati Bhattacharya
4 arguments · 158 words per minute · 1779 words · 673 seconds
Argument 1
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
EXPLANATION
Arundhati stresses that large corporations must adopt self‑regulatory measures and ethical standards to avoid heavy-handed external regulation, ensuring technology benefits users responsibly.
EVIDENCE
She describes Salesforce’s self-regulatory approach, noting its “humane and ethical use of technology” office and the need for self-regulation to prevent strict regulator intervention [280-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions note the importance of corporate self-regulation through offices of humane and ethical technology use to avoid heavy-handed regulation [S15][S27][S28].
MAJOR DISCUSSION POINT
Corporate self‑regulation
DISAGREED WITH
Bärbel Kofler, Nakul Jain
Argument 2
Policy makers must ensure ethical infrastructure, privacy and inclusive policies (Arundhati Bhattacharya)
EXPLANATION
Arundhati argues that policymakers must create ethical frameworks, protect data privacy, and build inclusive infrastructure so that AI adoption improves lives without exploitation.
EVIDENCE
She states that policy makers must ensure ethical infrastructure, privacy protection, and that offerings are not exploitative, emphasizing the role of government in providing the right infrastructure and safeguarding privacy [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy-maker responsibilities for inclusive governance, privacy safeguards and ethical AI infrastructure are emphasized in the session on building inclusive governance [S29].
MAJOR DISCUSSION POINT
Policy responsibility for ethical AI
AGREED WITH
Robert Opp, Bärbel Kofler, Nakul Jain, Speaker 1
Argument 3
Salesforce’s democratization agenda, skilling millions, 1 % profit/product/time pledge (Arundhati Bhattacharya)
EXPLANATION
Arundhati outlines Salesforce’s commitment to democratize technology through a 1 % pledge of profit, product, and employee time, and through large‑scale skilling programmes that have created millions of certified “Trailblazers”.
EVIDENCE
She details the 1 % pledge (profit, product, time) and notes that Salesforce has trained 3.9 million Trailblazers in India, the second largest cohort after the US, and that the company actively skills people and provides products to the nonprofit sector [96-104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Salesforce’s 1-1-1 model (1 % profit, product, employee time) and the training of 3.9 million “Trailblazers” in India are documented in the discussion of corporate democratization efforts [S29].
MAJOR DISCUSSION POINT
Democratization through corporate pledges and skilling
DISAGREED WITH
Bärbel Kofler, Nakul Jain
Argument 4
Self‑regulation, co‑innovation, and skilling missions through industry bodies (Arundhati Bhattacharya)
EXPLANATION
She highlights the importance of industry‑led self‑regulation, co‑innovation with customers, and participation in national skilling missions and academic partnerships to broaden AI impact.
EVIDENCE
She mentions Salesforce’s self-regulatory office, co-innovation with customers, participation in the National Skilling Mission, internships with AICTE, and collaborations with apex bodies to reach colleges and communities [280-288].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session highlights industry-led self-regulation, co-innovation with customers and participation in national skilling missions as key to broadening AI impact [S27][S28].
MAJOR DISCUSSION POINT
Industry‑driven co‑innovation and skilling
Nakul Jain
7 arguments · 170 words per minute · 1535 words · 539 seconds
Argument 1
AI assurance and regional evaluation hubs (Nakul Jain)
EXPLANATION
Nakul proposes establishing regional AI assurance hubs that would provide standardized evaluation frameworks, enabling solutions to be assessed and transferred across countries.
EVIDENCE
He suggests creating regional evaluation hubs to set common yardsticks for AI solutions, noting the difficulty of evaluating across borders and the need for such collaboration [298-301].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A dedicated segment on “Ensuring Safe AI – Monitoring Agents to Bridge the Global Assurance Gap” proposes regional AI assurance hubs to standardise evaluation across borders [S30].
MAJOR DISCUSSION POINT
Regional AI evaluation infrastructure
Argument 2
Institutional mechanisms to embed AI solutions in ministries (Nakul Jain)
EXPLANATION
He argues that embedding AI applications must be planned from day one, with institutional mechanisms that integrate solutions into existing government programmes and ministries.
EVIDENCE
He describes the need for institutional mechanisms to embed AI from day one, citing examples of education reading-fluency tools and health TB projects that required government ownership, data availability, and integration with existing systems [149-164].
MAJOR DISCUSSION POINT
Embedding AI in government structures
AGREED WITH
Robert Opp, Bärbel Kofler, Arundhati Bhattacharya, Speaker 1
Argument 3
Education and health pilots that required government, technical partners and NGOs (Nakul Jain)
EXPLANATION
Nakul shares concrete pilots in education and healthcare that succeeded only through multi‑stakeholder collaboration among government, technical partners, and NGOs.
EVIDENCE
He details an oral-reading-fluency education pilot in Gujarat that involved government data, technical partners, and teacher capacity-building, and a tuberculosis health product that involved ICMR for evaluation, illustrating the need for cross-sector collaboration [152-168].
MAJOR DISCUSSION POINT
Multi‑stakeholder pilots in education and health
Argument 4
Wadwani AI as a convener, scaling solutions and hand‑holding for impact (Nakul Jain)
EXPLANATION
Nakul describes Wadwani AI Global as a convening entity that brings together governments, private sector, and NGOs to ensure AI solutions move from labs to field impact, providing hand‑holding and capacity‑building.
EVIDENCE
He states that Wadwani AI sees itself as a convener that works with governments, provides advisory, capacity-building, and product development, and ensures solutions are not confined to labs but reach the field [173-175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nakul Jain’s role at Wadwani AI Global as a convener that supports governments, private sector and NGOs to move AI solutions from labs to field is described in the summit overview [S18].
MAJOR DISCUSSION POINT
Convener role for impact‑driven AI
DISAGREED WITH
Bärbel Kofler, Arundhati Bhattacharya
Argument 5
LLMs often unsuitable for low‑resource settings; traditional ML works better (Nakul Jain)
EXPLANATION
Nakul observes that large language models are often impractical in low‑resource environments due to infrastructure constraints, whereas traditional machine‑learning models are more deployable and effective.
EVIDENCE
He explains that in low-resource settings with basic mobile phones and limited internet, traditional ML models have performed better, while LLMs face deployment challenges, leading to a shift toward small, task-specific models [363-365].
MAJOR DISCUSSION POINT
Suitability of ML vs LLM in low‑resource contexts
DISAGREED WITH
Robert Opp, Arundhati Bhattacharya
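Nakul's contrast between LLMs and small task-specific models can be illustrated with a minimal sketch (illustrative only, not code discussed in the session; the task, labels, and data are hypothetical stand-ins): a bag-of-words naive Bayes classifier written with nothing but the Python standard library, which trains instantly, serialises to a few hundred bytes, and runs offline with no GPU or network call, unlike an LLM that needs gigabytes of weights or a remote API.

```python
# Hypothetical sketch of a small, task-specific model for low-resource
# deployment. Stdlib only: no GPU, no internet, tiny on-disk footprint.
import math
import pickle
from collections import Counter

def train_nb(samples):
    """Train a multinomial naive Bayes classifier over bag-of-words."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    for text, label in samples:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts[0]) | set(word_counts[1])
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Pick the class with the highest log prior + smoothed log likelihood."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy stand-in for a narrow field task, e.g. flagging reports for follow-up.
data = [("persistent cough and fever", 1), ("night sweats and cough", 1),
        ("mild headache", 0), ("sore throat", 0)]
model = train_nb(data)
print(predict_nb(model, "chronic cough with fever"))  # → 1 (flag)
print(len(pickle.dumps(model)), "bytes")              # a few hundred bytes
```

The whole serialised model fits in a single SMS-sized payload, which is the deployment property Nakul points to: in settings with basic phones and intermittent connectivity, a narrow model like this can ship inside the app itself.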
Argument 6
Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
EXPLANATION
He calls for a global repository or marketplace where AI solutions, playbooks, and governance frameworks can be shared, facilitating cross‑border deployment and scaling.
EVIDENCE
He points out the lack of a global repository, proposing a marketplace that includes shared solutions, playbooks, governance frameworks, and talent pools to help startups deploy tools internationally [293-298].
MAJOR DISCUSSION POINT
Global AI solution marketplace
DISAGREED WITH
Bärbel Kofler, Arundhati Bhattacharya
Argument 7
Regional AI assurance hubs to standardise evaluation across countries (Nakul Jain)
EXPLANATION
He reiterates the need for regional hubs that would provide consistent evaluation criteria, enabling AI solutions to be assessed and transferred between countries.
EVIDENCE
He again mentions the opportunity to create regional evaluation hubs that would set common evaluation yardsticks for AI solutions across borders [298-301].
MAJOR DISCUSSION POINT
Standardised regional AI evaluation
Speaker 1
5 arguments · 139 words per minute · 853 words · 367 seconds
Argument 1
Greening AI and trustworthy AI platforms (Speaker 1)
EXPLANATION
Speaker 1 emphasizes the need to make AI environmentally sustainable and introduces a trustworthy platform that helps engineers build responsible AI.
EVIDENCE
He mentions the development of the “Trusty Platform” to evaluate and calibrate AI, and notes work on greening AI through resource-aware AI initiatives [270-274].
MAJOR DISCUSSION POINT
Sustainable and trustworthy AI
Argument 2
National AI task‑force, AI Centres of Excellence, and compute strategies (Speaker 1)
EXPLANATION
He describes India’s national AI task‑force, the establishment of AI Centres of Excellence, and strategies to address compute gaps between the Global North and South.
EVIDENCE
He recounts participation in a task-force led by the principal scientific advisor and the creation of AI Centres of Excellence with IIT Kanpur, and proposes repurposing legacy hardware and exploring new hardware such as quantum valleys to bridge compute disparities [200-219].
MAJOR DISCUSSION POINT
National AI infrastructure and compute
Argument 3
Cross‑sector collaborations for AI diagnostics, climate data, and language resources (Speaker 1)
EXPLANATION
Speaker 1 outlines collaborations across countries and sectors delivering AI diagnostics, climate‑related satellite data for farmers, and multilingual language datasets.
EVIDENCE
He cites projects in Kenya (satellite data for farmers), Cambodia (cervical-cancer diagnostics), and India (language datasets supporting AI frameworks) [250-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multistakeholder partnership summary lists cross-country projects delivering AI diagnostics, satellite climate data for farmers and multilingual language datasets in Kenya, Cambodia and India [S15].
MAJOR DISCUSSION POINT
Cross‑sector AI applications
Argument 4
Need for industry‑specific, task‑specific AI and resolution of data silos (Speaker 1)
EXPLANATION
He argues that generic LLMs are insufficient; instead, industry‑specific, task‑focused AI solutions are needed, and data silos must be addressed to enable high‑quality data collection.
EVIDENCE
He discusses fragmented data silos, the need for extensive sensor infrastructure for air-pollution monitoring, and the importance of task-specific AI and agentic AI to connect enterprise systems [192-199][353-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion points stress fragmented data silos and the necessity for industry-specific, task-focused AI solutions to unlock high-quality data, echoing the speaker’s argument [S15][S26].
MAJOR DISCUSSION POINT
Task‑specific AI and data integration
Argument 5
TCS’s Trusty Platform, open‑data ecosystem and greening AI initiatives (Speaker 1)
EXPLANATION
Speaker 1 highlights TCS’s contributions, including the Trusty Platform for responsible AI, building an open‑data ecosystem, and initiatives to reduce AI’s carbon footprint.
EVIDENCE
He mentions the Trusty Platform for evaluating AI, the focus on greening AI through resource-aware approaches, and TCS’s endorsement of the Hamburg Declaration [265-274].
MAJOR DISCUSSION POINT
TCS responsible AI tools
Audience Member 1
1 argument · 128 words per minute · 43 words · 20 seconds
Argument 1
Question on comparative value of LLMs versus traditional ML (Audience Member 1)
EXPLANATION
The audience member asks for clarification on the added value of large language models compared with traditional machine‑learning approaches.
EVIDENCE
The question is phrased: “what is the value being added in comparison to LLMs versus traditional ML?” [311-313].
MAJOR DISCUSSION POINT
LLM vs. traditional ML value
Audience Member 2
1 argument · 139 words per minute · 43 words · 18 seconds
Argument 1
Need to unify fragmented initiatives for greater impact (Audience Member 2)
EXPLANATION
The audience member asks how competing, fragmented initiatives can be coordinated to achieve a shared purpose and larger impact.
EVIDENCE
The question states: “There are fragmented initiatives… they are competing. So how do we bring them together for a shared purpose?” [317-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session on multistakeholder partnerships calls for collective action and coordination of fragmented AI initiatives under the Hamburg Declaration framework [S15].
MAJOR DISCUSSION POINT
Coordination of fragmented initiatives
Audience Member 3
1 argument · 183 words per minute · 211 words · 69 seconds
Argument 1
Concerns about SaaS valuation, democratisation and value creation for small enterprises (Audience Member 3)
EXPLANATION
The audience member seeks insight on how SaaS business models will evolve, whether democratization will benefit small enterprises, and how valuation trends affect them.
EVIDENCE
The question asks about the future of SaaS, valuation impacts, and how to create value for small enterprises in the context of technology democratization [326-333].
MAJOR DISCUSSION POINT
SaaS valuation and democratization
Agreements
Agreement Points
All speakers stress the need for responsible AI governance and legal/policy frameworks to prevent widening equity or power gaps.
Speakers: Robert Opp, Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain, Speaker 1
AI equity gap and need for responsible use (Robert Opp)
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Policy makers must ensure ethical infrastructure, privacy and inclusive policies (Arundhati Bhattacharya)
Institutional mechanisms to embed AI solutions in ministries (Nakul Jain)
National AI task‑force, AI Centres of Excellence and compute strategies (Speaker 1)
The panel repeatedly highlighted that without clear governance, legal frameworks and coordinated policy action, AI risks deepening existing inequities; governments, industry and academia must jointly create enforceable commitments and standards [1-5][49-58][124-129][149-164][200-203].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the fit-for-purpose AI governance discussions at the IGF and aligns with policy-lever recommendations for bridging the AI divide presented in recent multistakeholder forums [S59][S47].
Multi‑stakeholder collaboration is essential for deploying AI responsibly and at scale.
Speakers: Robert Opp, Bärbel Kofler, Nakul Jain, Arundhati Bhattacharya, Speaker 1
Framework and governance, important factors to have in the multi‑stakeholder partnerships (Robert Opp)
Multistakeholder engagement is required from all parts of society (Bärbel Kofler)
Need for a multi‑stakeholder ecosystem that brings expertise around the technology (Nakul Jain)
Co‑innovation with customers and participation in industry‑led skilling missions (Arundhati Bhattacharya)
Cross‑sector collaborations for AI diagnostics, climate data and language resources (Speaker 1)
All participants agreed that AI solutions only succeed when governments, private firms, civil society and academia work together from the outset, sharing data, expertise and implementation pathways [83-84][38-40][147-152][280-288][250-257].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder partnerships are repeatedly highlighted, for example in the UN-led AI ecosystem session on power gaps [S48] and the Digital Public Infrastructure report emphasizing stakeholder involvement in policy formulation [S63][S66].
Capacity building and large‑scale training are critical to democratize AI benefits.
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain, Speaker 1
Trained 190,000 people, exceeding the Hamburg Declaration target (Bärbel Kofler)
Salesforce’s 1 % pledge and 3.9 million “Trailblazers” skilling programme (Arundhati Bhattacharya)
Teacher capacity‑building as part of education pilots (Nakul Jain)
Need for vocational and university training, and skill development for all users (Speaker 1)
The panelists cited concrete numbers and programmes that illustrate a shared commitment to up-skilling millions of users, teachers and professionals to ensure AI is usable and inclusive [238-241][96-104][101-104][165-166][75-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity constraints were identified as a bottleneck in AI deployment for developing economies [S55], while the U.S. “worker-first” AI agenda and broader talent-development strategies stress large-scale training as essential for democratization [S64][S65].
Open‑source data, open‑data ecosystems and shared resources are necessary to close the AI divide.
Speakers: Bärbel Kofler, Speaker 1, Arundhati Bhattacharya
Investing in open‑source and making datasets publicly available (Bärbel Kofler)
Building an open‑data ecosystem and addressing fragmented data silos (Speaker 1)
Democratizing technology by providing product access to non‑profits and NGOs (Arundhati Bhattacharya)
All three highlighted that open-source tools, open datasets and free product access are key levers to make AI usable across regions and sectors [80-81][247-257][192-199][95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source approaches are advocated by META’s open LLM release [S56], by digital public-goods advocates promoting sovereignty over proprietary tools [S68], and by policy recommendations to default to open source in public procurement [S70].
Similar Viewpoints
Both warned that AI’s current trajectory risks widening existing inequities and called for concrete governance mechanisms to prevent a widening AI equity/power gap [1-5][49-58].
Speakers: Robert Opp, Bärbel Kofler
AI equity gap and power gap
Need for responsible AI governance
Both emphasized that governments should set the legal and policy foundations while the private sector adopts self‑regulation to ensure ethical AI deployment [124-129][280-283].
Speakers: Arundhati Bhattacharya, Bärbel Kofler
Policy makers must ensure ethical infrastructure, privacy and inclusive policies
Government must create enabling environment, legal frameworks and open‑source resources
Both advocated for dedicated evaluation mechanisms—regional hubs or platforms—that provide standardized, trustworthy assessment of AI solutions before scaling [298-301][270-274].
Speakers: Nakul Jain, Speaker 1
AI assurance and regional evaluation hubs
Trusty Platform for responsible AI and evaluation
Unexpected Consensus
Alignment of government and private‑sector leaders on self‑regulation combined with formal, measurable commitments.
Speakers: Bärbel Kofler, Arundhati Bhattacharya
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
Hamburg Declaration commitments with concrete, measurable duties (Bärbel Kofler)
While governments typically push for external regulation, Kofler highlighted that the Hamburg Declaration requires concrete, self-imposed duties, and Bhattacharya argued that large corporations must self-regulate to avoid heavy-handed external rules; both converged on a hybrid model of self-regulation backed by enforceable commitments [280-283][235-241].
POLICY CONTEXT (KNOWLEDGE BASE)
Both government and human-rights representatives endorsed private-sector self-regulation within broader regulatory frameworks, and global AI standards bodies report high consensus on collaborative self-regulation [S51][S52].
Creation of a global marketplace/repository for AI solutions and playbooks.
Speakers: Nakul Jain, Speaker 1
Creation of a global repository of solutions, shared playbooks, governance frameworks (Nakul Jain)
Building an open‑data ecosystem and open‑source resources (Speaker 1)
Both identified the lack of a centralized platform for sharing AI tools, methodologies and data as a barrier; Nakul proposed a marketplace, while Speaker 1 called for an open-data ecosystem, indicating a shared vision for a global, reusable AI resource hub [293-298][192-199].
Overall Assessment

The panel displayed strong consensus on four pillars: (1) the necessity of robust, multi‑stakeholder governance frameworks to prevent AI‑driven inequities; (2) the centrality of collaborative partnerships across government, industry, academia and civil society; (3) the urgency of large‑scale training and capacity‑building programmes; and (4) the importance of open‑source data and shared resources to democratize AI. These convergences suggest a solid foundation for collective action and signal that future policy and programme design can build on shared commitments such as the Hamburg Declaration.

High consensus – most speakers reiterated the same themes with concrete examples, indicating a unified stance that can drive coordinated implementation of responsible AI for sustainable development.

Differences
Different Viewpoints
Who should bear the primary responsibility for governing and deploying AI for development
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
Wadwani AI as a convener, scaling solutions and hand‑holding for impact (Nakul Jain)
Kofler argues that governments must lead by creating legal frameworks, open-source resources and training programmes to close the power gap [55-58][61-63][75-80]. Bhattacharya stresses that large corporations must self-regulate, adopt ethical standards and work with policy makers to ensure responsible AI deployment, highlighting Salesforce’s 1 % pledge and skilling initiatives [280-283][96-104][101-104]. Jain positions his organisation as a neutral convener that brings together governments, private firms and NGOs, proposing marketplaces and evaluation hubs rather than a single lead actor [173-175][298-301]. All agree on the need for multi-stakeholder action but disagree on which actor should drive the process.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF and UN-IDO sessions stress shared responsibility among governments, private sector, and civil society, emphasizing science-based, multi-stakeholder governance models [S59][S61][S63].
Preferred method for closing the AI equity / power gap
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Salesforce’s democratization agenda, skilling millions, 1 % profit/product/time pledge (Arundhati Bhattacharya)
Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
Kofler points to structural imbalances – only 17 % of venture capital and 0.1 % of data-centre capacity are in the Global South – and calls for legal frameworks, open-source investment and government-led training to close the gap [55-58][80-81]. Bhattacharya proposes corporate-driven democratization through large-scale skilling (3.9 million Trailblazers) and a 1 % pledge of profit, product and employee time to make technology accessible [96-104][101-104]. Jain suggests building a global repository/marketplace of AI solutions, playbooks and talent pools to enable cross-border deployment, thereby addressing the gap through shared resources rather than direct investment [293-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Speakers at AI Impact Summits and multistakeholder forums identified proactive policy levers, infrastructure investment, and capacity-building as key methods to narrow the equity and power gap [S46][S47][S48].
Technical suitability of large language models (LLMs) versus traditional machine‑learning in low‑resource contexts
Speakers: Nakul Jain, Robert Opp, Arundhati Bhattacharya
LLMs often unsuitable for low‑resource settings; traditional ML works better (Nakul Jain)
AI can have a powerful impact on sustainable development in a positive way (Robert Opp)
AI is going to solve a lot of problems which otherwise in a populous nation like ours cannot be solved (Arundhati Bhattacharya)
Jain reports that in environments with basic mobile phones and limited internet, traditional ML models are more deployable, while LLMs face infrastructure constraints and often cannot be used, prompting a shift to smaller, task-specific models [363-365]. By contrast, Opp and Bhattacharya speak of AI’s broad transformative potential without distinguishing model types, implying that LLMs are part of the solution for large-scale challenges [1-5][121-123]. This creates a disagreement on the practicality of LLMs for development-focused AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses highlight LLM limitations for low-resource languages and the need for human moderation, as documented in content-moderation risk studies and performance critiques of current AI systems [S53][S54][S57].
Open‑source versus proprietary/partner‑driven AI solutions
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
Kofler emphasizes investment in open-source AI to make data sets and tools freely available [80-81]. Bhattacharya describes Salesforce’s model of providing proprietary products to the nonprofit sector while ensuring access and training, relying on corporate licences rather than open-source distribution [98-104]. Jain proposes a shared marketplace that could include both open and proprietary components, aiming to aggregate solutions for cross-border use [293-298]. The speakers differ on whether openness or controlled proprietary sharing is the optimal path to widespread AI adoption.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate is reflected in META’s open-source LLM strategy [S56], government advocacy for open digital public goods [S68], and U.S. policy promoting open-source by default in public procurement [S70].
Unexpected Differences
Contrasting views on the practicality of large language models in development contexts
Speakers: Nakul Jain, Robert Opp, Arundhati Bhattacharya
LLMs often unsuitable for low‑resource settings; traditional ML works better (Nakul Jain)
AI can have a powerful impact on sustainable development in a positive way (Robert Opp)
AI is going to solve a lot of problems which otherwise in a populous nation like ours cannot be solved (Arundhati Bhattacharya)
While Opp and Bhattacharya express broad optimism about AI’s transformative potential, Jain points out concrete technical limitations of LLMs in low‑resource environments, favouring traditional ML models. This technical clash was not anticipated given the generally hopeful tone of the other speakers.
POLICY CONTEXT (KNOWLEDGE BASE)
Some experts point to bias, data-skew, and performance gaps of LLMs in low-resource settings, while others argue for human-centric approaches until technology matures [S53][S54][S57].
Open‑source emphasis by a government minister versus a corporate leader’s reliance on proprietary platforms
Speakers: Bärbel Kofler, Arundhati Bhattacharya
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
Kofler explicitly promotes open-source investment as a means to democratise AI [80-81], whereas Bhattacharya’s strategy centres on providing proprietary Salesforce products and a 1 % pledge to the nonprofit sector, suggesting a different philosophy for achieving accessibility. The divergence between open-source advocacy and proprietary-driven democratisation was not foreseen.
Overall Assessment

The panel shows substantial consensus that AI can drive sustainable development and that multi‑stakeholder collaboration is essential. However, there are clear disagreements on who should lead the governance effort, the preferred mechanisms for closing the AI equity gap, the technical suitability of LLMs in low‑resource settings, and whether open‑source or proprietary approaches best achieve democratisation.

Moderate – while all participants share the overarching goal of responsible, inclusive AI, they diverge on strategic pathways and technical choices. These differences could affect policy design, funding allocations and partnership models, potentially slowing coordinated action unless reconciled.

Partial Agreements
All three agree that multi‑stakeholder collaboration is essential for responsible AI and that capacity building is required, but they diverge on the mechanism: Kofler favours government‑led legal frameworks and open‑source; Bhattacharya favours corporate self‑regulation, skilling and product access; Jain favours a neutral convener role with marketplaces and evaluation hubs [47-53][61-63][75-80][280-283][96-104][173-175][298-301].
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
Wadwani AI as a convener, scaling solutions and hand‑holding for impact (Nakul Jain)
All agree that large‑scale training/skilling is crucial for AI adoption. Kofler cites a government commitment to train 160,000 people, exceeded with 190,000 trained [238-241]; Bhattacharya highlights Salesforce’s 3.9 million Trailblazers programme [101-104]; Jain stresses the need for a global repository to help startups scale solutions internationally, which would also require extensive skill development [293-298]. The disagreement lies in who should fund and deliver the training.
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Government must create enabling environment, training, and data access (Bärbel Kofler)
Salesforce’s democratization agenda, skilling millions, 1 % profit/product/time pledge (Arundhati Bhattacharya)
Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
All concur that ethical AI deployment requires oversight, but differ on the oversight model: Kofler calls for government‑mandated legal frameworks and open‑source transparency; Bhattacharya promotes corporate self‑regulation to pre‑empt heavy regulation; Jain proposes independent regional AI assurance hubs to standardise evaluation across borders [280-283][298-301][80-81].
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
AI assurance and regional evaluation hubs (Nakul Jain)
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Takeaways
Key takeaways
AI can accelerate sustainable development, but without responsible governance it risks widening equity and power gaps.
A multi‑stakeholder approach involving government, private sector, civil society, academia, and international organisations is essential for inclusive AI ecosystems.
Governments must provide legal frameworks, data access, training, and infrastructure (e.g., energy, compute, sensing) to enable equitable AI deployment.
Private‑sector firms can democratize technology through skilling programmes, open‑source tools, and self‑regulation, but need supportive policy and partnership.
AI assurance, regional evaluation hubs, and trustworthy‑AI platforms are needed to certify impact and mitigate risks.
Greening AI and responsible‑by‑design practices are becoming integral parts of AI strategies.
Traditional ML often outperforms large language models (LLMs) in low‑resource settings; task‑specific, lightweight models remain important.
The Hamburg Declaration provides concrete, measurable commitments (training, AI building blocks, diagnostics, datasets) and serves as a model for collective action.
Resolutions and action items
Germany’s Federal Ministry for Economic Cooperation and Development (BMZ) fulfilled and exceeded its Hamburg Declaration commitment to train 160,000 people, achieving 190,000 trained.
BMZ delivered 12 AI building blocks for climate action, 30 AI diagnostics, and 55 open datasets, with pilot projects in Kenya, Cambodia, and India.
Salesforce continues its 1 % profit/product/time pledge, expands the Trailblazer community (3.9 M members in India), and partners with national skilling missions and AICTE for AI education.
Wadwani AI Global will act as a convener, creating a global repository/marketplace for AI solutions, shared playbooks, and talent pools.
Tata Consultancy Services (TCS) endorsed the Hamburg Declaration, launched the Trusty Platform for responsible‑AI evaluation, and is collaborating with Carnegie Mellon on AI‑trust research.
Proposal to repurpose legacy hardware and develop a “quantum valley” in India to address compute disparities between Global North and South.
Call for the establishment of regional AI assurance/evaluation hubs to standardise impact assessment across countries.
Invitation for additional organisations to endorse the Hamburg Declaration via the QR‑code link.
Unresolved issues
How to operationalise a global marketplace for AI solutions and shared playbooks, including mechanisms for licensing, deployment, and revenue sharing.
Specific design and governance of regional AI assurance hubs – standards, funding, and coordination remain undefined.
Ways to effectively unify fragmented initiatives and enabling services without stifling healthy competition.
Detailed guidance on when to use LLMs versus traditional ML in low‑resource environments; scalability and deployment pathways are still open questions.
Long‑term financing and sustainability models for open‑source AI building blocks and data resources.
Responses to audience queries about SaaS valuation trends and the impact of AI on small enterprises were only partially addressed.
Suggested compromises
Private‑sector self‑regulation combined with government‑led frameworks to avoid heavy‑handed regulation while ensuring ethical standards.
Embedding AI solutions within existing government programmes (e.g., education, health) rather than creating parallel pilots.
Balancing competition and collaboration: encouraging consortia for shared infrastructure while allowing market competition for innovation.
Using lightweight, task‑specific ML models where LLMs are impractical, thereby matching technology to resource constraints.
Leveraging both open‑source AI components and proprietary innovations to broaden access while sustaining commercial incentives.
Thought Provoking Comments
It’s not an innovation gap, it’s a power gap. Only 17 % of venture capital flows to the Global South, which holds over 90 % of the world’s population, and only 0.1 % of data‑centre capacity is located there.
Highlights structural inequities that underlie AI adoption, reframing the problem from lack of technology to concentration of power and resources.
Shifted the conversation from abstract benefits of AI to concrete systemic barriers, prompting later speakers to discuss concrete measures (e.g., training, open‑source, infrastructure) to address the power imbalance.
Speaker: Bärbel Kofler
The financial‑inclusion programme succeeded only after technology (UIDAI, mobile networks, UPI) enabled direct subsidies, turning empty bank accounts into cash‑flow assets that banks would lend against.
Provides a vivid, data‑driven case study showing how technology can transform poverty alleviation, linking AI potential to real‑world outcomes and underscoring the role of policy and infrastructure.
Reinforced the need for government‑backed frameworks and sparked the discussion on responsible deployment, leading to deeper talk about policy makers’ responsibilities and the importance of skilling.
Speaker: Arundhati Bhattacharya
Technology is the easiest part; the real challenge is building a multi‑stakeholder ecosystem from day one that institutionalises the solution, embeds it in existing government processes, and provides monitoring and hand‑holding for users.
Moves the focus from building AI models to the surrounding institutional and operational context, emphasizing that impact depends on governance, not just tech.
Created a turning point toward concrete examples of successful partnerships (education reading‑fluency tool, TB health‑care), and set the stage for later suggestions about marketplaces and evaluation hubs.
Speaker: Nakul Jain
We have already trained 190,000 people (exceeding our 160,000 target), opened 12 AI building blocks, delivered 30 AI diagnostics and 55 datasets, and are working on satellite‑data services for farmers in Kenya and cervical‑cancer detection in Cambodia.
Demonstrates measurable progress from the Hamburg Declaration, turning a high‑level policy document into tangible outcomes.
Provided concrete evidence that collective commitments can be operationalised, encouraging other panelists to reference their own implementation efforts and reinforcing the call for more measurable pledges.
Speaker: Bärbel Kofler
We should create a global repository/marketplace of AI solutions and regional evaluation hubs so that a startup in India can sell a tool in Ethiopia and have clear, harmonised evaluation criteria.
Identifies a practical infrastructure gap that hampers scaling of responsible AI across borders, proposing a concrete mechanism to accelerate diffusion.
Introduced a new topic—cross‑border solution marketplaces—that broadened the discussion from national pilots to a global ecosystem, prompting other speakers to think about standardisation and shared governance.
Speaker: Nakul Jain
Large corporations need a self‑regulatory function for humane and ethical use of technology; otherwise regulators will impose heavy‑handed rules that could stifle innovation.
Frames corporate responsibility as a proactive safeguard against future regulation, linking ethical practice to business sustainability.
Shifted the tone toward pre‑emptive governance, influencing the later dialogue about private‑sector roles, skilling missions, and the balance between competition and collaboration.
Speaker: Arundhati Bhattacharya
In low‑resource settings, traditional ML models often outperform large language models because of bandwidth, device, and power constraints; we are moving toward small, purpose‑built AI rather than general‑purpose LLMs.
Challenges the prevailing hype around LLMs by grounding the conversation in the realities of deployment in the Global South.
Redirected the technical discussion to suitability of AI approaches for development contexts, reinforcing the earlier point about power gaps and influencing the audience’s perception of what AI solutions are feasible.
Speaker: Nakul Jain
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract optimism about AI to a nuanced examination of systemic inequities, concrete implementation challenges, and actionable solutions. Bärbel Kofler’s framing of the ‘power gap’ set the problem definition; Arundhati Bhattacharya’s financial‑inclusion story illustrated the transformative potential when policy and technology align; Nakul Jain’s emphasis on ecosystem design and his marketplace proposal provided a roadmap for scaling impact; and the Hamburg Declaration updates supplied proof that collective commitments can translate into measurable outcomes. Together, these comments reshaped the conversation, prompting participants to focus on governance, infrastructure, and cross‑sector collaboration rather than technology alone, and they anchored the panel’s recommendations in real‑world examples and actionable next steps.

Follow-up Questions
How can a global repository or marketplace of AI solutions, playbooks, and governance frameworks be created to enable startups to sell and deploy tools across countries (e.g., from India to Ethiopia)?
A shared platform would reduce barriers to scaling AI for development and facilitate cross‑border collaboration, addressing the current difficulty of international deployment.
Speaker: Nakul Jain
What would be the design and governance model for regional AI assurance/evaluation hubs that can provide consistent evaluation criteria for AI solutions across different countries?
Standardized evaluation would help ensure that AI tools meet local regulatory and ethical standards, enabling smoother adoption in multiple regions.
Speaker: Nakul Jain
What are the technical and economic requirements for building affordable, high‑quality, high‑volume, high‑velocity sensing infrastructure (e.g., air‑pollution sensors) to feed AI systems?
Data quality is a bottleneck for AI; research is needed to develop cost‑effective sensor networks that can generate reliable data at scale.
Speaker: Sachin Loda
How can legacy hardware be repurposed effectively for AI workloads in the Global South to narrow the compute gap?
Leveraging existing hardware could provide a pragmatic path to increase AI compute capacity without the expense of the latest equipment.
Speaker: Sachin Loda
What opportunities does emerging quantum hardware (e.g., the proposed ‘quantum valley’ in India) present for sustainable AI applications?
Exploring quantum‑enabled AI could unlock new efficiencies and reduce energy consumption, aligning with greening‑AI goals.
Speaker: Sachin Loda
What metrics and impact assessments are needed to evaluate the effectiveness of the AI building blocks, AI diagnostics, and data sets released under the Hamburg Declaration?
Quantifying outcomes will determine whether the declared commitments translate into real‑world sustainable development benefits.
Speaker: Bärbel Kofler
How can open‑source AI resources and open data ecosystems be expanded to close the power gap between the Global North and South?
Open resources can democratize access to AI technology, mitigating inequities in venture capital and data‑center distribution.
Speaker: Bärbel Kofler
What governance frameworks are required to ensure equitable AI deployment in the Global South, addressing the identified AI equity and power gaps?
Effective policies are needed to prevent AI from widening existing socioeconomic disparities.
Speaker: Bärbel Kofler
What is the comparative performance and suitability of traditional machine‑learning models versus large language models (LLMs) in low‑resource, low‑connectivity environments?
Understanding which approaches work best in constrained settings will guide technology choices for impact‑focused deployments.
Speaker: Nakul Jain
How can fragmented AI initiatives and enabling services be coordinated into a shared purpose to avoid competition and increase collective impact?
Co‑ordination mechanisms are needed to align diverse projects, maximize resource use, and achieve larger development outcomes.
Speaker: Audience Member 2 (directed to Arundhati Bhattacharya)
What strategies are needed to mitigate potential job displacement (e.g., teachers) caused by AI adoption and to provide appropriate hand‑holding and capacity‑building at the field level?
Ensuring that AI augments rather than replaces workers is crucial for sustainable adoption and social acceptance.
Speaker: Nakul Jain
What role should self‑regulation play for large tech firms versus external regulatory oversight to ensure ethical AI deployment?
Balancing internal governance with public regulation can prevent heavy‑handed interventions while maintaining responsible innovation.
Speaker: Arundhati Bhattacharya
How effective are large‑scale skilling programs (e.g., Salesforce’s Trailblazer community) in creating AI‑ready workforces and driving technology adoption in developing economies?
Evaluating skilling outcomes will inform how best to build human capital for AI‑driven development.
Speaker: Arundhati Bhattacharya
What methodologies can be employed to measure and reduce the environmental footprint of AI systems (greening AI), including resource‑aware model design and energy consumption tracking?
Quantifying AI’s carbon impact is essential for aligning AI development with sustainability goals.
Speaker: Sachin Loda (TCS)
What concrete indicators can track progress of collective action on responsible AI after the Hamburg Declaration, and how can they be operationalized across stakeholder groups?
Defining clear, measurable indicators will enable monitoring of commitments and guide future scaling efforts.
Speaker: Robert Opp (question to Bärbel Kofler)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.