Multistakeholder Partnerships for Thriving AI Ecosystems
20 Feb 2026 15:00h - 16:00h
Session at a glance
Summary
This discussion focused on the critical role of multi-stakeholder partnerships in developing and deploying responsible AI for sustainable development, featuring perspectives from government, the private sector, and implementation organizations. The panel was moderated by Robert Opp from the UN Development Programme and included representatives from Germany’s Federal Ministry for Economic Cooperation and Development, Salesforce, Wadhwani AI Global, and Tata Consultancy Services.
Dr. Bärbel Kofler emphasized that governments must create frameworks and governance structures to ensure AI benefits are accessible to all, noting significant gaps in venture capital distribution and data center resources between the Global North and South. She stressed the need for investment in skills training and open-source solutions to prevent AI from widening existing inequalities. Arundhati Bhattacharya from Salesforce highlighted how technology democratization enabled India’s successful financial inclusion program, demonstrating that adoption occurs naturally when technology improves people’s lives, but requires proper policy frameworks and infrastructure to ensure ethical implementation.
Nakul Jain from Wadhwani AI Global shared practical examples of successful AI deployments in education and healthcare, emphasizing that building technology is the easiest part while creating sustainable institutional mechanisms and partnerships is the real challenge. He described their role as conveners who facilitate collaboration between governments, technical partners, and field-level implementers. Dr. Sachin Lodha from TCS discussed the importance of sensing infrastructure and data ecosystems, highlighting collaborative initiatives like quantum valleys and responsible AI evaluation platforms.
The discussion concluded with promotion of the Hamburg Declaration on AI for Sustainable Development, which requires signatories to make concrete, measurable commitments rather than just endorsing principles. The panelists agreed that effective AI deployment for development requires coordinated action across all sectors of society, with each stakeholder bringing unique capabilities while working toward shared objectives.
Key points
Major Discussion Points:
– Multi-stakeholder partnerships are essential for responsible AI development: The discussion emphasized that no single entity – whether government, private sector, or civil society – can address AI challenges alone. Each stakeholder brings unique capabilities, with governments providing frameworks and governance, private sector offering innovation and scaling, and organizations like Wadhwani AI serving as conveners and integrators.
– Addressing the AI equity gap and power imbalances: Speakers highlighted significant disparities in AI access, with only 17% of venture capital reaching regions representing 90% of the global population, and only 0.1% of data center capacity located in the Global South. The focus was on democratizing AI technology rather than just innovating it.
– Moving from principles to concrete, measurable actions: The Hamburg Declaration on AI for Sustainable Development was presented as a model for translating high-level commitments into specific, trackable outcomes. Examples included Germany’s commitment to train 160,000 people (achieving 190,000) and creating AI building blocks for climate action.
– Real-world implementation challenges and successes: Panelists shared specific examples of successful AI deployments, such as India’s financial inclusion program using biometric identification and mobile networks, and educational AI tools in Gujarat that required collaboration between government, technical partners, and implementation organizations.
– Infrastructure and capacity building as foundational requirements: Discussion covered the need for sensing infrastructure, data accessibility, compute resources, and skills training. Emphasis was placed on creating enabling environments where innovation can flourish, including open-source approaches and digital public infrastructure.
Overall Purpose:
The discussion aimed to explore how different stakeholders can collaborate effectively to ensure AI development serves sustainable development goals while being equitable and responsible. The session sought to move beyond theoretical frameworks toward practical strategies for implementing responsible AI at scale, using the Hamburg Declaration as a concrete example of collective action.
Overall Tone:
The tone was constructive and solution-oriented throughout, with speakers building on each other’s points rather than debating. It began with high-level policy perspectives and gradually became more technical and specific as panelists shared real-world experiences. The atmosphere remained collaborative and optimistic, with speakers demonstrating genuine enthusiasm for the potential of responsible AI partnerships while acknowledging significant challenges that need to be addressed.
Speakers
Speakers from the provided list:
– Robert Opp – Representative from the UN Development Programme, moderator of the panel discussion
– Bärbel Kofler – Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, member of the Bundestag since 2004, champion of human rights and development
– Arundhati Bhattacharya – Chairperson and CEO of Salesforce South Asia, former first woman chairperson of the State Bank of India, leader with over four decades of experience in digital transformation
– Nakul Jain – CEO and Managing Director of Wadhwani AI Global, mission-driven technology leader working on AI solutions for underserved communities in the Global South
– Speaker 1 – Chief Scientist and Head of Research at Tata Consultancy Services (identified as Dr. Sachin Lodha based on context), leader in cybersecurity and privacy research, heads work on trustworthy AI and quantum resilience
– Audience Member 1 –
– Audience Member 2 –
– Audience Member 3 – Undergraduate student of economics at University of Delhi
Additional speakers:
None – all speakers mentioned in the transcript are accounted for in the provided speakers names list.
Full session report
This comprehensive discussion on multi-stakeholder partnerships for responsible AI development brought together diverse perspectives from government, private sector, and implementation organisations to explore how artificial intelligence can serve sustainable development goals whilst addressing equity challenges. Moderated by Robert Opp from the UN Development Programme, the panel featured Dr. Bärbel Kofler from Germany’s Federal Ministry for Economic Cooperation and Development, Arundhati Bhattacharya from Salesforce South Asia, Nakul Jain from Wadhwani AI Global, and Dr. Sachin Lodha from Tata Consultancy Services.
The AI Equity Challenge: From Innovation Gap to Power Gap
Robert Opp opened the discussion by highlighting UNDP’s concern that without responsible deployment, AI could exacerbate existing inequalities rather than serve sustainable development goals. This framing set the stage for Dr. Kofler’s crucial reframing of the challenge: “It’s not an innovation gap, it’s a power gap.” She provided stark evidence of this disparity, noting that only 17% of global venture capital reaches regions representing over 90% of the world’s population, while merely 0.1% of data centre capacity exists in the Global South.
This perspective challenged common assumptions about developing countries lacking innovative capacity. Instead, Dr. Kofler argued that innovative people and ideas exist globally, but access to enabling environments—including funding, infrastructure, and markets—remains concentrated in the Global North.
Government’s Role: Creating Enabling Frameworks
Dr. Kofler outlined the government’s responsibility for creating frameworks and legal structures that make AI advantages accessible to all citizens. This includes ensuring smallholder farmers can utilise AI tools, citizens can interact with government services in their own languages, and doctors in remote areas can access advanced diagnostic capabilities.
Government investment extends to skills training, vocational education, and university research programmes that connect academic work with small and medium-sized enterprises. She emphasised ensuring research outcomes, datasets, and AI tools remain available through open-source approaches to prevent concentration of benefits among large players.
Private Sector Democratisation: The India Financial Inclusion Case Study
Arundhati Bhattacharya demonstrated how technology democratisation can transform populations when properly implemented. Drawing from her experience leading India’s largest bank during digital transformation, she articulated a fundamental principle: “Any improvement in technology, if it is not really democratised, then it doesn’t really have an impact.”
Her detailed case study illustrated this principle in action. India’s financial inclusion programme struggled for 15 years until technology enablers—biometric identification through Aadhaar and widespread mobile network coverage—made it possible to reach 600,000 villages directly. This allowed the government to transfer subsidies directly to beneficiaries, eliminating intermediaries who previously captured 87% of intended benefits.
The subsequent introduction of the Unified Payments Interface (UPI) enabled even small shopkeepers to accept digital payments, creating cash flow records that made them eligible for formal credit. This transformed loan terms from 10% per month to 7% per year, demonstrating how technological infrastructure can create transformative economic opportunities.
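As a rough back-of-the-envelope illustration (the arithmetic here is ours, not figures cited in the session), the gap between those two rates on a 2,000-rupee loan is dramatic:

```latex
% Informal credit: 10\% per month on a 2{,}000-rupee loan
2000 \times 0.10 \times 12 = 2400 \ \text{rupees of interest per year (a 120\% simple annual rate)}

% Formal credit: 7\% per year on the same principal
2000 \times 0.07 = 140 \ \text{rupees of interest per year}

% With monthly compounding, the informal rate is steeper still:
(1.10)^{12} - 1 \approx 2.14, \quad \text{i.e. an effective annual rate of roughly } 214\%
```

Even on simple-interest terms, the informal rate costs about seventeen times as much per year; with compounding, the multiple is higher still.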
Bhattacharya emphasised that adoption occurs naturally when technology genuinely improves people’s lives, noting that “adoption is not going to be a problem” when people see real benefits in their daily lives.
Corporate Responsibility and Self-Regulation
From the private sector perspective, Bhattacharya described Salesforce’s “1-1-1 model,” contributing 1% of equity, products, and employee time to community initiatives. The company has trained 3.9 million “Trailblazers” in India—people skilled in Salesforce technology—representing the second-largest such community globally after the United States.
She advocated for corporate self-regulation through dedicated offices for humane and ethical technology use, arguing that proactive self-governance is preferable to heavy-handed external regulation that might stifle innovation.
Implementation Realities: Beyond Technology Development
Nakul Jain provided crucial insight about AI deployment: “Building technology, at least for an application organisation like ours, is the most easiest part of the entire spectrum. It’s everything around it that becomes tedious for us to get done.” This observation highlighted that successful deployment depends more on institutional mechanisms and partnerships than technological sophistication.
Jain shared specific examples of successful collaborations. In education, Wadhwani AI Global developed an oral reading fluency assessment tool for students in partnership with the Gujarat government, which provided data access, policy ownership, and integration with existing educational systems. Rather than creating standalone applications, they embedded AI capabilities within workflows teachers already used.
Healthcare partnerships required collaboration with evaluation agencies like the Indian Council of Medical Research (ICMR) from project inception, ensuring success criteria and evaluation parameters were established from day one. This proved essential for building trust in AI systems impacting human health and safety.
Technical Infrastructure and Practical Considerations
Dr. Sachin Lodha emphasised that effective AI deployment requires comprehensive “sensing infrastructure” as part of digital public infrastructure. Using air pollution monitoring as an example, he explained that better AI outcomes depend on deploying sufficient sensors across regions, making them cheaper and more effective, then using AI to analyse resulting data streams.
Both Jain and Lodha challenged assumptions about Large Language Models versus traditional machine learning approaches. Jain noted that in resource-constrained environments—where mobile phones are basic and internet connectivity limited—traditional ML models often work better than LLMs and are more practically deployable.
Lodha reinforced this perspective, explaining that whilst LLMs represent significant advancement, industry-specific and context-specific solutions still require substantial development work, particularly given data fragmentation within enterprises.
Concrete Action Through the Hamburg Declaration
Dr. Kofler emphasised moving beyond performative commitments through the Hamburg Declaration on AI for Sustainable Development: “We are not signing something we don’t want to do and then we have another year of time and applauding ourselves for signing something. We want to come up really with concrete steps.”
The German ministry exceeded their specific commitments: training 190,000 people instead of 160,000, delivering 15 AI building blocks for climate action instead of 12, and creating 55 datasets instead of 30. These translated into concrete projects including satellite data analysis for Kenyan farmers, cervix cancer detection systems in Cambodia, and support for India’s Bhashini project to include multiple subcontinental languages in AI frameworks.
Future Challenges and Opportunities
The discussion identified critical gaps requiring future partnerships. Jain highlighted the need for a global repository and marketplace of AI solutions enabling startups in one country to deploy innovations in others. Currently, significant barriers prevent a startup in India from selling solutions in Ethiopia, despite having potentially relevant tools.
This extends to evaluation and quality assurance, where the absence of regional AI evaluation hubs creates uncertainty about how solutions assessed in one country might perform in different contexts. Jain proposed creating shared playbooks, governance frameworks, and talent pools that could be leveraged across borders.
Addressing Market Concentration Concerns
An audience member raised concerns about whether AI democratisation would genuinely benefit small enterprises or primarily advantage large technology companies. Bhattacharya addressed this by emphasising that sustainable value creation depends on user adoption and practical utility rather than market capitalisation. She advised focusing on work satisfaction and improving people’s living standards rather than being driven by valuation metrics.
Multi-Stakeholder Coordination Model
The panel revealed sophisticated understanding of how different stakeholders must collaborate whilst maintaining distinct roles. Governments create enabling frameworks and digital public infrastructure. Private sector organisations bring scaling capabilities and innovation capacity whilst embracing social responsibility. Implementation organisations serve crucial convening functions, facilitating collaboration and ensuring technological capabilities translate into practical impact.
Academic institutions provide evaluation capabilities and technical expertise that build trust in AI systems. International organisations contribute coordination mechanisms and platforms for collective action.
Conclusion
This discussion demonstrated that responsible AI for sustainable development requires moving beyond theoretical frameworks toward practical, measurable collaboration. The panel’s insights revealed that whilst AI technology development may be straightforward, creating institutional mechanisms and implementation ecosystems for sustainable impact represents the real challenge.
The reframing from innovation gaps to power gaps suggests solutions must address structural inequalities in resource access rather than simply promoting technology transfer. Evidence from successful implementations like India’s financial inclusion programme demonstrates that democratised technology can transform populations when supported by appropriate frameworks and partnerships.
The Hamburg Declaration model of concrete commitments over abstract principles provides a pathway for translating intentions into measurable outcomes. The path forward requires continued collaboration across stakeholders, with each maintaining distinct roles whilst working toward shared objectives of equitable, responsible AI deployment that genuinely serves sustainable development goals.
Session transcript
From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development and adoption of AI in that there’s no question and we are very convinced that AI can have such a powerful impact on sustainable development in a positive way. It can help us really close some of those gaps that we see in persistent development challenges. However, we know that the way that it is evolving now is not equitable. And if we do not have a kind of commitment to responsible use of AI, we fear that the AI equity gap could actually get worse or inequality could get worse. And more so, it also can be harmful in some cases if we’re not applying AI responsibly.
So we want to get into then the point about… How do we actually address some of those challenges? How do we get some of the… measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well. In terms of what are our commitments to making sure that we are deploying and using AI in a responsible way and building AI ecosystems. So this does tie in with the process that has been a feature of previous AI summits as well as the Hamburg Sustainability Conference that is hosted every year in the city of Hamburg and sponsored by the government of Germany with other partners like UNDP in the city of Hamburg.
And as part of that process, we have introduced a declaration on AI for sustainable development, the Hamburg Declaration. And so we’ll talk about that a little bit in the context of this conversation after exploring some of the opening thoughts around multistakeholder partnerships in general. So, I’m going to start off and introduce our panelists and then we’ll go through a couple of rounds of questions. Be thinking about your questions as well because I think that we’ll have time to go to you as well for a Q&A and so happy to get your thoughts and get some interaction with the panel as part of this session. So, with that, I have a very distinguished panel to introduce to you and it’s a real pleasure to have them here today.
I would like to introduce, sitting on my left, Dr. Bärbel Kofler, who is the Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development. She’s a long-standing champion of human rights and development, a member of the Bundestag since 2004, and she plays a key role in shaping Germany’s global development policy. Thank you. Yes, exactly. We are also honored to have Ms. Arundhati Bhattacharya, who’s the chairperson and CEO of Salesforce South Asia, a transformative leader with over four decades of experience. She made history as the first woman chairperson of the State Bank of India, where she led one of the country’s most significant digital transformation journeys. She oversees a huge part of Salesforce business in this region and beyond, shaping how technology partnerships drive innovation, skills and inclusive growth.
We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadhwani AI Global. Nakul is a mission-driven technology leader. He works to advance AI solutions for underserved communities across the Global South. And his team has done 40 deployments of AI, reaching more than 150 million people in use cases such as healthcare, agriculture, and education. And we’re also joined by Dr. Sachin Lodha, who’s the chief scientist and head of research at Tata Consultancy Services. He’s a leader in cybersecurity and privacy research. He heads Tata Consultancy Services’ work on trustworthy AI, quantum resilience, cloud de-risking, and privacy by design. And that is all about translating cutting-edge research into real-world, award-winning innovation.
Please join me in welcoming our panelists. Okay, so let us start. Dr. Kofler, I’d like to start with you, please. You know, you represent the government perspective here on this panel. What do you think are the distinct roles of government, but also of the other players like… private sector, civil society and international organizations, in building AI ecosystems that are both innovative and inclusive?
I used this one in the meantime. Thank you for the question and thank you for having me also as a representative of the government. But I’m very happy to be on a panel with scientists, with academia and with private sector because at the end of the day, artificial intelligence and how we shape really the future with that is depending on multistakeholder engagement. There has to be an engagement from all parts of society and also including civil society because it’s a broad outreach in our societies we have to undertake. So that’s why I think it’s very important to discuss it. You were pointing out challenges and advantages of artificial intelligence. And I think there is the role.
I think there is the role of governments coming in, because, yes, there are tremendous advantages of artificial intelligence. In my sphere of politics, we discuss how we can use that to detect diseases better through AI, cancer among various topics. We discuss climate change and how we can make prediction applicable for everybody on the ground. We discuss administration and the advantages in administration AI could offer. But at the end of the day, all those hopes will come into reality if we shape that in a framework, in a legal framework, in a framework we coordinate with partners around the globe and within society, which makes all those advantages accessible for everybody, for all parts of society: for any citizen to address its government through AI conversation in its own language, for example, to smallholder farmers if they get the chance to make use of it, to doctors in remote areas who can really then detect with the new technology diseases or whatever.
So we have to close the gap. And I would say it’s not an innovation gap, it’s a power gap. Because innovative people exist around the globe. Ideas are created around the globe in all spheres of society. So I’ve been here on a panel, at a session, with this brilliant young Indian startup, yeah, how would I call it, people, I don’t know, I forgot the English word, sorry. These brilliant people who are creating startups. And that is possible all over the globe, but the environment has to be there. And if you look at how venture capital, for example, is distributed, we know that only 17% of venture capital is in those parts of the world that represent more than 90% of the people.
So we feel a power gap at the end of the day. If you look at where you have data center resources, it’s even worse. The Global North has almost all the available data center capacity, and only 0.1% is somewhere in the Global South. So there is a big gap and we have to overcome that gap. We have to close that gap. And I think that’s something governments have to do, where their role is to create an environment. If it’s coming to energy consumption, if it’s coming to an enabling environment for research, if it’s coming to skill training for everybody. I learned in one of the sessions that we should change our mindset and everybody should get a job.
And I think that’s something that we should do. But to do so, we have to invest in the mindset and in the skills of all users. We have to work on vocational training and university training to make research, education, and the needs of even small and medium-sized enterprises heard and linked together, to bring small and medium-sized enterprises into the position to participate also in that new technology.
Not only the big players. So all those things need framework and need governance. And we have to make sure that the outcome, the research, the results, the data sets are available for everybody. That’s why we are also investing in open source. It’s something we very much are aligned with the Indian ideas because if we don’t do so, the first thing, the advantages are
Okay, now it’s working. So framework and governance, important factors to have in the multi-stakeholder partnerships. Arundhati, maybe let me turn to you with a very similar question, but taking this from the private sector perspective. How do you think the responsibility should be distributed between the different stakeholders? And what does industry or private sector companies, tech companies in this case, uniquely bring to the table? And where is partnership essential?
Thank you and good morning, every one of you. So I have been very fortunate in the fact that I have worked with two organizations. The one that I worked with initially was the State Bank of India, which is the largest bank in India and also the bank that really and truly spearheaded the financial inclusion program of the current government, which is the PM Jan Dhan Yojana. And while doing that, I realized that any improvement in technology, if it is not really democratized, then it doesn’t really have an impact. If you want it to have an impact, you need to democratize technology. That’s the first thing. The second thing is currently I work at Salesforce. And Salesforce, again, it was set up with the intention that business is a platform for change.
We have what we call a one-by-one-by-one mission, which is we contribute 1% of our profits or equity, 1% of our products and 1% of our time to the community, to the non-profit community. Of course, in India, it is 2% of the profits because that’s what the law demands. And while doing that, we ensure that the non-profit sector not only has access to our products but also knows how to use them and use them to the best of their abilities. Along with that, we also do a lot of work in skilling various people. So we call people who basically are trained in the Salesforce technology as trailblazers.
And India has 3.9 million of them, the second largest after the US. And this is a community that has been literally nurtured by us. So it’s not only a question of ensuring that we are getting our products out to the community, to the non-profit sector, to the various enterprises where we obviously market our products to them and then help them implement it in their organizations. But we are doing a lot of work on the skilling front as well because we feel that is something that’s very, very essential. Without that, India will not really be able to take the advantage of all that technology brings to it. But one thing is very clear in our heads.
And that is that a populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play. We tried this financial inclusion initiative for 15 long years before we actually were successful in 2014. And the reason for that was technology. Because by that time, the unique identification authority gave us our biometric, you know, marker to enable us to get the unique identification number, and that helped us do the KYC; and second, the spread of the mobile networks enabled us to actually approach the 600,000 villages which is India. Before this we never had that connectivity. Now it was because of this that these people then got accounts. 97% of these accounts were without any balances, and a lot of people asked us, was this merely a ploy, was this merely, you know, some kind of action without any meaning? But very soon we found that those accounts started getting money, because the government could then target the subsidies they were giving to below-poverty-line people directly to the citizens without having to go through any middlemen. In fact, some politician at some point of time had said that only 13 paise for every one rupee of subsidy actually goes to the person who needs it. Here, when the money started flowing directly into the account, the whole of the rupee was going to the person who needed the subsidy.
Now, over and above that, we then had the UPI, which is the Unified Payments Interface. And what did that do? That enabled even small customers and small shopkeepers to start actually taking money in a digital form. And because they were taking money in the digital form, these accounts of theirs, which had been opened, started showing a cash flow. And the moment an account has cash flow, bankers then become interested in lending you money because they know that there is a cash flow to sort of back it up, to enable you to repay it. So here is a person who was taking a loan of, say, 2,000 rupees, paying 10% a month, suddenly becomes eligible for that 2,000 rupees at 7% a year.
Imagine the difference it can make in the life of that person. And this is all on account of technology. AI is not going to be anything different. AI is going to solve a lot of problems which otherwise, in a populous nation like ours, cannot be solved. And in order to solve it, we therefore need to understand people are eager to take it. They will take the technology because it improves their lives. But in order for them to take it, it’s up to the policy makers to make those interventions to ensure that they are not being taken for a ride. That whatever is being offered to them is being offered in an ethical manner. That we are creating the right kind of infrastructure to enable them to actually be able to access it.
That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are conducting all of this, they have a far bigger responsibility. Because adoption is not a problem in India. Adoption will happen as soon as people realize that it’s helping them in their regular lives; adoption is not going to be a problem, and there’s not going to be any pessimism on adoption. And if you go to the floor where the expo is, you’ll actually see how interested people are in finding out how it’s going to improve their lives. But it is people who make the policies, people who make the infrastructure available, people who actually make the initiatives available, who need to take the responsibilities. And companies like ours, we need to take the responsibility as to how we skill the people, enable them to understand what is good for them and what is not good for them, make them understand that there is both good and bad and they need to choose the better; they should not be taken for a ride. So all of us have a role to play, and I think we need to be aware of those roles and we need to play them well. Thank you. Thank you so much.
So actually, those two answers complement each other so nicely because, as you were saying, Dr. Kofler, there's the governance and the framework, and this is key to the Indian experience as well. Government sets down the framework, the rails, but then the private sector has such an important role in scaling, in innovating, in helping people interact with that successfully. So I'm going to turn now to Nakul. Wadwani AI Global has done a lot of work in the space of multi-stakeholder partnerships and the rollout of AI for specific use cases. From your experience, where have multi-stakeholder partnerships already demonstrated tangible impact in terms of advancing responsible AI for development?
And what are the conditions that helped ensure these collaborations translated into sustained impact rather than just remaining siloed?
Thank you for that question, Rob. Also, thank you for having me here. I hope this is working. All right, okay. Thank you, everyone, for being here. So my answer will be from my experience of deploying some of these solutions in the field: what has worked for us and what has not. What we have realized is that building technology, at least for an application organization like ours, is the easiest part of the entire spectrum. It's everything around it that becomes tedious for us to get done, which essentially means there is a need for a multi-stakeholder ecosystem that can bring in expertise to take care of everything that sits around the technology.
The ecosystem that works is the one that ensures from day one that we are able to do everything that we need to do. What will be the institutional mechanism for getting something done? How will the solution, the use case, be institutionalized in the ecosystem? How will it be embedded within the framework we are speaking about, within the department or ministry we are talking about, within the problem area and among the people who will be using it? That cannot be an afterthought; it has to come in from day one. And I'll give you a few examples of solutions that we were able to deploy at scale and what helped them work.
So we built a solution in education around oral reading fluency, essentially to assess students on their reading abilities. The technology part, like I said, was not very difficult to do. But this is not something that Wadwani AI Global could have done alone. This required collaboration with government, ownership from government, and, as Arundhati also mentioned, it required policy makers to own the problem. We as an organization came in as a facilitator, but the government of Gujarat was the one who led the entire initiative, who made sure that data was available, and who helped us find the right partners who could then annotate some of this data. The other thing: rather than trying to create this solution in silos, we also started thinking about where it would get embedded.
Tomorrow, if this has to be used, and there are tons of applications already being pitched to governments and teachers for education, how do you ensure that it actually gets adopted? One way to do that was to understand how the government currently runs its programs. So we started working with the government and their technical partner on this together. Rather than thinking of this as a solution we have created and handing over an app, we started figuring out how whatever we develop can plug into the existing system in classrooms, wherever they were, and then how to ensure that a monitoring mechanism is established so that the government is fully capable of not just following up on adoption, but also figuring out whether there is genuine impact from the technology.
All of this was only possible because we had this collaboration between the government, the technical partner the government was working with, us as the AI partner, and the partners who were working with the government at the school level to help them programmatically: to help teachers build capacity, to help teachers understand it, and also to counsel teachers, because there is a constant fear about AI replacing jobs. So there is handholding required at the field level, and as a tech organization, we could not have done that; you need that partner as well. Moving to a slightly different example in healthcare: we have done a lot of work in tuberculosis, and some of the things remain common, your government partnership, data collection through them, and your programmatic partnership. But the additional support we got was from ICMR: how can we start thinking about evaluation from day one? Health being a much more sensitive area, directly impacting lives, when some of our models give those decisions, how do you ensure that evaluation and the success criteria are not an afterthought? So you don't just work with the government; you also work with the agency who will eventually evaluate it, so that from the first day you know what parameters and results you should be optimizing for. So these are some of the examples, Robert, I can think of where technology, government, and ecosystem partners came together and delivered something.
Thank you, Nakul. And just a quick follow-up on that: how would you describe the space that Wadwani AI Global occupies? You're not government, you're not exactly private sector; you're kind of an integrator, right?
Yeah, absolutely. What we would like to call ourselves is a convener of technology, especially for social good. Our role essentially is to ensure there is an impact leveraging artificial intelligence, and we work with partners and governments on who all need to participate. Whether it requires capacity building, advisory to the government, or actual product development, we help the government with everything. Because what we have also realized is that while governments are well-intentioned, there is a need for a lot of hand-holding; they have other priorities, and while artificial intelligence has become a buzzword, they still need a lot of hand-holding to make sure that some of these solutions don't just end up living in labs and in rooms like this, but actually go out into the field.
So our role becomes even more important to make sure that everyone who needs to participate comes along, participates, and ensures that there is an actual impact on field. Thank you.
And the reason I asked that is because, to go over to Sachin: there are the roles of private-sector innovation, and there's the government role. International organizations like UNDP also bring a kind of multilateral and global perspective on some of these things. But companies like Tata Consultancy also serve kind of in the middle there. And I'd love your perspective on that same question: where are you seeing multi-stakeholder partnerships resulting in tangible results?
To contextualize: it's not just true for health, it's true for any domain you look at. And yes, data is fragmented and siloed; that's true even within an enterprise, and of course true at a bigger scale. I personally believe that bringing this data together is important to create an open data ecosystem, but I believe the problem lies somewhere else: in the sensing infrastructure that you need to put together to have high-quality, high-volume, high-velocity data. Take the example of air pollution monitoring. You could probably do a much better job if you had a sufficient number of sensors deployed across the region of interest, and they are probably going to be large in number.
You will have to think of how you make them cheaper, faster, better. And having done that, it will also depend on how you analyze, process, and derive insights; that's where AI comes in. So I think government can play a very big role in catalyzing this, and putting together a great sensing infrastructure could be thought of as part of the digital public infrastructure. On top of that, private and public entities could innovate; academics and the inventor community could innovate. So what are we doing here? The Indian government has actually been very proactive on the overall responsible AI mission.
I was part of the task force on responsible AI that was set up by the principal scientific advisor to the Indian government, Professor K. Vijay Raghavan. This happened about eight years back, and we tried to study the implications of AI and the responsible facet of AI in the Indian context then. Following that, a lot of mission projects were launched, and as part of that, there are now different AI CoEs supported by the government. We are part of the AI CoE for sustainability in collaboration with IIT Kanpur, and it's going live. In fact, some of you might have attended a session by the IIT Kanpur team yesterday on this very topic, where we are looking at a lot of interesting problems in the overall sustainability domain.
So that's just data. If you could just mention your other two points super quickly. Yeah, yeah. So I'll just mention compute, which is, again, a very important facet. There is a big gap between what is available in the Global North versus the Global South. I have just two ideas to share. One, we should think about how we could repurpose legacy hardware; you would see that a lot of high-tech companies have innovated along those lines. It's not necessary to have the latest hardware; you can do a lot of innovation around it. Two, of course, you should explore the new hardware that is becoming available. TCS, IBM, and the Andhra government have come together to create a quantum valley in India, which could open ways for that.
So I think these are some quick remarks I thought I’d make.
Great. I mean, the depth is incredible, right? You're talking about very concrete collaborations that are making this a possibility. Okay, I think we'll go straight into a quicker second round, and then we'll take some comments from you. And I want to bridge this now: we've been talking about multi-stakeholder partnerships and how different stakeholders have different roles to play. But what's clear is that we also need collective action. So I wanted to think about how we move toward overall collective action in these spaces for the responsible use of AI, something that gives it that global framing.
And as I mentioned at the beginning, one of the things we have been doing as part of the Hamburg Sustainability Conference and the Hamburg Declaration on responsible AI for sustainable development is really looking at this space of collective action. So a couple of years ago, we did the first Hamburg Declaration. About 50 organizations have endorsed it, and it contains a number of principles around the responsible use of AI. And Dr. Kofler, I'm going to turn back to you to ask you a question about that process: since the adoption of the Hamburg AI Declaration, what tangible progress have we seen, and what are the concrete actions that you think are now required to move from the high-level principles to sustained and scaled impact?
Well, thank you for mentioning concrete action, because that's actually what it is really all about. We came up with that idea at the Hamburg Sustainability Conference not because we wanted to create another paper or invite people to another conference. We really wanted to come up with commitments every stakeholder has to undertake when they sign that memorandum. My ministry also signed it, and we took on concrete, tangible, measurable commitments we have to fulfill. I would love to give you a few examples; for the sake of time, I'll be very brief.
For example, one of the commitments was training people. We committed to train 160,000 people within one year. We have fulfilled that already, and that makes me very proud, because we have already trained 190,000. That is the first step we committed to. We committed to open up 12 AI building blocks, especially for using AI for climate action; we are now at 15. And we also committed to 30 AI diagnostics, we are now at 15, and we achieved 55 data sets. So that's what it's all about. We are not signing something we don't want to do and then spending another year applauding ourselves for having signed something.
We really want to come up with concrete steps, and that's what my ministry has done until now. There are examples all over the world where we are working with partners. In Kenya, for example, it's about analyzing satellite data and making it available for farmers. In Cambodia, it's about medicine, to detect cervical cancer, for example. And we are working with the Indian government; your government is doing a fantastic project on the languages of the subcontinent, analyzing, as we were just discussing, data sets for the many languages which have to be included in the overall AI framework. We are supporting Bhashini with nine data sets; we supported the collection.
So that's what it's all about at the end of the day, and that will be my last sentence. I invite everybody in the room, and maybe you can spread the thought, to join the Hamburg Declaration, because we started it, and it's continuing. Government should be part of it, the private sector should be part, civil society should be part, academia should be part, and they should come up with concrete, measurable, tangible commitments which can then be fulfilled.
I'm glad you mentioned that, because the QR code you see on the screen gives you more information. If you would like to endorse the Hamburg Declaration, just go to the website there. We're getting short on time for audience interaction, but Sachin, really quickly: TCS endorsed the Hamburg Declaration last year. What progress have you seen since then?
Yeah, so I'll just quickly point to some activities we have been doing since then. As I said, we are part of responsible AI deliberations in India and elsewhere, and we have launched a big program around it, where Carnegie Mellon is now our academic partner; we are also talking to some Indian academics. As you rightly said, these are very hard problems, and they require significant collaboration across sectors. We are also building our own technology to evaluate, calibrate, and help AI engineers build more responsible AI; it's called the Trusty Platform. More details we can share offline. We are also very keen on the greening of AI, and that requires a lot of resource-aware AI work. So this is again very significant, and that's something we are working on.
Fantastic. And you just mentioned greening AI; that's one of the key principles of the declaration. Arundhati and Nakul, I'll turn back to you now for some quick thoughts before we go to Q&A. As we look ahead, we've talked about the Hamburg Declaration as one platform, but where do you see the strongest opportunities for new types of partnerships going forward? Things that maybe don't exist yet but should? What do you think is the next wave of this?
Well, as an organization, Salesforce has had an Office of Ethical and Humane Use of Technology since 2014. This is something I think most large corporates have to have: a self-regulatory feature. It is important because unless you self-regulate, you will have regulators coming down with a very heavy hand at some point, and that is not something you want if you want to keep innovating. So that's something all of us should look at. Having said that, the other way we can take part with stakeholders is to continue to co-innovate with many of our customers, those who are interested, and to take forward our participation in many of the forums that are there.
For instance, there is the national skilling mission undertaken by NASSCOM, the IT industry body. We also run internships in partnership with AICTE, the All India Council for Technical Education. There are very many apex bodies that are willing to give us space for innovation, for the private sector to come in and be part of the initiative. The advantage of going through them is that they have far better reach to the colleges and communities, which enables us to bring our products directly to them. So if we take all of these together, that really makes the story much better and much stronger.
Thank you very much. Nakul, same question to you.
So in the interest of time, I'll try to be quick. Two things we feel are missing in the ecosystem. One is a global repository of solutions, which we are trying to work on, and this is not just a DPG or DPI platform; it can include the private sector as well. If I am a startup in India and I have built a good tool, how do I sell it and deploy it in Ethiopia? It is very difficult right now. So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance frameworks, and a talent pool that could be leveraged, people who worked on these solutions?
So I think that's one thing that is currently missing. The other is around AI assurance. Is there an opportunity for us to create regional evaluation hubs? I see a lot of organizations struggle within a country to get solutions evaluated. Now imagine a situation where we are trying to take a solution from one country to another, but there are no clear guidelines on how the evaluation yardsticks differ between countries. So that could be another area of collaboration among the various organizations who could set this up.
Great ideas from you both on very concrete things. Okay, so we have time for a couple of questions from the audience. All right, we've got a lot of hands. We're going to take a couple of questions at the same time, and then we'll go back for one more round from the panelists for answers. So I think we'll start here, and then I saw another hand here. We'll try to do three quick ones. Okay, so one, two, three.
My question is to Mr. Robert and Nakul. What I want to ask is: in comparison to LLMs, your ChatGPTs and Geminis, what is the value being added versus traditional ML, your classical improvements over functions, et cetera?
Okay. Then there was a question over here; there are two questions there. Yes, the gentleman in blue, and then beside him.
My question is to Mr. Jain. You talked about bringing people together. There are fragmented initiatives, fragmented enabling services, but they are competing. So how do we bring them together for a shared purpose and to create a bigger impact?
Okay. Hi, sir. Hi, ma'am. I'm a current undergraduate student of economics at the University of Delhi. My question is to you, Ms. Bhattacharya. First of all, having served in both the private and the public sector, how do you see all this debate that software as a service will be dead as a result of all this, and that the context window, the context you have, will actually be the most beneficial? One might say that valuations will be doubled by all the companies in this space. So in that context, how do you create value for small enterprises?
For TCS, for example, the valuation is around $100 billion. Cursor, which is a very new company, has a valuation of $30 billion. So an employee count of 300 is creating that much value, while an employee count of 600,000 is creating that value. So in that context, will the democratization of software and tech actually happen, or is it just big tech that benefits from it? There will obviously be benefits for people here and there, but in the larger context?
Okay, so some very technical questions here. Arundhati, do you want to start with that one first about how to create value for small companies?
Yeah, so over here, I don't think we are talking merely about the value something creates on a particular day. And by the way, those valuations keep sliding up and down; nobody really knows who the winner of the race is until the race is over, and the race is far from over. So to that extent, I would ask you not to be too pressured or too influenced by claims like "SaaS is dead" and "the entire valuation will come to only these few companies." At the end of the day, if there are no users, what valuation will a company have? You need users. Now, who creates users?
Users are created by many, many processes and methods, and just having an LLM is not going to give you all of that. When a company actually works, there are workflows, there are governance rules, there are auditability requirements, there is observability; there is everything in there. Therefore, to make that kind of statement based on one particular development by one particular company is, I think, a little ahead of its time. Things will shake themselves out. Having said that, it is also true that whether it is TCS, Salesforce, State Bank of India, or any other company, the ways in which we do our work are going to change.
The ways in which people take in technology, and how that influences their lives, will change. Companies that are amenable to the change, companies that take advantage of the change, are the ones that are going to sustain and grow in value. But my sincere request to you, because you are a youngster: don't look at market cap while trying to create your company. Do the right things and the market cap will follow. It is not market cap that makes you want to work; it is your work and your satisfaction, and being able to build something that really and truly helps people improve their standard of living, improve the way they live in this world. That is what should drive you, not the market cap numbers.
And you wanted to add quickly to that?
Yeah, I just want to add to that: think of an LLM as something that is world-wise. That is a great advancement of AI. But you need something that is industry-wise, company-wise, context-wise, task-wise, right? We are far from that, and there are lots of challenges. Just before this, Arundhati and I were discussing the kind of data silos that exist even within an enterprise. So we are looking at AI, and agentic AI in particular, as something that is going to bring more connectedness within the enterprise. There is a significant amount of work to be done. But imagine this combined intelligence, where different agents coordinating and working together will probably help us get more.
So the combination being more than the sum of the parts is a very likely scenario. There are challenges to getting it done, and the nature of work may change, but there is plenty of work coming at us. LLMs have just solved part of the problem.
Nakul, there were a couple of questions for you: how do we bring people together for a shared purpose, and the question on LLMs versus traditional ML. Do you want to take those? Sorry, we've got just a few minutes before we have to close the session.
So, LLMs versus traditional ML: in our experience, in a lot of places, specifically because we work in the impact sector, a lot of the work we do is for ground-level users, and resources are a big challenge. You're talking about situations where mobile phones are very basic, internet is a challenge, and adoption is an issue. In those scenarios we have often found that traditional ML models have worked much better, and large language models have not been able to serve the purpose; in a lot of cases they are not even deployable. So there are genuine technical issues, because of infrastructure and low-resource settings, which have not allowed LLMs to reach where we would want them to reach.
Second, in such cases, small language models, or small AI, are what we have been moving towards, to serve very specific purposes rather than trying to give general intelligence to some of these users in the field. That's a very quick answer; I could have gone on. To answer your question, sir, I don't think we are trying to eliminate competition. The way we should think about this is that there are different ways organizations can collaborate: some can collaborate based on geographical synergies, some based on expertise synergies, and some based on what part of the life cycle of the entire AI deployment they can work on together.
And absolutely, there will be consortia that are created. They will still continue to compete with each other, and that's a good thing to ensure that there is also quality in the… but the idea is to foster collaboration where those synergies are possible; at least we start with that.
Thank you. Sorry, sir, we’re going to have to close the panel because we have literally 10 seconds left on the counter. I just want to say thank you very much to our panelists. We have gone from highest level government frameworks to questions about competitive landscape and how small companies can exist or can prosper, but the through line to all of this is that it doesn’t happen in silos. It cannot happen with only one stakeholder of society. This has to be multistakeholder and we have to commit to collective action. So I again refer to the QR code if you’re interested in the Hamburg Declaration that has more information on how to endorse that. Please join me in thanking our panelists for the excellent insights.
And thanks to all of you, and a good rest of the summit. Thank you.
Robert Opp
Speech speed
138 words per minute
Speech length
1956 words
Speech time
845 seconds
AI’s dual potential
Explanation
Robert Opp stresses that AI can be a powerful driver for sustainable development, but also warns that without responsible use it could exacerbate inequality. He highlights both the opportunities to close development gaps and the risks of harmful outcomes.
Evidence
“From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development and adoption of AI in that there’s no question and we are very convinced that AI can have such a powerful impact on sustainable development in a positive way.” [2]. “And more so, it also can be harmful in some cases if we’re not applying AI responsibly.” [3]. “It can help us really close some of those gaps that we see in persistent development challenges.” [4].
Major discussion point
Responsible AI Governance and Equitable Development
Topics
Artificial intelligence | Information and communication technologies for development | Social and economic development
Need for multistakeholder coordinated action
Explanation
Opp points out that building innovative yet inclusive AI ecosystems requires clear roles for governments, the private sector, civil society and international organisations, and that collective multistakeholder commitment is essential.
Evidence
“What do you think are the difficulties with the distinct roles of government, but also some of the other players like… private sector, civil society and international organizations, the roles that they play in building AI ecosystems that are both innovative and inclusive.” [6]. “This has to be multistakeholder and we have to commit to collective action.” [24].
Major discussion point
Multi‑Stakeholder Partnerships as Drivers of Impact
Topics
Artificial intelligence | Internet governance | The enabling environment for digital development
Progress from the Hamburg Declaration
Explanation
Opp asks for concrete outcomes after the first Hamburg Declaration and notes that the discussion is moving from high‑level frameworks to measurable impact on the ground.
Evidence
“And I’m going to turn back to you to ask you a question especially about that process, which is since the adoption of the Hamburg AI Declaration, what tangible progress have we seen and what are the concrete actions that you think are now required to move from the higher level principles into sustained and scaled impact?” [61]. “So a couple of years ago, we did the first Hamburg Declaration.” [62].
Major discussion point
Progress and Actions Stemming from the Hamburg Declaration
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Question on LLM vs traditional ML value
Explanation
An audience member asks whether large language models add value compared with traditional machine‑learning approaches, highlighting uncertainty about the best AI tools for development contexts.
Evidence
“So I want to ask is, in comparison to LLMs, your chat GPTs, Geminis, what is the value being added in comparison to LLMs versus traditional ML?” [101].
Major discussion point
Key Challenges and Gaps in the Global AI Ecosystem
Topics
Artificial intelligence
Bärbel Kofler
Speech speed
160 words per minute
Speech length
1225 words
Speech time
458 seconds
Closing the power gap through governance and skills
Explanation
Kofler argues that the main barrier is a “power gap” rather than an innovation gap, calling for legal frameworks, governance, open‑source investment and extensive skills development to level the playing field.
Evidence
“And I would say it’s not an innovation gap, it’s a power gap.” [19]. “So all those things need framework and need governance.” [20]. “But to do so, we have to invest in the mindset and in the skills of all users.” [23]. “That’s why we are also investing in open source.” [26]. “We have to work on vocational training and university training to make research, education, and the needs of even small and medium‑sized enterprises heard and linked together to bring small and medium‑sized enterprises into the position to participate also from that new technology.” [40].
Major discussion point
Responsible AI Governance and Equitable Development
Topics
The enabling environment for digital development | Artificial intelligence | Capacity development
Hamburg Declaration concrete commitments
Explanation
Kofler details the specific deliverables pledged under the Hamburg Declaration, including AI building blocks, diagnostics, datasets and large‑scale training programmes.
Evidence
“We were committing to open up AI building blocks, 12 AI building blocks, especially for using AI to climate action.” [49]. “For example, one of the commitments was training people.” [63]. “And we were also committing 30 AI diagnostics.” [89]. “We are now by a number of 15, and we achieved 55 data sets.” [90].
Major discussion point
Progress and Actions Stemming from the Hamburg Declaration
Topics
Artificial intelligence | Capacity development
Power gap due to VC and data‑center concentration
Explanation
Kofler highlights the stark imbalance in venture‑capital flows and data‑center capacity between the Global North and South, describing it as a “power gap” that limits AI benefits for many countries.
Evidence
“Global North, it’s almost all the data centers available capacity and only 0.1% is somewhere in the Global South.” [94]. “And if you look how venture capital, for example, is distributed, and we know that, I think, it’s 17% of venture capital only is in those parts of the world who are presenting more than 90% of the people.” [96]. “So we feel a power gap at the end of the day.” [98].
Major discussion point
Key Challenges and Gaps in the Global AI Ecosystem
Topics
Artificial intelligence | The enabling environment for digital development | Financial mechanisms
Arundhati Bhattacharya
Speech speed
158 words per minute
Speech length
1779 words
Speech time
673 seconds
Democratizing technology and privacy safeguards
Explanation
Bhattacharya stresses that impact requires democratized AI tools and strong privacy protections, positioning the private sector as a catalyst for inclusive adoption.
Evidence
“If you want it to have an impact, you need to democratize technology.” [21]. “That the right to privacy of their data is properly maintained.” [35]. “1% of our products and 1% of our time to the community, to the non‑profit community.” [71]. “And Salesforce, again, it was set up with the intention that business is a platform for change.” [73].
Major discussion point
Responsible AI Governance and Equitable Development
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The digital economy
Salesforce 1% profit/time model and skilling
Explanation
She outlines Salesforce’s commitment to allocate 1% of its product, equity and employee time to nonprofit causes and to train a large community of “Trailblazers”, demonstrating private‑sector contribution to capacity building.
Evidence
“1% of our products and 1% of our time to the community, to the non‑profit community.” [71]. “So we call people who basically are trained in the Salesforce technology as trailblazers.” [69]. “We have what we call a one‑by‑one‑by‑one mission, which is we contribute 1% of our profits or equity.” [80].
Major discussion point
Multi‑Stakeholder Partnerships as Drivers of Impact
Topics
The digital economy | Capacity development | Human rights and the ethical dimensions of the information society
Self‑regulatory bodies and co‑innovation
Explanation
Bhattacharya calls for large corporations to develop internal ethical offices and to deepen co‑innovation with customers and academic partners, reinforcing responsible AI deployment.
Evidence
“So this is something that I think most large corporates have to have a self‑regulatory feature.” [129]. “And having said that, the other ways in which we can actually take part with the stakeholders is to continue to co‑innovate with many of our customers, those who are interested, to take forward our participation in many of these forums that are there.” [134].
Major discussion point
Future Opportunities and Emerging Needs
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Nakul Jain
Speech speed
170 words per minute
Speech length
1535 words
Speech time
539 seconds
Multi‑stakeholder ecosystem from day one
Explanation
Jain emphasizes that a functional AI ecosystem must be built from the outset, embedding solutions within ministries, data pipelines and monitoring mechanisms, and that a dedicated multi‑stakeholder framework is essential.
Evidence
“The ecosystem that works is the one that ensures from day one that we are able to do everything that we need to do.” [45]. “How will the solution, the use case be institutionalized in the ecosystem, how will it be embedded within the framework we are speaking about, within the department ministry we are talking about, within the problem area and the people who will be using it.” [42]. “It’s everything around it that becomes, you know, tedious for us to sort of get done, which essentially means that there is a need for a multi‑stakeholder ecosystem that can bring in their expertise, who will take care of everything that sits around the technology.” [44].
Major discussion point
Responsible AI Governance and Equitable Development
Topics
Artificial intelligence | Internet governance | Capacity development
LLMs vs traditional ML suitability
Explanation
Jain shares field experience that large language models often fail in low‑resource settings, whereas traditional machine‑learning models are more deployable and effective for impact‑oriented projects.
Evidence
“so LLM versus traditional ML in our experience what we have realized is in a lot of places specifically because we work in impact sector a lot of work we do is for ground level users resource is a big challenge … we have often learned that traditional ML models have worked much better to serve the purpose and large language models have not been able to serve the purpose … not even deployable right so there are genuine technical issues because we have of infrastructure because of low resource settings, which have not allowed LLMs to sort of reach where we would want them to reach, right?” [99].
Major discussion point
Key Challenges and Gaps in the Global AI Ecosystem
Topics
Artificial intelligence | Capacity development
Global AI‑solution marketplace and shared playbooks
Explanation
Jain proposes creating a cross‑border marketplace of AI solutions and shared playbooks to accelerate the diffusion of proven tools across regions.
Evidence
“So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared covers?” [126].
Major discussion point
Future Opportunities and Emerging Needs
Topics
Artificial intelligence | Information and communication technologies for development
Regional AI‑assurance hubs
Explanation
He suggests establishing regional hubs that can evaluate AI solutions consistently, addressing the current lack of evaluation capacity within many countries.
Evidence
“So is there an opportunity for us to create regional evaluation hubs, which are, you know, for a given particular region, because I see a lot of organizations struggle within country to get solutions evaluated.” [106].
Major discussion point
Future Opportunities and Emerging Needs
Topics
Artificial intelligence | Data governance
Wadhwani AI Global as convener
Explanation
Jain describes Wadhwani AI Global’s role as a convener that partners with governments and technical partners to ensure AI solutions move from labs to real‑world impact, especially for underserved communities.
Evidence
“Yeah no absolutely, I think what we would like to call ourselves is a convener of technology, especially for social good, so our role essentially is to ensure there is an impact, leveraging artificial intelligence who all need to participate is something that we work with partners along we work with governments, whether it requires capacity building whether it requires advisory to the government, whether it requires actual product development we help government with everything because what we have also realized is while government is well intent, there is a need for a lot of hand holding…” [57]. “All of this was only possible because we had this collaboration between the government with the technical partner who government was working with us being the AI partner.” [56].
Major discussion point
Multi‑Stakeholder Partnerships as Drivers of Impact
Topics
Artificial intelligence | Social and economic development
Speaker 1
Speech speed
139 words per minute
Speech length
853 words
Speech time
367 seconds
Greening AI and resource‑aware AI
Explanation
Speaker 1 emphasizes the need for “green” AI, calling for resource‑aware approaches and the development of tools that help engineers build more responsible systems.
Evidence
“We are also very keen on greening of AI, and that requires… a lot of resource‑aware AI.” [8]. “We are also building our own technology to be able to evaluate, calibrate, and sort of help AI engineers build more responsible AI.” [10].
Major discussion point
Responsible AI Governance and Equitable Development
Topics
Environmental impacts | Artificial intelligence
Trusty Platform and academic partnership
Explanation
He announces a partnership with Carnegie Mellon to launch the Trusty Platform, a corporate initiative for trustworthy AI evaluation and responsible AI development.
Evidence
“And we have launched a big program around it, where Carnegie Mellon is now our academic partner.” [85]. “It’s called Trusty Platform.” [86].
Major discussion point
Multi‑Stakeholder Partnerships as Drivers of Impact
Topics
Artificial intelligence | Capacity development
Repurposing legacy hardware and new compute (quantum valley)
Explanation
Speaker 1 calls for repurposing existing hardware and exploring emerging compute options such as a “quantum valley” collaboration between TCS, IBM and a state government to bridge the North‑South compute divide.
Evidence
“One is we should think of how we could repurpose the legacy hardware.” [112]. “Two, of course, you should explore the new hardware that is becoming available.” [113]. “So TCS, IBM, and Andhra government have come together to create a quantum valley in India, which could open ways for that.” [114].
Major discussion point
Key Challenges and Gaps in the Global AI Ecosystem
Topics
Artificial intelligence | Environmental impacts
Audience Member 1
Speech speed
128 words per minute
Speech length
43 words
Speech time
20 seconds
Question on added value of LLMs vs traditional ML
Explanation
The audience member asks for clarification on whether large language models provide additional value compared with traditional machine‑learning models for development use‑cases.
Evidence
“So I want to ask is, in comparison to LLMs, your ChatGPTs, Geminis, what is the value being added in comparison to LLMs versus traditional ML?” [101].
Major discussion point
Key Challenges and Gaps in the Global AI Ecosystem
Topics
Artificial intelligence
Audience Member 2
Speech speed
139 words per minute
Speech length
43 words
Speech time
18 seconds
Fragmented initiatives and need for shared purpose
Explanation
The participant points out that many AI initiatives are fragmented and asks how they can be coordinated around a common purpose.
Evidence
“So how do we bring them together for a shared purpose?” [136].
Major discussion point
Future Opportunities and Emerging Needs
Topics
Internet governance | Artificial intelligence
Audience Member 3
Speech speed
183 words per minute
Speech length
211 words
Speech time
69 seconds
SaaS valuation trends and democratization of software
Explanation
The audience member raises concerns about whether democratization of software will mainly benefit large tech firms, referencing SaaS valuation trends and the impact on small enterprises.
Evidence
“So an employee count of 300 is creating so much value, and that’s at 600,000 employee count is creating that value.” [78]. “So in that context, with the democratization of software and tech actually happen or is it just the big tech that benefit out of it?” [141]. “We have gone from highest level government frameworks to questions about competitive landscape and how small companies can exist or can prosper, but the through line to all of this is that it doesn’t happen in silos.” [147].
Major discussion point
Future Opportunities and Emerging Needs
Topics
The digital economy | Artificial intelligence
Agreements
Agreement points
Multi-stakeholder collaboration is essential for AI development and deployment
Speakers
– Robert Opp
– Bärbel Kofler
– Arundhati Bhattacharya
– Nakul Jain
Arguments
AI can have powerful positive impact on sustainable development but current evolution is not equitable and could worsen inequality without responsible use
Artificial intelligence’s future depends on multi-stakeholder engagement including government, private sector, civil society, and academia
Private sector brings scaling, innovation, and user success through democratizing technology and extensive skilling programs
Building technology is the easiest part; everything around it requires multi-stakeholder ecosystem with expertise in institutional mechanisms and embedding solutions within existing frameworks
Summary
All speakers agree that successful AI development and deployment requires collaboration across government, private sector, civil society, and academia, with each stakeholder bringing unique capabilities and expertise
Topics
Artificial intelligence | The enabling environment for digital development | Internet governance
Technology democratization is crucial for equitable impact and development
Speakers
– Bärbel Kofler
– Arundhati Bhattacharya
Arguments
Innovation exists globally but there’s a power gap in resource distribution – only 17% of venture capital goes to Global South representing 90% of people, and only 0.1% of data center capacity is in Global South
Technology democratization is essential for impact, as demonstrated by India’s financial inclusion success through biometric identification and mobile networks
Summary
Both speakers emphasize that technology must be made accessible and available to all, not concentrated in the hands of a few, to achieve meaningful development impact
Topics
Closing all digital divides | Information and communication technologies for development | The enabling environment for digital development
Government role is creating enabling frameworks and infrastructure
Speakers
– Bärbel Kofler
– Speaker 1
Arguments
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Infrastructure challenges require sensing infrastructure as part of digital public infrastructure, with government catalyzing deployment and private/public entities innovating on top
Summary
Both speakers agree that governments should focus on creating the foundational infrastructure, legal frameworks, and enabling environments that allow other stakeholders to innovate and deploy solutions
Topics
The enabling environment for digital development | Information and communication technologies for development | Artificial intelligence
Concrete, measurable commitments are more important than high-level principles
Speakers
– Bärbel Kofler
– Nakul Jain
Arguments
Hamburg Declaration focuses on concrete, measurable commitments rather than just principles – German ministry exceeded training targets (190,000 vs 160,000) and AI building blocks (15 vs 12)
Successful AI deployment requires government ownership, data availability, embedding solutions in existing systems, and monitoring mechanisms from day one
Summary
Both speakers emphasize the importance of tangible, measurable outcomes and practical implementation over abstract principles or declarations
Topics
Artificial intelligence | Social and economic development | Monitoring and measurement
Traditional ML often more practical than LLMs in resource-constrained environments
Speakers
– Nakul Jain
– Speaker 1
Arguments
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
Summary
Both speakers agree that while LLMs represent advancement, traditional ML approaches are often more suitable for practical deployment in resource-constrained settings and specific use cases
Topics
Artificial intelligence | Closing all digital divides | Information and communication technologies for development
Similar viewpoints
Both speakers emphasize that technology is essential for development at scale, particularly in populous nations, and requires careful implementation support to move from concept to practical impact
Speakers
– Arundhati Bhattacharya
– Nakul Jain
Arguments
Populous nations like India cannot achieve the standard of living they deserve without technology integration
Technology organizations serve as conveners and facilitators, requiring extensive hand-holding with governments to ensure solutions move from labs to field implementation
Topics
Information and communication technologies for development | Social and economic development | Capacity development
All three speakers agree on the importance of responsible AI development, whether through government frameworks, corporate self-regulation, or academic partnerships, emphasizing the need for ethical considerations in AI deployment
Speakers
– Bärbel Kofler
– Arundhati Bhattacharya
– Speaker 1
Arguments
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
TCS has launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for evaluating and calibrating AI systems
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development
Both speakers see the need for better coordination and knowledge sharing mechanisms, whether through global platforms or national initiatives, to scale AI solutions and build capacity
Speakers
– Nakul Jain
– Arundhati Bhattacharya
Arguments
Need for global repository and marketplace of AI solutions with shared playbooks, governance frameworks, and talent pools to enable cross-border deployment
Continued co-innovation with customers and participation in national initiatives like skilling missions and technical education partnerships
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Unexpected consensus
Traditional ML superiority over LLMs in practical deployment
Speakers
– Nakul Jain
– Speaker 1
Arguments
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
Explanation
Despite the current hype around Large Language Models, both technical experts agree that traditional machine learning approaches are often more practical and effective for real-world deployment, especially in resource-constrained environments. This consensus challenges the prevailing narrative about LLM superiority
Topics
Artificial intelligence | Closing all digital divides | Information and communication technologies for development
Self-regulation as preferable to external regulation
Speakers
– Arundhati Bhattacharya
– Speaker 1
Arguments
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
TCS has launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for evaluating and calibrating AI systems
Explanation
Both private sector representatives agree that proactive self-regulation is preferable to external regulation, which is somewhat unexpected given that self-regulation is often viewed skeptically. Their consensus suggests industry recognition of the need for responsible practices
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development
Overall assessment
Summary
The speakers demonstrated remarkable consensus across multiple key areas: the necessity of multi-stakeholder collaboration, the importance of technology democratization, the government’s role in creating enabling frameworks, the preference for concrete commitments over abstract principles, and the practical advantages of traditional ML in many deployment scenarios. There was also strong agreement on the need for responsible AI development and the value of self-regulation in the private sector.
Consensus level
High level of consensus with significant implications for AI governance and development. The agreement across government, private sector, and implementation organizations suggests a mature understanding of AI deployment challenges and a shared vision for responsible, inclusive AI development. This consensus provides a strong foundation for collaborative action and suggests that multi-stakeholder initiatives like the Hamburg Declaration have potential for meaningful impact.
Differences
Different viewpoints
Technology approach for resource-constrained environments
Speakers
– Nakul Jain
– Speaker 1
Arguments
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
Summary
Jain emphasizes that traditional ML models are more practical and deployable in low-resource settings with basic infrastructure, while Speaker 1 focuses on LLMs as foundational but requiring additional layers of specialized solutions. They approach the technical solution from different practical perspectives.
Topics
Artificial intelligence | Closing all digital divides | Information and communication technologies for development
Unexpected differences
Self-regulation versus government frameworks
Speakers
– Arundhati Bhattacharya
– Bärbel Kofler
Arguments
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Explanation
This disagreement is unexpected because both speakers generally support multi-stakeholder approaches, yet they differ on the primary mechanism for ensuring responsible AI – Bhattacharya advocates for corporate self-regulation to maintain innovation freedom, while Kofler emphasizes the need for government frameworks and governance structures
Topics
Human rights and the ethical dimensions of the information society | The enabling environment for digital development | Artificial intelligence
Overall assessment
Summary
The discussion showed remarkably high consensus on goals (responsible AI, equity, multi-stakeholder partnerships) with disagreements primarily on implementation approaches and emphasis rather than fundamental objectives
Disagreement level
Low to moderate disagreement level with high strategic alignment. The disagreements are constructive and complementary rather than conflicting, suggesting different stakeholders can pursue parallel approaches toward shared goals. This indicates a mature, collaborative ecosystem where different actors can contribute their unique strengths without undermining overall objectives.
Partial agreements
Both agree on the critical importance of skills training and capacity building, but disagree on primary responsibility – Kofler emphasizes government’s role in creating enabling environments and frameworks, while Bhattacharya focuses on private sector’s role in democratization and direct skilling implementation
Speakers
– Bärbel Kofler
– Arundhati Bhattacharya
Arguments
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Private sector brings scaling, innovation, and user success through democratizing technology and extensive skilling programs
Topics
Capacity development | The enabling environment for digital development | Information and communication technologies for development
Both identify global inequities in AI access and resources, but propose different solutions – Kofler focuses on addressing fundamental resource distribution gaps through government frameworks, while Jain proposes creating new marketplace infrastructure to facilitate cross-border solution sharing
Speakers
– Bärbel Kofler
– Nakul Jain
Arguments
Innovation exists globally but there’s a power gap in resource distribution – only 17% of venture capital goes to Global South representing 90% of people, and only 0.1% of data center capacity is in Global South
Need for global repository and marketplace of AI solutions with shared playbooks, governance frameworks, and talent pools to enable cross-border deployment
Topics
Closing all digital divides | The enabling environment for digital development | Artificial intelligence
Takeaways
Key takeaways
AI has tremendous potential for sustainable development but current evolution is inequitable, with massive resource gaps between Global North and South (only 17% of venture capital and 0.1% of data center capacity in Global South)
Multi-stakeholder partnerships are essential – no single actor can successfully deploy AI for development alone; government provides framework and governance, private sector scales and innovates, tech organizations serve as conveners
Technology democratization is crucial for impact, as demonstrated by India’s financial inclusion success that reached 600,000 villages through biometric identification and mobile networks
Building AI technology is the easiest part; the challenge lies in institutional mechanisms, embedding solutions in existing systems, capacity building, and ensuring sustained adoption
Concrete commitments with measurable outcomes are more valuable than high-level principles – Hamburg Declaration signatories must deliver specific, quantifiable results
Traditional ML often outperforms LLMs in resource-constrained environments typical of Global South applications
Self-regulation by large corporations through ethical technology offices is preferable to heavy-handed external regulation
Resolutions and action items
Invitation extended for organizations to endorse the Hamburg Declaration with concrete, measurable commitments rather than just signing principles
German ministry committed to and exceeded specific targets: trained 190,000 people (vs 160,000 target), delivered 15 AI building blocks (vs 12 target)
Ongoing concrete projects identified: satellite data analysis for Kenyan farmers, cervical cancer detection in Cambodia, supporting Indian language datasets for Bhashini
TCS launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for AI evaluation
Continued participation in national initiatives like skilling missions and technical education partnerships
Unresolved issues
How to effectively bridge the massive resource gap between Global North and South in AI infrastructure and funding
Lack of global repository and marketplace for AI solutions to enable cross-border deployment from startups in one country to implementation in another
Absence of regional AI evaluation hubs with clear guidelines for solution assessment across different countries
How to bring together competing fragmented initiatives for shared purpose while maintaining healthy competition
Whether democratization of AI technology will truly benefit small enterprises or primarily advantage big tech companies
How to address data fragmentation and silos that exist even within individual enterprises
Scaling challenges for moving AI solutions from labs to field implementation across different contexts
Suggested compromises
Organizations can collaborate based on different synergies (geographical, expertise-based, or lifecycle stage-based) while still maintaining competitive relationships
Focus on repurposing legacy hardware and exploring new hardware innovations to address compute gaps rather than requiring latest technology
Use small language models and task-specific AI instead of general intelligence LLMs for resource-constrained environments
Embed AI solutions within existing government systems and frameworks rather than creating standalone applications
Combine self-regulation by private sector with government framework-setting to balance innovation with responsibility
Create consortia that collaborate in some areas while competing in others to ensure both cooperation and quality through competition
Thought provoking comments
I would say it’s not an innovation gap, it’s a power gap. Because innovative people are existing around the globe. Ideas are created around the globe in all spheres of society… If you look how venture capital, for example, is distributed, and we know that, I think, it’s 17% of venture capital only is in those parts of the world who are presenting more than 90% of the people.
Speaker
Bärbel Kofler
Reason
This comment reframes the entire AI equity discussion by challenging the common assumption that developing countries lack innovation capacity. Instead, it identifies the real issue as unequal access to resources and power structures. The specific statistics about venture capital distribution provide concrete evidence of systemic inequality.
Impact
This comment shifted the conversation from technical solutions to structural inequalities, setting the tone for subsequent discussions about infrastructure, governance frameworks, and the need for deliberate policy interventions to level the playing field.
Any improvement in technology, if it is not really democratized, then it doesn’t really have an impact. If you want it to have an impact, you need to democratize technology… A populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play.
Speaker
Arundhati Bhattacharya
Reason
This insight connects technology democratization directly to societal outcomes and quality of life, moving beyond abstract discussions to concrete human impact. It establishes democratization as a prerequisite for meaningful technological progress rather than an optional add-on.
Impact
This comment provided the philosophical foundation for the subsequent detailed case study of India’s financial inclusion success, demonstrating how democratized technology can transform entire populations’ economic prospects.
Building technology, at least for an application organization like ours, is the most easiest part of the entire spectrum. It’s everything around it that becomes, you know, tedious for us to sort of get done, which essentially means that there is a need for a multi-stakeholder ecosystem.
Speaker
Nakul Jain
Reason
This comment challenges the tech-centric view that dominates many AI discussions by revealing that technology development is actually the simplest component. It highlights the critical importance of implementation ecosystems, partnerships, and institutional mechanisms.
Impact
This insight fundamentally reoriented the discussion toward the practical challenges of deployment and scaling, leading to detailed examples of successful multi-stakeholder collaborations and the specific roles each partner must play.
The problem lies in the sensing infrastructure that you need to put together to be able to have high quality high volume high velocity data… government can play a very big role in sort of catalyzing this and enabling sort of, you know, putting together a great sensing infrastructure could be thought of, you know, part of the digital public infrastructure.
Speaker
Sachin Loda
Reason
This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection to the systematic sensing infrastructure required for effective AI systems. It positions this as a public good that governments should provide.
Impact
This technical insight elevated the discussion to consider AI infrastructure as a fundamental public utility, similar to roads or electricity, which influenced subsequent conversations about government roles and public-private partnerships.
We want to come up really with concrete steps… We are not signing something we don’t want to do and then we have another year of time and applauding ourselves for signing something. We want to come up really with concrete steps and that’s what my ministry did until now.
Speaker
Bärbel Kofler
Reason
This comment addresses a critical weakness in international cooperation – the tendency toward performative commitments without accountability. It demonstrates how to move from aspirational declarations to measurable outcomes with specific targets and timelines.
Impact
This emphasis on concrete accountability transformed the discussion of the Hamburg Declaration from a theoretical framework to a practical tool for driving real change, encouraging other participants to think about measurable commitments rather than abstract principles.
If I am a startup in India, I have built a good tool, how do I sell it, deploy it in Ethiopia? It becomes very difficult right now. So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance ecosystem frameworks has a talent pool that could be leveraged?
Speaker
Nakul Jain
Reason
This comment identifies a critical gap in the global AI ecosystem – the lack of mechanisms for cross-border knowledge and solution transfer. It proposes a concrete solution that could dramatically accelerate AI adoption in developing countries by leveraging existing innovations.
Impact
This insight opened up a new dimension of the discussion about creating global infrastructure for AI solution sharing, moving beyond individual country initiatives to think about systematic knowledge transfer mechanisms across the Global South.
Overall assessment
These key comments fundamentally shaped the discussion by challenging conventional assumptions and introducing new frameworks for understanding AI development challenges. Kofler’s ‘power gap’ insight reframed the entire equity discussion, while Bhattacharya’s democratization principle and Jain’s observation about implementation challenges shifted focus from technology creation to deployment ecosystems. Loda’s infrastructure perspective elevated the conversation to consider AI infrastructure as a public utility, and the emphasis on concrete accountability transformed abstract principles into actionable commitments. Together, these insights created a comprehensive framework that moved the discussion from theoretical concerns to practical solutions, emphasizing the critical importance of multi-stakeholder collaboration, systematic infrastructure development, and measurable outcomes in achieving responsible AI for sustainable development.
Follow-up questions
How do we actually address some of those challenges? How do we get some of the measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well?
Speaker
Robert Opp
Explanation
This foundational question about implementing responsible AI governance frameworks was posed at the beginning of the session but requires ongoing exploration as the field evolves
How can we create a global repository of solutions that includes both public and private sector innovations, with shared playbooks and governance frameworks?
Speaker
Nakul Jain
Explanation
This addresses the challenge of scaling AI solutions across different countries and contexts, which currently lacks a systematic approach
How can we establish regional evaluation hubs for AI solutions, given that evaluation guidelines differ significantly between countries?
Speaker
Nakul Jain
Explanation
This is critical for ensuring AI solutions can be safely and effectively deployed across different regulatory and cultural contexts
How do we repurpose legacy hardware for AI applications in resource-constrained environments?
Speaker
Sachin Loda
Explanation
This technical challenge is important for bridging the compute gap between Global North and Global South
How can we build comprehensive sensing infrastructure as part of digital public infrastructure to enable better AI applications?
Speaker
Sachin Loda
Explanation
This infrastructure question is fundamental to generating the high-quality data needed for effective AI solutions in areas like environmental monitoring
What is the comparative value being added by LLMs versus traditional ML approaches in development contexts?
Speaker
Audience Member 1
Explanation
This technical question is important for understanding which AI approaches are most effective for development challenges
How do we bring together fragmented and competing initiatives for shared purpose to create bigger impact?
Speaker
Audience Member 2
Explanation
This addresses the coordination challenge in the AI for development ecosystem where multiple organizations work on similar problems
Will democratization of software and technology actually happen, or will only big tech companies benefit from AI advancement?
Speaker
Audience Member 3
Explanation
This question about market concentration and accessibility is crucial for understanding whether AI will truly serve inclusive development goals
How can small and medium-sized enterprises be brought into position to participate and benefit from new AI technology, not just big players?
Speaker
Bärbel Kofler
Explanation
This addresses the equity gap in AI adoption and the need for inclusive economic participation in the AI economy
How do we ensure that research results and datasets are available for everybody through open source approaches?
Speaker
Bärbel Kofler
Explanation
This relates to the fundamental question of knowledge sharing and preventing AI benefits from being concentrated among a few organizations
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.