Ensuring Safe AI: Monitoring Agents to Bridge the Global Assurance Gap


Session at a glance

Summary, keypoints, and speakers overview

Summary

The closing session of the India AI Impact Summit focused on building a trustworthy, responsible AI ecosystem through “AI assurance,” a framework for measuring and communicating the safety and reliability of AI systems [1-3][5]. Organisers highlighted the recent Delhi Declaration as a catalyst for accountability and policy work, and introduced two new papers, one on strengthening the AI assurance ecosystem and the other on closing the global assurance divide, to seed further development and implementation [2][7-10][14-23][25-33]. Participants were reminded that robust national AI strategies must pair industrial ambitions with comprehensive assurance measures, especially as the declaration calls for clearer usage-data sharing and multilingual evaluation standards [15-18][24-30][31-34].


Singapore’s Minister Josephine Teo noted that agentic AI has moved from obscurity to widespread deployment, offering productivity gains but also introducing autonomy-related risks that can amplify harm when systems malfunction or oversight erodes [46-55][58-66]. She argued for a shift from reactive regulation to proactive governance, citing Singapore’s sandbox partnership with Google and a living model-governance framework that emphasizes testing, standards, and independent third-party assurance [69-76][97-109][110-112]. Teo concluded that building confidence in agentic systems requires continuous collaboration with industry and global partners to refine these safeguards [111-112].


Moderator Madhu Srikumar defined AI assurance as the independent verification of trustworthiness, likening it to a safety inspection and stressing its relevance to the Delhi Declaration’s commitment to multilingual and contextual evaluation [124-131][132-138]. Frederic Werner highlighted the difficulty of translating high-potential AI use cases across regions, emphasizing that standards must embed human rights, inclusivity, and local relevance to avoid a “global south” gap [145-166][167-176]. Vukosi Marivate added that limited data collection, annotation, and policy capacity in many Global South countries demand locally driven evaluation frameworks and capacity-building rather than top-down mandates [231-240][242-247].


Owen Larter described agentic systems as autonomous tools that will permeate everyday tasks and argued that interoperable technical protocols, such as Google’s agents-to-agents and universal commerce protocols, are essential for safe interaction and scalability [186-204][205-216]. He warned that connecting autonomous agents to sensitive accounts raises security concerns, and noted ongoing work with VirusTotal and internal safety frameworks to scan for malware and assess risks before deployment [222-223]. Larter also stressed the need for affordable, low-compute models to enable widespread testing and third-party assurance, especially for resource-constrained regions [351-354][355-357].


Stephanie Ifayemi outlined PAI’s two papers, which identify six challenge areas, including language diversity, risk-profile differences, and infrastructure bottlenecks, and propose incentives, professionalisation, and a tiered assurance approach to bridge the global divide [255-267][268-276][277-285][286-295][296-301]. She emphasized that north-south collaboration on standards for agents, such as emerging work by NIST’s Center for AI Standards and Innovation (CAISI), is crucial to ensure that Global South perspectives are not excluded from future attribution and testing frameworks [292-300].


Closing remarks from Natasha Crampton and Chris Meserole reinforced that AI assurance must become an operational, continuous-monitoring discipline, shared across borders and sectors, and called on all stakeholders to contribute to building the necessary infrastructure and standards to realise trustworthy agentic AI worldwide [411-420][425-433][444-452][456-464][465].


Keypoints


Major discussion points


Building an AI-assurance ecosystem for agentic systems – The panel repeatedly stressed that trustworthy agentic AI requires a three-part foundation: rigorous testing of technical robustness, the creation of clear standards, and independent third-party assurance providers. Josephine Teo outlined these pillars and argued they are essential for “building confidence” and for differentiating safe products in the market [96-110].


Closing the global assurance divide – Participants highlighted that current assurance practices are uneven, with major gaps in multilingual evaluation, infrastructure, and risk-profile understanding for the Global South. Stephanie Ifayemi listed six challenge areas (language diversity, risk profiles, infrastructure, etc.) and noted the need for “north-south collaboration” to avoid exclusion [260-269][276-283][292-300]; Vukosi Marivate emphasized limited data-annotation capacity and policy expertise in many low-resource regions [231-240]; Natasha Crampton warned that without deliberate action the shift to agents will “make that divide even worse” [415-418].


Technical standards and interoperability for agents – Industry representatives described concrete work on protocols that let agents communicate with each other and with services (e.g., “agents-to-agents protocol”, “universal commerce protocol”) and stressed that standards are a prerequisite for safe deployment at scale. Owen Larter explained Google DeepMind’s efforts to define such standards and to embed security checks (e.g., malware scanning of downloaded skills) [198-205][222-224]; he also called for affordable, low-compute models to broaden access [351-354].
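
The scan-before-install gate described in this keypoint can be sketched generically. The sketch below is a minimal illustration, assuming a local hash denylist and hypothetical function names; a production system would instead call out to a scanning service such as VirusTotal, whose real API is not shown here.

```python
import hashlib

# Hypothetical denylist for illustration; a real pipeline would query a
# scanning service (e.g. VirusTotal) rather than a hard-coded hash set.
KNOWN_BAD_SHA256 = {
    # SHA-256 of an empty payload, standing in for a known-bad artifact.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def looks_malicious(payload: bytes) -> bool:
    """Stand-in scan: flag payloads whose hash is on the denylist."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

def install_skill(name: str, payload: bytes, registry: dict) -> None:
    """Gate installation on the scan, failing closed if it trips."""
    if looks_malicious(payload):
        raise RuntimeError(f"skill {name!r} failed the scan; not installed")
    registry[name] = payload

skills: dict = {}
install_skill("calendar-helper", b"skill bytes", skills)
print(sorted(skills))  # ['calendar-helper']
```

The design point is simply that the scan sits in front of installation and fails closed, so a flagged skill never reaches the agent.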


Collaborative, shared-responsibility model – The discussion repeatedly called for multilateral, public-private partnerships and a professionalised assurance community. Rebecca Finlay framed the Delhi Declaration as a catalyst for “accountability work” [2]; Josephine Teo described Singapore’s “sandbox” with Google and a “live” governance framework [68-76]; Frederic Werner highlighted AI for Good’s inclusive network of UN agencies and NGOs [307-315]; Chris Meserole summed up the need for “shared responsibility” and urged everyone to get involved [452-456].


Overall purpose / goal


The session was convened to launch and contextualise two new Partnership on AI papers on AI assurance, to align the conversation with the newly adopted Delhi Declaration, and to mobilise a global, inclusive effort that equips policymakers, industry, and civil society, especially in the Global South, to develop, test, and govern trustworthy, agentic AI systems.


Tone of the discussion


Opening (0-15 min): Formal and forward-looking, emphasizing the significance of the Delhi Declaration and the need for a robust assurance ecosystem.


Middle (15-35 min): Becomes more technical and urgent, with detailed descriptions of testing, standards, and the risks of autonomous agents, while simultaneously stressing inclusivity and the challenges faced by low-resource regions.


Closing (35-56 min): Shifts to a collaborative, hopeful tone, featuring calls to action, acknowledgment of shared responsibility, and a rallying message to “download the reports, get involved, and roll up our sleeves” to grow the assurance “seed.”


Overall, the conversation moves from setting the agenda, through deep-dive problem-solving, to a unifying call for collective, cross-border effort.


Speakers

Chris Meserole – CEO of the Frontier Model Forum (FMF), focusing on frontier AI safety and security [S3].


Vukosi Marivate – AI researcher and co-founder of Lelapa AI; leads African language NLP initiatives such as Masakhane, building AI for Africans by Africans.


Frederic Werner – Chief of Strategic Engagement, International Telecommunication Union (ITU); works on AI governance, standards and AI-for-Good initiatives [S5].


Josephine Teo – Minister for Communications and Information, Singapore; leads Singapore’s AI assurance strategy and government-industry collaborations on agentic AI [S7].


Natasha Crampton – Chief Responsible AI Officer, Microsoft; advocates for operational AI assurance across borders, languages and cultures [S9].


Stephanie Ifayemi – Senior researcher at the Partnership on AI (PAI); co-author of reports on closing the global AI assurance divide [S12].


Madhu Srikumar – Moderator of the panel; senior leader at the Partnership on AI, involved in AI safety and policy coordination [S14].


Rebecca Finlay – Representative of the Partnership on AI; focuses on AI assurance ecosystems and policy frameworks [S17].


Owen Larter – Senior staff, Google DeepMind (also noted as responsible-AI public policy lead at Microsoft) [S19]; works on agentic AI standards, protocols and safety research.


Additional speakers:


Rameca – Mentioned by Chris Meserole in closing remarks; no further role or title identified in the transcript.


Full session report

Comprehensive analysis and detailed insights

Rebecca Finlay opened the closing session by reminding participants that the India AI Impact Summit brings together more than a dozen countries to “unlock innovation through trustworthy, responsible, beneficial AI” [1] and that the recent Delhi Declaration – adopted the day before – provides a timely catalyst for “accountability work” and the development of scientific evidence for policy [2]. She announced that the Partnership on AI (PAI) will launch two new papers that originated at the Paris Action Summit: one on “Strengthening the AI Assurance Ecosystem” and another on “Closing the Global Assurance Divide” [7-10][14-23]. QR codes for the papers will be displayed immediately after her remarks so attendees can download them on the spot [24-33][25-30].


Madhu Srikumar then defined AI assurance as “the process of measuring, evaluating, and communicating whether AI systems are trustworthy… a safety inspection, but for AI” [124-130]. She anchored the panel in the Delhi Declaration’s second commitment, which urges “multilingual and contextual evaluations” to ensure AI works across languages, cultures and real-world conditions [132-138]; Finlay had earlier welcomed the first commitment, on usage-data sharing, which PAI’s 2025 progress report on foundation-model impact directly recommended [46-55].
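
What a multilingual, contextual evaluation involves mechanically can be illustrated with a small harness. This is a minimal sketch under assumptions of our own: the model is a plain prompt-to-response callable and the per-language test items are hand-written stand-ins, not any benchmark named in the session. The point is the per-language breakdown, which an aggregate score would hide.

```python
from collections import defaultdict

def evaluate_multilingual(model, test_items, languages):
    """Run the same checks per language and report per-language pass rates.

    `model` is any callable prompt -> response; `test_items` maps a language
    code to (prompt, checker) pairs. Both are hypothetical stand-ins.
    """
    results = defaultdict(lambda: {"passed": 0, "total": 0})
    for lang in languages:
        for prompt, is_acceptable in test_items[lang]:
            response = model(prompt)
            results[lang]["total"] += 1
            if is_acceptable(response):
                results[lang]["passed"] += 1
    # Disaggregating by language exposes gaps a single overall score hides.
    return {lang: r["passed"] / r["total"] for lang, r in results.items()}

# Toy usage with a trivial "model" and two languages:
model = lambda prompt: "yes"
items = {
    "en": [("Is water wet?", lambda r: r == "yes")],
    "hi": [("Kya paani geela hai?", lambda r: r == "yes")],
}
print(evaluate_multilingual(model, items, ["en", "hi"]))  # {'en': 1.0, 'hi': 1.0}
```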


Singapore’s Minister Josephine Teo explained that “agentic systems have taken off” since the Paris summit, offering productivity gains but also introducing “new risk” because autonomy can amplify harm when systems malfunction and human oversight erodes [58-66]. She advocated a shift from “reactive regulation” to “proactive preparation”, describing Singapore’s sandbox as a place where the government “eats its own dog food” by testing agents in partnership with Google [69-73]. Her ministry has also published a model governance framework for agentic AI, maintained as a “live document” open to feedback [74-78]. Minister Teo outlined a three-pillar assurance model (testing, standards, and independent third-party assurance) as essential for building confidence and giving companies a market differentiator [85-89][97-109][110-112].


Vukosi Marivate (Masakhane) reinforced the Global South perspective, observing that “there is likely not as much collection … as in Europe or North America” and that limited data-annotation capacity makes assurance feel “far away” from developers [231-235]. He argued that effective assurance requires “local understanding” and “capacity and capabilities of policymakers”, rejecting a purely top-down approach [236-240][242-247].


Frederic Werner (AI for Good) highlighted the difficulty of translating high-potential AI use cases across regions, noting that “trust is the biggest challenge” and that standards must embed “human rights, inclusivity” and be adaptable to local contexts [145-166][167-176]. He warned that without such safeguards the promise of AI for Good could falter, especially for the 2.6 billion people still offline [173-176]. Werner also described AI for Good as the “Davos of AI”, though “extremely inclusive” rather than exclusive [145-166].


Owen Larter (Google DeepMind) described agentic AI as autonomous tools that can achieve goals on behalf of users – for example, arranging a dry-cleaning service without step-by-step instructions [186-190]. He announced concrete technical work on an “agents-to-agents protocol” and a “universal commerce protocol” to enable safe, interoperable communication, likening them to early internet standards such as HTTP [202-209][205-208]. Larter noted that the U.S. government, through its Center for AI Standards and Innovation (CAISI), is launching an agent-standards initiative this week [210-214]. He warned of security risks when agents access sensitive accounts and detailed collaborations with VirusTotal to scan downloaded skills for malware [222-224]. To broaden access, he highlighted the development of low-compute “Flash” models that are “relatively cheap, quite efficient, very, very quick”, intended to lower testing costs for resource-constrained settings [351-357].
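
The handshake Larter describes (“this is my ID, these are my capabilities, this is what I’m trying to do”) can be made concrete with a small sketch. The field names and the negotiation rule below are assumptions for illustration; they do not reproduce the actual schemas of the agents-to-agents or universal commerce protocols.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    """Minimal identity-and-capability announcement an agent might present.
    Field names are hypothetical; real protocols define their own schemas."""
    agent_id: str                                          # "this is my ID"
    capabilities: list[str] = field(default_factory=list)  # "these are my capabilities"
    goal: str = ""                                         # "what I'm trying to do"

def negotiate(requester: AgentCard, provider: AgentCard) -> bool:
    """Accept the interaction only if the provider advertises a capability
    the requester's stated goal mentions; a stand-in for real negotiation."""
    return any(cap in requester.goal for cap in provider.capabilities)

# Usage, echoing the dry-cleaning errand in Larter's example:
me = AgentCard("user-agent-1", ["scheduling"], goal="book dry cleaning pickup by Friday")
shop = AgentCard("cleaners-agent", ["dry cleaning", "pickup"])
print(negotiate(me, shop))  # True: the shop's capabilities match the goal
```

As with HTTP in the early internet, the value lies less in any particular schema than in both sides agreeing on one.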


Stephanie Ifayemi (PAI) summarised the two papers, identifying six challenge areas that keep the assurance divide open: language diversity, differing risk profiles, infrastructure bottlenecks, incentive structures, professionalisation of assurance practitioners, and the need for a tiered assurance approach [255-262][268-276][277-285][286-295][296-301]. She gave an example of risk-profile priorities: Pacific Island nations focus on environmental impacts, whereas other regions may prioritise privacy or fairness [166-176]. She stressed that “north-south collaboration” is vital so that Global South countries are not left out of emerging standards on agent attribution and identity [292-300]; NIST’s Center for AI Standards and Innovation (CAISI) has opened an opportunity to comment on a paper around agent attribution and identity [292-300]. The paper also calls for incentives such as insurance products and accreditation schemes, citing the UK AI Safety Institute’s $100 million inaugural fund as an example [363-376]. Additionally, Ifayemi referenced a PAI paper on real-time failure detection and monitoring of agents, which proposes tiering assurance by risk, reversibility, and affordances [420-423].
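
The tiered approach mentioned here, which Ifayemi later links to the stakes of the use case, the reversibility of actions, and the affordances given to the agent, can be sketched as a simple decision rule. Tier names and thresholds below are hypothetical choices for illustration, not PAI’s published framework.

```python
from enum import Enum

class AssuranceTier(Enum):
    BASIC = 1       # self-attestation and standard pre-deployment testing
    ENHANCED = 2    # independent evaluation before deployment
    CONTINUOUS = 3  # third-party audit plus real-time monitoring

def assurance_tier(high_stakes: bool, reversible: bool, autonomy: int) -> AssuranceTier:
    """Map the three factors named in the session (stakes, reversibility,
    affordances/autonomy) to a tier. Thresholds are illustrative only."""
    if high_stakes and (not reversible or autonomy >= 2):
        return AssuranceTier.CONTINUOUS   # e.g. medical decisions, payments
    if high_stakes or not reversible:
        return AssuranceTier.ENHANCED
    return AssuranceTier.BASIC            # e.g. drafting text a human reviews

# A financial-services agent that can move money on its own:
print(assurance_tier(high_stakes=True, reversible=False, autonomy=3))
# AssuranceTier.CONTINUOUS
```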


All speakers agreed that a comprehensive, global AI-assurance ecosystem, combining rigorous testing, clear standards, and independent verification, and embedded from the start of system design, is essential. They also concurred on the importance of multilingual and contextual evaluation to make AI trustworthy across diverse languages and cultures [133-136][24-26][262-267][429-433][452-455]. Finally, they emphasized that global collaboration among multilateral bodies, governments, industry, and civil society is required to avoid exclusion of the Global South [140-144][307-314][291-300][433-438][452-455].


Disagreements emerged around implementation. Minister Teo framed assurance as a “strategic competitive advantage” that companies can leverage, whereas Ifayemi and Natasha Crampton argued that assurance should be treated as shared public infrastructure, requiring incentives such as insurance and professional accreditation [85-89][433-438][363-376]. Larter advocated for top-down, universal technical standards (agents-to-agents, universal commerce) to ensure interoperability, while Marivate warned that such standards risk being “top-down” and missing local values unless capacity-building is prioritised [202-209][236-240]. On compute resources, Ifayemi highlighted the massive GPU-hour requirements of current evaluation pipelines [280-282], whereas Larter suggested that the new low-cost Flash models could mitigate these barriers [351-357]; the tension reflects differing views on whether technology alone can close the infrastructure gap.


Key take-aways included:


* Minister Teo’s proactive sandbox approach, positioning the government as an early-adopter and credibility builder [69-73];


* Werner’s reminder that “trust is the biggest challenge” and that standards must embed human rights;


* Larter’s analogy of agent protocols to early internet standards, providing a concrete roadmap for interoperability [202-209];


* Marivate’s emphasis on local data and policy capacity, underscoring the risk of unsuitable top-down frameworks [231-240];


* Ifayemi’s systematic breakdown of six challenge areas, offering a clear agenda for closing the assurance divide [255-262][292-300];


* Crampton’s assertion that assurance must become an “operational discipline” built into the development lifecycle, with continuous post-deployment monitoring [425-433][420-422].
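
A minimal sketch of what that continuous post-deployment monitoring could look like in code, assuming a toy agent and a policy callable (all names here are hypothetical, not any speaker’s actual tooling): every proposed action passes a policy gate, is logged with a timestamp, and fails closed if blocked.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-monitor")

class MonitoredAgent:
    """Wrap an agent so every proposed action passes a policy gate and is
    logged with a timestamp: continuous monitoring, real-time detection,
    and a clear accountability trail, in miniature."""

    def __init__(self, agent, policy):
        self.agent = agent      # callable task -> action (stand-in)
        self.policy = policy    # callable action -> bool (stand-in)

    def act(self, task):
        action = self.agent(task)
        allowed = self.policy(action)
        log.info("t=%.0f task=%r action=%r allowed=%s",
                 time.time(), task, action, allowed)
        if not allowed:
            # Fail closed: blocked actions surface immediately for review.
            raise PermissionError(f"action blocked by policy: {action!r}")
        return action

# Toy usage: the policy forbids destructive actions.
agent = MonitoredAgent(
    agent=lambda task: f"execute:{task}",
    policy=lambda action: "delete" not in action,
)
agent.act("send weekly report")  # logged and allowed
```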


The panel produced concrete actions and identified unresolved issues. The two PAI papers will be released via QR codes for immediate download and community feedback [14-23]. Singapore will continue operating its agentic-AI sandbox, keeping its governance framework “live” for iterative improvement [69-76]. The Delhi Declaration’s commitment to multilingual evaluation provides a policy anchor for future standards work [25-30]. Google DeepMind will advance interoperable agent protocols, make low-compute Flash models publicly available, and support CAISI’s agent-standards initiative [202-209][210-214][351-357]. The ITU and other multilateral bodies were urged to facilitate inclusive standards development, capacity-building, and the creation of shared evaluation infrastructure [140-144][307-314]. Open questions remain on designing scalable multilingual benchmarks, ensuring equitable compute access, operationalising tiered assurance that matches risk profiles, governing real-time monitoring and accountability for autonomous agents, and expanding the pool of independent third-party auditors, especially in low-resource regions [262-267][280-282][351-357][420-423][107-109].


In the closing keynote, Natasha Crampton stressed that AI assurance must move from theory to an “operational discipline” embedded throughout the system development lifecycle, with “continuous monitoring, real-time detection and clear accountabilities” for agentic systems [411-422][425-433]. She called for shared evaluation infrastructure, common taxonomies, and investment in Global South capacity, framing assurance as the foundational infrastructure that will enable trust and adoption of autonomous agents [435-438][440-443]. Chris Meserole echoed this sentiment, summarising three core themes – evolving assurance understanding, global collaboration, and shared responsibility – and issued a final call to download the reports, join the collaborative effort, and treat assurance as core infrastructure [444-452][458-465].


Overall, the session linked the Delhi Declaration’s policy momentum with concrete technical and governance proposals, highlighted the urgency of closing the global assurance divide, and produced a clear roadmap of collaborative actions required to build a trustworthy, inclusive AI ecosystem for the emerging era of autonomous agents.


Session transcript

Complete transcript of the session
Rebecca Finlay

in 19-ish countries, and we’re all focused on what does it mean to unlock innovation through trustworthy, responsible, beneficial AI. And so, of course, no surprise, gatherings like the one that we’ve had this week are really crucial for the work we do, and with the Delhi Declaration adopted yesterday, this is an even more important moment to build on where we have come from, to lean in, and to really get to work around some of the questions of the accountability work that needs to be done, the scientific evidence that we need to build around frameworks and good policy moving forward. And, of course, it’s extraordinarily important that this is happening in India, that it’s bringing a whole set of voices and perspectives and leadership that is not optional.

At PAI, we believe… We believe that that is fundamental to building a global community committed to this work, and it’s great… to see it in action this week. So thank you all for being here with us. So today we’re going to give you an opportunity to see two of our latest papers. These are papers that were begun out of the Paris Action Summit. And at that time, as we were thinking about moving into action and invasion, we felt that work needed to happen with a good sense of what the assurance ecosystem looked like. So we’ve had working groups underway developing these two new resources. They’ll be up on the screen at some point. You’ll be able to get a QR code and download them.

Feel free to talk to any of us. The first one is Strengthening the AI Assurance Ecosystem. It really looks at telling and helping national policymakers, if you’re building a robust industrial AI strategy, you better have a comprehensive AI assurance strategy as well. And you need to be able to do that. And so we’re going to be talking about that. We need to think about all those actors and what they look like. We’re going to hear about one of the experts, of course, in this as soon as the minister comes to join us. The second piece, which is really important, we think, for this conversation is what does it mean to do AI assurance? globally around the world?

How do we close the divide that exists? What is different about the challenges faced by countries in the Global South versus others? So we’re really hoping that these resources not only are good, substantive contributions to the work that needs to be done, but the idea is to just catalyze, you know, sort of plant a number of seeds across a number of ways in which assurance works so that those can grow and really come to life out of this. And just two quick comments on that. Now that we have half the declaration, and so now we can, as opposed to earlier in the week, start to articulate it, really leaning in with regard to the commitments around, in commitment one, around usage, clarity around usage data, really trying to give some empirical grounding to this work.

In 2025, in our progress report around foundation model impact, we made exactly this recommendation. We directly called for Frontier AI companies to share usage data. We’ve been tracking progress, and there has been some progress in that regard. So we are delighted to see this particular commitment to come about and to start to see some standards about how that usage data is going to be shared. So we’re very pleased to see that work. We’re also very pleased to see the second commitment around strengthening multilingual and use case evaluations. And you’ll see, if you do download the report on the global assurance divide, that that is clearly a key piece of work that needs to happen. So this afternoon, we are going to give you an extraordinarily expert panel that brings a real diversity of perspectives to this work.

And so we want to take the assurance question and apply it to agents. Because that’s where the world is going. We’re all seeing them in the news every day. We’re seeing them integrated into foundation model systems. So what does it mean? to take what we know about assurance and think about the applications that agents will add to the complexity of that work. So let me begin by introducing our first speaker. She’s probably been one of the most visible ministers this week because of the extraordinary leadership that Singapore has taken when we think about AI assurance. I know you’re going to talk a little bit about that. Such a pleasure to welcome you, Minister Josephine Teo.

She’s going to come and say some words for us before the panel begins. Thank you.

Josephine Teo

Thank you very much, Rebecca, and also very much appreciate Partnership on AI for the invitation. When this series of summits first began in Bletchley, AI agents were not a thing. Nobody was talking about them, even just 12 months ago. When we had the AI Action Summit in Paris, it had barely crept into the conversation. At the time, the preoccupation was all around DeepSeek and what it told us about the capabilities that are emerging out of China. But today, as Rebecca correctly identified, agentic systems have taken off. They are increasingly being used and we need to have a better grasp on how to deal with this issue because agentic AI certainly offers transformative possibilities in how we delegate and orchestrate work when deployed strategically.

Agents function as invaluable teammates, unlocking productivity gains and time savings, which we all want more of. However, I should also add that this autonomy, the very nature of how agents can be helpful to us is autonomy. This autonomy also introduces new risk. The potential for harm increases when systems malfunction and human oversight is minimized. We are no longer present, or at least our presence is diminished to a very large extent. The implications may be complex and not fully predictable. So the way my colleagues and I have been thinking about this is that there needs to be a shift. There needs to be a shift in terms of how we might want to rely on reactive regulation to a different kind of stance, which is proactive preparation.

And in Singapore, that’s what we’ve been trying to do. We’ve tried to be proactive about governing the new risks in the era of agentic AI. And I think it starts with the government itself being a leader and not a laggard in using agentic AI. We need to test it. We need to look at how the solutions can not only enhance public service delivery, but we also need to be able to put in place more controls. Government is high risk because the touch points with citizens are very sensitive. No citizen and no government wants to make serious mistakes when they interact with their citizens, telling them things about their health, telling them things about their social security, telling them about things to do with their benefits that are not accurate, and having them not just being told but acted upon.

So this need to ensure that we know what we’re doing is a very high one. And the way we are also thinking about it is to try and work with industry. So, for example, between Google and Singapore government, we have a sandbox on agentic AI. It’s one of the ways. We think we can, in a way, eat our own dog food. Try it. You know, does it taste all right? Does it hurt us in a very significant way? Because if we were not able to do so, I don’t think we have a lot of credibility in terms of how we want to govern agentic AI. But we can’t wait, you know, for the dog food to materialize in its consequences for ourselves.

In the meantime, my colleagues have put together a model governance framework for agentic AI. It is meant to provide practical support to enterprises so that they can also deploy autonomous agents responsibly and to mitigate the risk. We know that this is not a complete solution and this document that we put out has to be a live document. We very much encourage feedback and as a way for us to keep improving the guidance to enterprises. Can I also just add that as we do this work, what is the… meaning and what is the purpose behind it. Ultimately, it is to build confidence in the use of agentic AI systems. And we think that at many levels, this confidence has to be presented, has to be demonstrated to boards of organizations, to customers, to other stakeholders.

And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca talks about comes in. It is an absolutely essential part of building trust over the medium to longer term so that there is a way, a foundation upon which agentic AI systems can be made more readily adopted and available. I should also say that for companies that are thinking about it, and I see Microsoft here, and I’m sure that there are other companies represented. If we are to trust these agentic systems, the safety aspects should not be downplayed. And I would venture to say that a company that is able to give a high assurance on safety will find itself being differentiated from their competitor.

It’s more likely to translate into stronger interest in a product and service. So rather than think of it as something that you are unhappy to comply with, think of it as a strategic competitive advantage. And that is a way I think that will give us the confidence to put it forward. The question, however, is that are we completely without experience in this regard? And the answer is no. In aviation and healthcare, there are a lot of measures being put in place to give assurance to passengers. When we board a plane, we usually expect to arrive. When we visit the hospital, we generally expect to be treated, except for disease conditions that are not yet well understood.

But the trust in these systems has to be built over time, and they don’t come without some assurance being put in place. The question is for AI, and specifically agentic AI, what would be the components? What leads to an assurance ecosystem that would be robust enough? We think that there are at least three components. The first is that there must be testing. We need some way of making sure that there are technical assessments of the system to make sure that the systems are robust, they are reliable, and they’re safe. And a lot more work needs to be done in this space, developing the testing methodology, building the testing datasets, and also making sure that the testing of agentic systems takes into account that these systems are robust.

These systems are going to be much more complex than multi-agents, for example, and it’s not just the output, but the in-between steps, how the reasoning takes place, and what is the orchestration that is being built into the agentic systems. So that’s the first, testing. Second is that eventually we will need standards. We cannot just define what is good enough. We also need to assure the users that it has met expectations in safety and reliability, and so these are still very early days. Thirdly, we think that this ecosystem cannot do without third-party assurance providers. It’s one thing to claim that your agentic AI system is safe. It’s another thing to have someone attest to the safety of it.

So these could be technical testers, auditors, and they provide independence, augment in-house capabilities, and also help to identify the blind spots, and it’s necessary for us to strengthen this pool as well. So I’m going to stop here. I want to conclude my remarks to say that Singapore is actively building these components, and we welcome conversations with partners and colleagues because we know that we cannot do this alone. So we look forward to discussions in the three panels on how we can meaningfully collaborate on assurance for agentic AI. Thank you very much once again, Rebecca.

Madhu Srikumar

Thank you. Thank you. We’re all here. It’s the end of the conference, and we’re all intact. Thank you so much, everyone, for joining us. Thank you, Minister Teo, for the keynote. One quick note before we dive in. Our panelist, Fred, has a flight to catch, so he’ll need to slip away a few minutes early, but, Fred, we’ll make sure we get your best insights before you escape. No pressure. So we are the last session, so we are standing between you and whatever you have planned right after. So I promise we’ll make this worth it. We have an incredible panel and a lot of ground to cover. So before we get started, what do we mean by AI assurance?

Because you’re going to keep hearing that term quite a bit here. So really put simply, AI assurance is the process of measuring, evaluating, and communicating whether AI systems are trustworthy. Are they safe? Do they work as intended? Can the public actually trust them? So really think of it like a safety inspection, but for AI. You wouldn’t want, you know, you’d want an independent inspector checking a building. Not just the builder saying, trust me, it’s fine. So really, AI assurance is about independent verification, as Minister Teo went over. And why this panel? Why now? So the summit unveiled the New Delhi Frontier AI commitments just yesterday. And the second of those commitments is about strengthening multilingual and contextual evaluations.

So really making sure AI systems work across languages, cultures, and real world conditions. And really, that’s the assurance challenge in a nutshell. And our panel today is about whether we are actually equipped to deliver on that promise globally and not just in a handful of countries. So really, our panelists span the ITU, Google DeepMind, the University of Pretoria, and PAI. So we have the range to actually wrestle with this question. So with that, I’m going to get into our first question for today. Fred, that’s going to be you. ITU has been convening on AI governance through AI for Good and working on standards across borders. So really, when we talk about AI assurance, what does it mean to you, ensuring that these systems are safe and trusted?

And how do we think about assurance when 2.6 billion people remain offline and may be excluded from the frameworks being designed?

Frederic Werner

Yeah, thanks for that great question, and thanks for having me here. So I think it’s safe to say there’s no shortage of high-potential AI for Good use cases, everything from affordable health care to education for all, food security, disaster response, and also looking at more applications in the physical manifestations of AI that you see in robotics, embodied AI, brain-computer interface technologies. The best part of my job at AI for Good is I see these use cases coming across my desk every day. And I can tell you when we started AI for Good in 2017, it was mainly in PowerPoint slides. They didn’t really exist. But as we got into, say, 2023 with GenAI, last year, the unofficial theme of AI for Good was the rise of the AI agents, a bit scary, Terminator-like, but that’s what people were talking about.

And we’re really going from sort of the promise to the pilots to the use cases and now scaling. Now, when you’re looking at these use cases, I think one big challenge is trust. How do you trust them? I mean, there’s always the good intention, right? But is that trust there? And also, are they replicable and scalable? And I’ve yet to see, you know, a high-potential use case developed in Brussels work equally well in Johannesburg and Shenzhen and maybe Panama. Like, it’s just, we haven’t really reached that yet. And if you look at these sort of fast-emerging governance frameworks around the world, whether you’re in the U.S. or EU or China or everything in between, I think there’s a lot of good intentions, a lot of good thinking.

But how do you turn those ambitious words and principles into actions? Because the devil is in the details, and I think standards have details. So when you’re thinking about how do you – especially when you start to get into AI agents and you really – that trust element is becoming ever more critical, how can you bake in a lot of the common sense things that we’ve been talking about all week or even for the past years at AI for Good? Are they trustworthy? Are they verifiable? Are they secure? Are they safe? Are they designed with human rights principles in mind? Are they inclusive? Are people from the global south participating? Are they at the table when we’re drafting and developing these standards?

So these are not always natural reflexes, and at the same time, it’s hard to turn words into action. So one of the tools, I’m not saying it’s the only tool, but I think as these solutions start to scale and businesses start to interact nationally or even internationally, at one point you’re going to need standards, and it’s within those standards that you can kind of bake in those common sense principles that we’ve been all talking about. And I forget the last part of your question. It was really a question about… Oh, connectivity. That was it, yes. …2.6 billion people who remain offline, yeah. Yeah. Yeah, so, you know, ITU’s mission is connecting the world, and a third of the world is still offline.

And, you know, large parts of the world actually have connectivity, but there’s actually no incentive to connect. So if there’s no content in your local language or dialect, or no access to government services or useful applications that are fit for purpose where you live… you know, why would you connect? So I think AI can actually help to remove that friction where you have a lot of bottlenecks, for example literacy, disabilities, again, like content in your own language or dialect. So I think one thing is closing the connectivity gap, but the other thing is actually using AI to remove that friction. And the last thing I would say is, I think sometimes there’s a comparison where, if you take East Africa, for example, and you have the mobile payment miracle or revolution with M-Pesa, right, you effectively leapfrog decades of legacy infrastructure. And there may be a kind of optimism that, well, the same thing could happen with AI in the global south. Maybe, but I don’t think we can take it for granted that if that happens, it goes in the right direction. It’s not a guarantee that just by putting the tool in the hands of the people that they’re going to create value, they’re going to use it responsibly, they’re going to use it to solve local challenges, build more cohesion and community. But those aren’t for granted.

So I think that whole AI skilling angle of really educating people from grade school to grad school to diplomats and everyone in between, if you don’t address that literacy piece, then it’s just going to be a crapshoot. We’re not sure

Madhu Srikumar

Great. I mean, it’s a good transition. Speaking of standards, Owen, Google DeepMind recently deepened its partnership with the UK AI Security Institute on safety research, so including work on monitoring chain of thought and evaluations. So really from an industry perspective, you know, what does robust AI assurance look like? Where do you think the gaps and opportunities are between what Frontier Labs kind of do internally and what’s needed for broader public trust?

Owen Larter

Yeah, thank you, Madhu. And thank you to Rebecca and Partnership on AI for convening this really important conversation. And a big congratulations to our Indian hosts for a fantastic week at the summit. This week, maybe start talking a little bit about what… agents are, we’re increasingly excited about them at Google DeepMind. They’re essentially more autonomous systems that instead of just following basic instructions can actually achieve goals. So let’s say I want to get my suit dry cleaned on Thursday, instead of taking an AI system and say, find a website for a dry cleaning company, see if it’s open on Thursday, see what the hours are, see if it’s within my budget. You can just say to your agentic system, go find a way to dry clean my suit, make sure it’s being picked up by Friday, and it will go and interact with those different websites and try and find a way to meet your goals.

All kinds of fantastic applications already that we’re seeing right across the economy. We’re using increasingly agentic coding systems at Google and Google DeepMind to do a lot of our coding. So we have our Antigravity framework, which is fantastic. You can interact with it in normal, natural language and say, build me a website, build me a tracking system to follow a particular bill that I’m interested in, and it will really help you achieve these goals. I think you’ll increasingly see agents used right across the economy as well. I think we’re just in the early years of a new AI enabled agentic economy. I think you will have very normal interactions with agents on a regular basis that will pop up on your phone screen and say, hey, it’s been a few weeks since you bought toothpaste.

Would you like me to go and take care of that and get some more toothpaste for you? You mentioned standards, which I think is going to be a critical part of getting all of this right. There’s a couple of dimensions to the standards. So firstly, we need to create the sort of technical protocols to actually underpin this agentic economy. So we’ve been trying to contribute to this conversation. There is the agents-to-agents protocol that Google has launched. There’s the universal commerce protocol. This is basically a way of helping agents talk to each other and agents talk to websites so that you have standardized sets of information. An agent will basically come to an agent or an agent will come to a website and say, this is my ID.

These are my capabilities. These are what I’m trying to do. I think in the same way that we developed protocols and standards in the early 90s to underpin the internet like HTTP, like URL, we’re going to have to build these out. There are then also assurance standards, which are related, but I think very important as well. We need to make sure that we’re understanding the capabilities of these systems. We need to keep making progress on how we can test for the risks that they may pose and then work right across society to come up with ways to mitigate that. I think the work that the safety and security institutes are doing around the world is absolutely critical.

So Minister Teo mentioned some of the work that we’re doing in Singapore. The UK AI Security Institute has been world leading on this. I think this is an area where we’re going to see more from the AISIs and CAISIs right across the world. The US government also, through their CAISI, is launching an agent standards initiative this week as well.

Madhu Srikumar

Great. And if you don’t mind a follow-up question, that’s a really important point that you pointed out, that we currently need interoperability. We need agents to flourish. We need to find a new way to kind of imagine this paradigm. But I’m curious if there’s a safety challenge when it comes to agents that keeps you up at night.

Owen Larter

Yeah, I think there are definitely risks to be mindful of. So I think agent security is something that we should all be thinking a lot about. If we’re connecting increasingly autonomous systems into different accounts, different email accounts, different bank accounts, I think we want to be pretty careful about how we do that and come up with superior security protocols, and that can be helpful there. We’ve actually been doing some work with VirusTotal, which is part of the Google security operations team at Google, to make sure that when certain agentic systems are downloading skills or downloading apps from agentic websites, they’re being scanned for malware, or vulnerabilities are being detected, so that they can be addressed before people put them onto their computer. I think there’s also a concern that these agentic systems could create new capabilities that could be misused. So across the cyber security domain, for example, I think some of the frameworks that we have already at Google DeepMind will be helpful here. So we have our Frontier Safety Framework, which we use to test models before we put them out into the real world.

We think about how those models are going to interact with systems, how they might be parts of agents as we’re doing that work.

Madhu Srikumar

All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that have, you know, started playing around with these systems. But I imagine it’s reaching lay consumers very soon. So, Vukosi, you have built Masakhane for African language NLP, really building AI for Africans by Africans. When assurance frameworks are designed in the U.S., U.K., or Singapore, how well do they translate to contexts where the data, the languages, the deployment conditions are completely different? What do we think we’re missing?

Vukosi Marivate

that we do get to understand that it’s a very different thing. My experience has been that there’s likely not as much collection in Europe or North America or annotation as much as is happening now in the global south. But then that also means that it feels like it’s further away, right? It’s not where the developers are. And that then requires more of this conversation in one place. So that, again, there must be kind of a local understanding. The last piece to that is going to be the capacity and the capabilities of then the policymakers in those countries to be able to understand that part. It will not be top-down. I don’t believe that. It will be them understanding whether it’s labor laws, it’s data governance, it’s just monitoring of systems once they’re on.

If there is not that capacity or capability to actually do those things, again, it’s a more automated direction that is not necessarily what the values of those people actually are.

Madhu Srikumar

Those are important words right at the end of the conference, knowing just how much we have to get done here. So Steph, over to you. PAI just released work on closing the global assurance divide, a lot of what Vukosi just mentioned. What are the concrete gaps you’re identifying? Is it capacity to conduct third-party evaluations, as Minister Teo mentioned? Is it access to the models being tested, or is it something else? What would it take to really close those gaps?

Stephanie Ifayemi

Awesome. Thanks so much, Madhu. And as one of the PAI folks, thanks for being here, everyone. It’s great to see you all. I know it’s a Friday evening, so we’re in between you and cocktails or whatever you have planned, so we very much appreciate it in the last session of the day. So I think it’s such a good question, and I think your question talks about some things that recognize that those challenges aren’t actually just Global South challenges. I just want to start with the fact that we’ve released two papers. One is on closing the assurance divide, and the other is how we strengthen the global assurance ecosystem generally. And the question of access is one that impacts us all, actually.

In the UK, for example, the Department for Science, Innovation and Technology, I believe that’s what DSIT stands for, has made access to models as a means to support assurance a priority for 2026. And so I think that there are a few shared challenges, and I’ll come back to the point around north-south collaboration, actually, in a second. But just thinking about closing the AI assurance divide, we released this paper, and in it we talk about around six challenge areas, from infrastructure to skills. We talk about languages and risk profiles, so the things that you’ve heard about from Vukosi and a lot of the other speakers. So I’ll give you a sense of some of the examples that we have.

So on language, we’re at the India Summit, of course, and India has over, I believe, 120 languages and 19,500 dialects. When we think about Africa, we have about 1,500 to 3,000 spoken languages. So when we think about benchmarking and evals and designing evals that think about how those systems are deployed in these various contexts, it’s so important to think about languages, and that just generally, I think, demonstrates the complexity of designing evals to meet the needs of this kind of diverse language ecosystem. Rebecca mentioned at the start that we had the declaration, of course, yesterday, and the commitment therefore in the declaration to multilingual evals is really critical. Of course, there’s still a lot of work to determine how do we actually do that in practice in the most effective way, and accounting for that complex and wide language diversity, but that’s one area that we talk about.

The second in terms of closing the assurance divide that we need to account for is risk profiles, interestingly. In this paper, we actually interviewed a lot of assurance and safety experts internationally. And one of the things that they mentioned was differences in what they might prioritize when you think about assurance. So when you think about the Pacific Island nations, for example, they would be thinking about assuring for environmental impacts differently than maybe environmental impacts would be considered as important in the US at the moment, for example. Last year, we published a paper on post-deployment monitoring. And in that paper, we talk about sharing kind of data from companies. And one of the points that we talk about is environmental impacts.

And so it’s really interesting that, I think, in terms of closing the divide, the starting point or what you put emphasis on might vary. And that’s important to note as we’re designing things like documentation, description, and so on. And so I think it’s really interesting to see what we’ve kind of focused on. The third I’ll just quickly mention is, of course, infrastructure. I think we’ve probably all heard a lot about this throughout the summit and this idea of what it means to be sovereign and which parts of the stack to prioritize. And that is really, really important. But there are tradeoffs. So in terms of importance, I was looking at a stat that Stanford’s HELM evaluations used over 12 billion tokens and they required 19,500 GPU hours alone.

And so when you think about the kind of infrastructural needs, it creates barriers for a lot of countries in the global south. But I was at an interesting roundtable, actually, that even Carnegie was convening. And we were talking about how do you balance assurance needs? Where do you start from across the value chain? So at the moment, a lot of the discussion is kind of upstream. Right. We need to have that infrastructure in place. That’s the point that we need to start with. But how do you do that in parallel and how much of that resource should be put into other foundational tools for assurance, such as documentation artifacts, which is another area that we focus on a lot at PAI.

And so I think there will be a lot of questions around how do you weigh up all these challenges, again, knowing that even kind of the G7 countries, the UK AI Safety Institute started with an inaugural $100 million alone. So that prioritization and balancing is going to be important. The last thing I’ll say, coming back to agents, and I will talk about this a bit more, is the North-South collaboration is a real opportunity as we think about agents. And it’s important that global South countries aren’t always playing catch up. I think that’s a point that has come through for me from the summit, which is that NIST, or CAISI, the Center for AI Standards and Innovation.

And this is almost like a test for me of kind of saying these names of these institutions through this panel. But they just announced a few days ago that they’re going to be working on standardizing work around agents, including that they’ve released an opportunity to comment on a paper around agent attribution and agent identity, I believe, which is really interesting. And there’s, of course, a lot of push for countries to collaborate. And you see a lot of the safety institutes collaborating on questions around assuring agents in the global north. But how do we ensure that global south countries aren’t missing from that? That will have implications for how we attribute agents, how we test agents.

And we shouldn’t just assume, again, whilst those upstream points and infrastructure is important, that in parallel, they’re ultimately part of these kind of thinking ahead questions and frameworks.

Madhu Srikumar

Great. So I’m going to take the moderator’s prerogative and have us do a rapid fire. And by rapid fire, I mean every answer is a minute and 30 seconds, which, let’s be honest, is fairly rapid for AI policies. I’m going to start with Fred because I’m more nervous about your flight than perhaps you are. So a minute and 30 seconds. What role should multilateral institutions like ITU play in making globally inclusive AI assurance happen?

Frederic Werner

Yes, I think AI for Good has a pretty ambitious goal, right? It’s simply put, it’s to unlock AI’s potential to serve humanity. Pretty big. But we can’t do it alone and no one can. It’s not one country and not one institution, not one NGO. That’s why we have 50 plus UN sister agencies as part of AI for Good, but also making great efforts to bring as many diverse voices to the table from the global south, from NGOs, from civil society. It’s always been extremely open. I like to think of it as the Davos of AI, but instead of being very exclusive, it’s extremely inclusive, right? So I think that’s a bit of a philosophy behind AI for Good.

You know, I think the AI, it’s just moving so quick. So the focus has always been on practical applications, practical solutions. But in doing that, you can tease out the next generation of standards, of policy recommendations, of collaboration and partnerships around the world. So I like to think that in the doing, you have the learning, right? And it’s not just about talking. And that’s what AI for Good has always been all about.

Madhu Srikumar

Thank you. That was incredible. You have 56 seconds left. So, yeah, I’m going to move us ahead to Vukosi. So Singapore’s aim is test once and comply globally. So from a Global South perspective, what would make that interoperability real rather than a form of exclusion?

Vukosi Marivate

Yeah, that’s a hard one. I think, going back to it, the other thing that’s come out of a lot of the sessions here has been on the evaluations and how evaluations are used. And I think that’s a really important thing, because either, on one side, it’s going to take you a lot of resources to actually put up the evaluation to be so all-encompassing, or, on the other side, to run it is going to be a lot. But then when it comes down to the user, which I think was our second panel that I was in this week, and you’re trying to think about personalization, if you’re going down to an individual, what experience do they actually have, and how do you get to there?

There will be some more high-level safety things that will likely come out and people will be working on that, and maybe that’s what I’m thinking Singapore is trying to go for. But then when we’re getting to what the individual experience is, given that you have the stochastic systems, you don’t know what is going to happen necessarily. I know we’re trying to do that, but we don’t really know what’s going to happen at the individual experience, and we can’t remodel all of that. It’s going to require that, again, you do have, closer to where the user might be, things on what actually that experience was. So one of the hats I wear is I’m a co-founder of Lelapa AI, an AI startup.

And there you will be doing more testing towards, hey, we are serving this client. We’re serving them in this way. And then you’re trying to then go in and say, where is your data coming from? What is the use cases? What are we testing for in terms of their operational kind of requirements? It would necessarily not be just one. But, yes, what you might want is

Madhu Srikumar

Yeah, that’s a great point. Assurance needs to be globally decentralized. Owen, given everything we have discussed, what’s one commitment Frontier Labs should make on assurance that would actually move the needle?

Owen Larter

Yeah, good question. Um, I think there’s a question of access to the technology, which is important here. I think it’s one of the big themes of this conference, certainly one of the things that I’ll be taking away. So I think the multilingual part of this is really important: understanding, respecting local cultures. That’s important if you’re going to have a good product and if it’s going to be used broadly. We’ve been investing in Gemini for some time now to make it better, more representative across different languages. We have partnerships that we’re doing here in India, including with the IIT Bombay, to help improve performance across various different Indic languages. It’s also really important on the safety and security front as well to have benchmarks that are available in different languages. Fantastic work that MLCommons are doing on this front that we’re pleased to support. The other bit of access that I think is really important is having things that are quick and cheap enough for everyone to use. One of the things of agentic systems is that they’re actually pretty compute-intensive to use. We have a range of models that we have developed and are bringing to market at Google DeepMind, including our very quick Flash models, which are relatively cheap, quite efficient, very, very quick.

We think these can play a really important role in powering agentic systems. They’re also going to matter if we’re going to do effective and rigorous testing of these systems, because that can be very compute-intensive as well. So thinking about that access piece is something we all need to keep doing. And it’s not an easy question, really: doing it safely means ensuring that third-party assurance providers consider the security questions at hand, and that remains an open problem.

Madhu Srikumar

So, Stephanie, no bias at all since we’re both at PAI, but I wanted to give you the final word. What concrete outcomes do you think we want to see from the global AI assurance work in the next 12 months? What would success look like?

Stephanie Ifayemi

So, Owen, now that you’ve said your one point, by the way, we can hold you accountable for delivering on the access question. In the two papers, we talk about the need to build a robust assurance ecosystem, and one of those things is changing incentives. Funny enough, in another session this week there was a question about whether the way we’ve talked about safety over the last few years still shows divergences or whether we’ve converged. There are a few themes we’ve actually converged on, which is nice, and I think assurance is one of them. And this week, a lot of the discussions we’ve had are in some of those incentive areas, like insurance to support assurance.

So what does that look like? How do we drive new incentives, or put some of these structures in place, to build a more mature and robust ecosystem? I think that’s going to be really important. The second is professionalization. There are a lot of questions around how you trust the assurer. So how do we think about the skills involved? What does accreditation look like for assurance organizations or individuals? That will also help with questions around access, so that’s the second piece. And because this session is also about agents, I’d note that some of those foundational questions haven’t yet been resolved.

So I’m hoping that we can move the dial and start thinking about how you apply all of this to some of these future questions. And just to shout you out, Madhu: Madhu is the brains behind our safety work, and she wrote a paper on real-time failure detection and monitoring of agents. What I really like about that paper is that it sets out a tiered approach to assurance. When you think about agent deployments, you need to think about assurance based on the risks or the stakes at hand. Is it in the financial services sector? Is it making medical decisions? How do you tie assurance as closely as possible to the use case and its risks?

And that needs to be linked to reversibility: what is the possibility of reversing an agent’s actions, and what are the consequences if you can’t? Third, we have affordances: what affordances do you give the agents, and how much autonomy do they have? So how do you design an assurance ecosystem with all of these different components in mind and a tiered approach? The more we can advise the USKC and the many policymakers who are clearly trying to make decisions in this area, the better. I think that’s what success would look like for us.

Madhu Srikumar

This was totally not planned, Steph plugging our work here, but I can’t imagine a better note to end on. It’s a field-wide challenge, but I just want to emphasize the field-wide opportunity: no single organization can get this right. So hopefully that’s a helpful reminder as we end this summit and move on to the next iteration. Thank you, everyone; I hope you have a great, safe flight back home. Fred, that’s tonight for you. For a closing keynote, I’m going to welcome Natasha Crampton, the Chief Responsible AI Officer at Microsoft, and after that we’ll hear from Chris, the CEO of FMF. Thanks, everyone. Do you want to give it?

Okay, so we’re going to get mementos. Sorry, you might want to come back. You don’t want to miss this. Thank you very much.

Natasha Crampton

Thanks so much, Madhu, and to all of our panelists for what was, I think, a very rich, grounded, and at times humorous discussion. Thank you. One of the things that came across clearly for me today is that AI assurance can no longer be just a theoretical exercise; we need to build it into an operational discipline. And that discipline really needs to work across borders, across languages and cultures, and increasingly across agentic systems, systems that don’t just generate outputs but actually take action. I heard the panelists focus on the fact that assurance is pretty uneven today. It’s often strongest where there’s access to compute, data, and evaluation infrastructure, and weakest where those things are scarce.

And as several of our panelists emphasized, if we don’t address that gap deliberately, the shift towards AI agents is only going to widen that divide rather than close it. When I think about the nature of assurance for agentic systems, I think its emphasis does need to change somewhat. Pre-deployment testing has always been necessary for all types of systems, and so has post-deployment testing, of course. But post-deployment testing in an agentic world takes on an even greater level of importance, in my view. When systems can plan, chain actions, interact with tools, and adapt over time, assurance really has to move towards continuous monitoring, real-time detection, and clear accountabilities for when interventions need to take place.

That can be quite a hard technical problem, but it’s also a governance challenge. I know that PAI is known for convening communities of not just thinkers but also doers, so I wanted to leave everyone with a couple of implications that follow from the insights we heard today. The first is that it’s really important that we build assurance into systems as part of the development lifecycle, and don’t just seek to bolt it on at the end. That means we need to design systems so that they can be observed, audited, and constrained in practice, not just in policy documents. Second, assurance has to be interoperable.

We heard Prime Minister Modi speak yesterday about building in India and delivering to the world. That, I think, is absolutely an aspiration we should strive towards. But it can only work if we have evidence, evaluation methods, documentation, and signals of risk that are usable across regions and adaptable to local languages, cultures, and deployment realities. Third, assurance has to be shared. No single company or government or institution can do this alone, and that’s especially true for agents, given how pervasive they are expected to become across the economy. We need shared evaluation infrastructure, shared taxonomies, and shared investment in capacity, particularly in the Global South. That is why, for me, organizations like the Partnership on AI, the many collaborators who have come together at this week’s India AI Impact Summit, and open engagement across the community are all essential to getting this right.

It’s a really foundational area for collaboration for all of us. Now, my view is that if we do get assurance right, and by right I mean global, inclusive, and dynamic, it really does become an enabler of trust and adoption, as Minister Teo said, not a brake on progress. One of the key things we need to do as a community is to treat assurance as infrastructure, infrastructure that we build together and put into practice together. Thanks very much.

Chris Meserole

Well, what a phenomenal session, from the opening and closing keynotes to a really rich and dynamic panel. I cannot think of a better way to close out what has been an extraordinarily rich and dynamic summit as well. I have the impossible task of trying to summarize everything that was just said here, so if you’ll bear with me, I’ll offer three core themes that jumped out at me. One is that we need to evolve and mature our understanding of assurance. There was a lot of reference to agents here, and to the coming prospect of multi-agent environments as well. From evals to mitigations, we need an evolving understanding of how to do assurance.

Second, and probably more importantly, we also heard a lot about assurance as a global effort. Here I loved Steph’s point about the need for greater north-south collaboration. There was a lot of discussion from Fred and others about the need for global standards, and about harmonizing those standards and making them interoperable. And there was also reference to some of the new institutions we’ve built to enable that global dialogue, whether it’s the global network announced literally an hour before this session or the international network of ACs that has also been revitalized recently. The last point that really jumped out at me was assurance as a shared responsibility.

And, Vukosi, I loved the point about assurance as a bottom-up effort. We all have a role to play here: regardless of which sector you are in, regardless of which aspect of assurance you’re taking part in, there’s a role for all of us. So with that, I’m going to leave you with one final call to action, and that is to get involved. If we want this technology to be safe, secure, and trusted, we all have a role to play. So download the reports; download the great reports that have just come out on this topic. Get involved.

Look at the work that PAI and others are doing, and become part of the conversation about how we take this amazing technology and make sure it’s safe and secure and that we have a way to trust it. In the opening remarks, Rebecca used this great metaphor of the seed: one of the goals of the reports that were put out, and of the conversation in this panel, was to plant a seed and watch assurance grow. So the parting thought I’ll leave you with is: let’s all roll up our sleeves, get to work, and make sure that seed grows.

So with that, thank you, and thank you as well to our panelists and speakers.

Related Resources
Knowledge base sources related to the discussion topics (20)

Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“The India AI Impact Summit brings together more than a dozen countries to ‘unlock innovation through trustworthy, responsible, beneficial AI’.”

The knowledge base notes that the summit involves “19-ish countries” and emphasizes unlocking innovation through trustworthy, responsible, beneficial AI, confirming the claim about a multi-country gathering and the stated mission [S13].

Confirmed (high confidence)

“The Delhi Declaration was adopted the day before the closing session.”

S13 explicitly states that the Delhi Declaration was adopted “yesterday,” matching the report’s timing reference.

Additional Context (medium confidence)

“The second Delhi Declaration commitment urges “multilingual and contextual evaluations” to ensure AI works across languages, cultures and real‑world conditions.”

S77 highlights a commitment to strengthen multilingual and contextual evaluations, especially for Global South contexts, providing additional detail on the focus of that commitment. S23 also discusses multilingual AI as a bridge to inclusive access, adding nuance to the commitment’s intent.

Additional Context (low confidence)

“QR‑codes for the papers will be displayed immediately after remarks so attendees can download them on the spot.”

S75 describes the use of QR codes on presentation slides at the summit to allow participants to scan and obtain more information, confirming that QR codes are employed for on‑site content access.

External Sources (82)
S1
Setting the Rules_ Global AI Standards for Growth and Governance — I think it’s worth backing up from this thing. One of the original questions was, what are standards for? Is Chris’s min…
S2
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — And I think… similar with some of the controls that might need to be kind of used to manage some of the risks if there…
S3
Setting the Rules_ Global AI Standards for Growth and Governance — I’m Chris Meserole,. I’m the executive director of the Frontier Model Forum. Our mission is to advance Frontier AI safet…
S5
AI for Good Technology That Empowers People — -Frederick Werner- Chief of Strategic Engagement Department at ITU (International Telecommunication Union)
S6
Closing remarks — – **Frederic Werner**: Event coordinator/organizer (coordinates with Secretary General, manages event logistics and anno…
S7
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S9
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:So, I’m Natasha Crankjian from Microsoft. I’m incredibly optimistic about AI’s potential to help us hav…
S10
Towards a Safer South Launching the Global South AI Safety Research Network — – Mr. Abhishek Singh- Ms. Natasha Crampton- Ms. Chenai Chair – Ms. Natasha Crampton- Dr. Rachel Sibande
S11
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten – Natasha Crampton- Particip…
S12
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — – Natasha Crampton- Stephanie Ifayemi – Stephanie Ifayemi- Vukosi Marivate
S13
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Speakers:Josephine Teo, Owen Larter, Natasha Crampton, Stephanie Ifayemi Speakers:Josephine Teo, Stephanie Ifayemi Spe…
S14
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — – Natasha Crampton- Madhu Srikumar- Chris Meserole- Stephanie Ifayemi
S15
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Speakers:Natasha Crampton, Madhu Srikumar, Chris Meserole, Stephanie Ifayemi
S16
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — Last we saw was in G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had e…
S17
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — – Natasha Crampton- Rebecca Finlay- Frederic Werner
S18
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Speakers:Natasha Crampton, Rebecca Finlay, Frederic Werner
S19
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Owen Later:Fantastic. Hello, is this on? Can people hear me? Excellent, thank you. Good morning, everyone. My name is Ow…
S20
Policy Network on Artificial Intelligence | IGF 2023 — Moderator – Prateek:Good morning, everyone. To those who have made it early in the morning, after long days and long kar…
S21
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Summary:Marivate argues that Singapore’s ‘test once, comply globally’ vision requires significant localization for indiv…
S22
Smart Regulation Rightsizing Governance for the AI Revolution — Wilkinson argues that while complete global consensus on AI governance won’t happen due to current geopolitical tensions…
S23
How Multilingual AI Bridges the Gap to Inclusive Access — Bedir expanded Current AI’s focus beyond language to broader cultural preservation, recognizing that culture encompasses…
S24
Announcement of New Delhi Frontier AI Commitments — Evidence:Explicit mention of ‘especially in the global south’ in relation to multilingual and contextual evaluations Ev…
S25
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By working together, these stakeholders can collaboratively develop effective strategies to combat misinformation and pr…
S26
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Artificial intelligence | Building confidence and security in the use of ICTs | Monitoring and measurement Just as cars…
S27
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S28
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Audience questions, particularly from students, highlighted the need for better education about policy formulation proce…
S29
The Declaration for the Future of the Internet: Principles to Action — Stakeholder involvement in policy settings was another theme that intertwined in the discussions. The need for broader s…
S30
Global challenges for the governance of the digital world — Additionally, SDG 17, which calls for the enhancement of global partnerships to achieve sustainable development, is hind…
S31
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Mothibi Ramusi: Thank you very much. Good afternoon. I think from our side, I’m just going to use the South African cont…
S32
Any other business /Adoption of the report/ Closure of the session — Colombia has showcased its dedication to furthering gender equality, affirming its commitment to integrating a gender pe…
S33
Ad Hoc Consultation: Friday 9th February, Morning session — Costa Rica has also positively affirmed the adequacy of the original wording of 53H, recognising its effectiveness in ad…
S34
Opening of the session — Although the overall tone from Ecuador has been positive, the delegation exercised a neutral stance regarding certain su…
S35
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And as several of our panelists emphasized, if we don’t address that gap deliberately, the shift towards AI agents is on…
S36
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Crampton argues that while pre-deployment testing remains necessary, the shift toward agentic AI systems that can plan, …
S37
Promoting policies that make digital trade work for all (OECD) — It asserts that there should be a focus on promoting innovation and the adoption of advanced technologies to enhance int…
S38
Agentic AI in Focus Opportunities Risks and Governance — “It’s more of a bottom -up, grassroots approach than a top -down.”[85]. “We are very focused on helping industry.”[6]. “…
S39
Adoption of the agenda and organization of work — Essentially, this article encourages countries to harmonise their laws with global human rights standards. However, whil…
S40
The potential of technical standards to either strengthen or undermine human rights and fundamental freedoms in case of artificial intelligence systems and other emerging technologies — A neutral stance is maintained in the discourse advocating for the integration of human rights impact assessments within…
S41
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — High level of consensus with significant implications for global digital governance. The broad agreement across governme…
S42
International Standards: A Commitment to Inclusivity — Despite previous internal debates over open versus closed strategies in the nascent stages of technology evolution, expe…
S43
Importance of Professional standards for AI development and testing — The disagreement level is moderate but significant for practical implementation. While speakers generally agree on the n…
S44
Policymaker’s Guide to International AI Safety Coordination — Impact:This analogy provided a tangible framework that other participants could relate to, moving the discussion from th…
S45
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — Cooperation and collaboration between the Global North and Global South in technical standards bodies should embrace an …
S46
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S47
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S48
Democratizing AI Building Trustworthy Systems for Everyone — Thank you so much. And you just touched on, you mentioned ML Commons and you touched about culturally sensitive. And it’…
S49
Announcement of New Delhi Frontier AI Commitments — Evidence:Explicit mention of ‘especially in the global south’ in relation to multilingual and contextual evaluations Ev…
S50
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S51
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Minister Teo outlines the three key components necessary for a robust AI assurance ecosystem. Technical testing ensures …
S52
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Artificial intelligence | Building confidence and security in the use of ICTs | Monitoring and measurement Just as cars…
S53
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Just as cars have standardized fuel economy ratings and crash test results that help consumers make informed decisions, …
S54
WS #283 AI Agents: Ensuring Responsible Deployment — Carter outlined Google’s approach to safeguards, emphasizing user control through granular data access preferences, huma…
S55
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Amandeep Singh Gill emphasizes the need for a safeguards framework for digital public infrastructure (DPI) due to the ri…
S56
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Natalie Cohen: Yeah, I think this issue of trust is key. One thing the OECD does is a driver of trust in government surv…
S57
Al and Global Challenges: Ethical Development and Responsible Deployment — Donny Utoyo:and online safety vulnerability, especially for women and children. As AI rapidly transform our lives digita…
S58
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S59
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S60
Keynote-Jeet Adani — Overall Tone:The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confid…
S61
Keynote-Jeet Adani — The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confident, visionar…
S62
Leveraging AI4All_ Pathways to Inclusion — Language and Low‑Resource Context Challenges
S63
Interim Report: — 4. This technology cries out for governance, not merely to address the challenges and risks but to ensure we harness its…
S64
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S65
Open Forum #73 The Need for Regulating Autonomous Weapon Systems — This panel discussion focused on the challenges and risks posed by autonomous weapons systems and the urgent need for in…
S66
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S67
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S68
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S69
Building Trusted AI at Scale – Keynote Anne Bouverot — Overall Tone:The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and ap…
S70
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S71
Keynote Adresses at India AI Impact Summit 2026 — And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t…
S72
WS #211 Disability & Data Protection for Digital Inclusion — Fawaz Shaheen: . . Yes, I think it’s working now. Thank you so much. We’ll just start our session now. Welcome to …
S73
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — The central presentation focused on the Harlem Declaration, described as an international commitment to promote ethical …
S74
WS #25 Multistakeholder cooperation for online child protection — Gladys O. Yiadom: . Can the online moderator share her screen? Full screen, please. Thank you. So the firs…
S75
Pre 3: Exploring Frontier technologies for harnessing digital public good and advancing Digital Inclusion — Charlotte Gilmartin: Thank you very much. I’m just going to share my screen and show the slides. Because I only have fiv…
S76
Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Explanatory Report) — 84. This provision points to the potential role to be played by standards, technical specifications, assurance …
S77
Announcement of New Delhi Frontier AI Commitments — -Sam: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified 3. …
S78
WS #323 New Data Governance Models for African Nlp Ecosystems — Ochola Viola emphasised that “community ownership should be legally entrenched with operationalised mechanisms to reach …
S79
WS #35 Unlocking sandboxes for people and the planet — 3. European Union: Katerina Yordanova discussed the European context, particularly the AI Act’s sandbox requirements. Sh…
S80
Building inclusive global digital governance (CIGI) — In Africa, sandboxes are being implemented to tailor regulations to the cultural, legal, and technological diversities i…
S81
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Regulatory sandboxes allow for testing the extent to which the existing legal framework works in practice. Removing res…
S82
Pre 9: Discussion on the outcomes of the Global Multistakeholder High Level Conference on Governance of Web 4.0 and Virtual Worlds — Audience: Sorry, Tatiana Tropina, Internet Society. I would still like to follow up with a question about the concept of…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Rebecca Finlay
2 arguments · 166 words per minute · 801 words · 289 seconds
Argument 1
Advocacy for a comprehensive AI assurance ecosystem to support national AI strategies and policy alignment (Rebecca Finlay)
EXPLANATION
Rebecca stresses that national AI strategies must be paired with robust AI assurance strategies to ensure responsible deployment. She links this to the recent Delhi Declaration and the need for scientific evidence and good policy frameworks.
EVIDENCE
She explains that policymakers need a comprehensive AI assurance strategy alongside industrial AI strategies, noting that this is essential for robust national AI policies [15-18]. She also points to the Delhi Declaration commitments that provide empirical grounding for such work, highlighting progress on usage-data standards and multilingual evaluation commitments [24-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Delhi Declaration’s commitments on usage-data sharing and multilingual evaluation are linked to building a robust AI assurance ecosystem, as highlighted by Rebecca in the discussion and documented in [S4] and [S13].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
AGREED WITH
Josephine Teo, Stephanie Ifayemi, Natasha Crampton, Chris Meserole
Argument 2
Provision of new resources (papers, QR codes) to catalyze work on AI assurance across the community (Rebecca Finlay)
EXPLANATION
Rebecca announces the release of two new papers developed from the Paris Action Summit and makes them available via QR codes. The resources are intended to seed further work on AI assurance worldwide.
EVIDENCE
She informs the audience that two papers will be displayed on screen, downloadable through QR codes, and invites participants to discuss them with the team [7-13]. She also notes that these resources aim to catalyze seeds of assurance work across multiple pathways [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Two new papers were displayed on screen with QR codes for download, a detail reported in the session transcript and referenced in [S13] and [S4].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
Madhu Srikumar
2 arguments · 146 words per minute · 1068 words · 436 seconds
Argument 1
Definition of AI assurance as independent measurement, evaluation, and communication of trustworthiness (Madhu Srikumar)
EXPLANATION
Madhu defines AI assurance as a systematic process that measures, evaluates, and communicates the trustworthiness of AI systems. She likens it to an independent safety inspection that verifies claims made by developers.
EVIDENCE
She states that AI assurance is the process of measuring, evaluating, and communicating whether AI systems are trustworthy, safe, and work as intended, comparing it to an independent inspector checking a building rather than the builder’s word [124-132].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
Argument 2
Emphasis on multilingual and contextual evaluations to ensure AI works across languages and cultures (Madhu Srikumar)
EXPLANATION
Madhu highlights the second New Delhi Frontier AI commitment, which calls for strengthening multilingual and contextual evaluations. She argues that this is essential for AI systems to be reliable across diverse linguistic and cultural settings.
EVIDENCE
She references the summit’s unveiling of the New Delhi Frontier AI commitments and notes that the second commitment focuses on multilingual and contextual evaluations to ensure AI works across languages, cultures, and real-world conditions [133-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The New Delhi Frontier AI commitments specifically call for strengthening multilingual and contextual evaluations, as noted in [S24] and reiterated in the discussion summary [S13].
MAJOR DISCUSSION POINT
Global Inclusivity and the Assurance Divide
AGREED WITH
Rebecca Finlay, Stephanie Ifayemi, Natasha Crampton, Vukosi Marivate
Josephine Teo
3 arguments · 148 words per minute · 1271 words · 513 seconds
Argument 1
Three‑pillar model: testing, standards, and third‑party assurance providers (Josephine Teo)
EXPLANATION
Minister Teo outlines a three‑component model for AI assurance: rigorous technical testing, the development of standards, and independent third‑party assurance providers. She argues that all three are needed to build trustworthy agentic AI.
EVIDENCE
She enumerates the three pillars (testing, standards, and third-party assurance providers), explaining that testing ensures robustness, standards define “good enough,” and third-party attestations provide independence and identify blind spots [97-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Minister Teo outlined the three essential components (technical testing, standards development, and independent third-party assurance providers), mirroring the description in [S13] and [S4].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
AGREED WITH
Rebecca Finlay, Stephanie Ifayemi, Natasha Crampton, Chris Meserole
Argument 2
Proactive government approach: sandbox testing with industry and a live model governance framework for agents (Josephine Teo)
EXPLANATION
Singapore adopts a proactive stance by creating a sandbox partnership with Google to trial agentic AI and by publishing a live model governance framework that can evolve with feedback. This approach aims to test and refine controls before large‑scale deployment.
EVIDENCE
She describes a sandbox collaboration with Google that lets Singapore “eat its own dog food” by testing agents in a controlled environment [69-73] and mentions a live model governance framework that is continuously updated based on stakeholder feedback [74-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The sandbox partnership with Google and the live model governance framework were highlighted in the session and are echoed in the transcript [S13] and in the broader discussion of regulatory sandboxes in [S25].
MAJOR DISCUSSION POINT
Governance and Safety of Agentic AI
Argument 3
Assurance as a strategic competitive advantage for companies that can demonstrate high safety (Josephine Teo)
EXPLANATION
Minister Teo argues that companies offering high‑assurance safety for their agentic AI will differentiate themselves in the market, turning compliance into a strategic advantage rather than a burden.
EVIDENCE
She states that a company able to provide high assurance on safety will be differentiated from competitors and likely see stronger product interest, framing assurance as a strategic competitive advantage [85-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Minister Teo argued that high-assurance safety can differentiate companies in the market, a point documented in both [S13] and [S4].
MAJOR DISCUSSION POINT
Incentives, Professionalization, and Collaborative Responsibility
AGREED WITH
Stephanie Ifayemi
Stephanie Ifayemi
5 arguments · 177 words per minute · 1576 words · 532 seconds
Argument 1
Identification of six challenge areas (language, risk profile, infrastructure, incentives, professionalization, tiered assurance) for closing the assurance divide (Stephanie Ifayemi)
EXPLANATION
Stephanie outlines six key challenge areas that must be addressed to close the global AI assurance divide: language diversity, differing risk profiles, infrastructure constraints, incentive structures, professionalization of assurance roles, and the need for tiered assurance approaches.
EVIDENCE
She lists the six challenge areas (language, risk profile, infrastructure, incentives, professionalization, and tiered assurance), drawing on the paper that identifies these as barriers to closing the assurance divide [259-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI assurance divide report lists six key challenge areas, as cited in the discussion and supported by [S13] and [S4].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
AGREED WITH
Rebecca Finlay, Josephine Teo, Natasha Crampton, Chris Meserole
Argument 2
Infrastructure barriers (compute, GPU hours) that limit assurance activities in low‑resource settings (Stephanie Ifayemi)
EXPLANATION
Stephanie points out that the massive compute requirements for large‑scale evaluations create prohibitive barriers for many low‑resource countries, hindering their ability to participate in assurance activities.
EVIDENCE
She cites Stanford’s Helm evaluations that used over 12 billion tokens and required 19,500 GPU hours, illustrating how infrastructure demands can block participation from Global South nations [280-282].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
An example of the compute barrier is Stanford’s Helm evaluation requiring over 12 billion tokens and 19,500 GPU hours, referenced in [S13].
MAJOR DISCUSSION POINT
Global Inclusivity and the Assurance Divide
Argument 3
Importance of north‑south collaboration to prevent exclusion in emerging agent standards (Stephanie Ifayemi)
EXPLANATION
Stephanie stresses that collaboration between Global North and South is essential to ensure that emerging agent standards do not exclude Southern stakeholders, highlighting recent initiatives by standards bodies that invite comments from all regions.
EVIDENCE
She references recent work by NIST/CAISI on agent attribution and identity standards, noting the need for global participation and warning against assuming Global South countries will be left out of these processes [291-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for north-south collaboration is emphasized in the session summary and corroborated by statements in [S13] and [S4].
MAJOR DISCUSSION POINT
Global Inclusivity and the Assurance Divide
AGREED WITH
Madhu Srikumar, Frederic Werner, Natasha Crampton, Chris Meserole
Argument 4
Need for harmonized, interoperable evaluation methods and evidence that can be used across regions (Stephanie Ifayemi)
EXPLANATION
Stephanie argues that to achieve global AI assurance, evaluation methods must be standardized and interoperable so that results are comparable and usable across different jurisdictions and contexts.
EVIDENCE
She notes that while multilingual evaluation is critical, the practical implementation remains a challenge, and she calls for balanced prioritization across the value chain, from upstream infrastructure to downstream documentation artifacts [266-270] and [284-289].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for standardized, interoperable evaluation methods across jurisdictions are documented in the discussion and in [S13].
MAJOR DISCUSSION POINT
Standards, Interoperability, and Multilateral Roles
Argument 5
Changing incentives (e.g., insurance mechanisms) and establishing professional accreditation for assurance practitioners (Stephanie Ifayemi)
EXPLANATION
Stephanie highlights the need to reshape incentives—such as through insurance schemes—and to professionalize assurance work via accreditation, thereby strengthening the ecosystem and encouraging broader participation.
EVIDENCE
She discusses how incentives like insurance are emerging, and stresses the importance of professionalization, accreditation, and tiered assurance to build a mature ecosystem [363-376].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of new incentive models such as insurance schemes and the push for professional accreditation are reflected in [S13] and [S4].
MAJOR DISCUSSION POINT
Incentives, Professionalization, and Collaborative Responsibility
Natasha Crampton
4 arguments · 136 words per minute · 637 words · 279 seconds
Argument 1
Call to embed assurance into the system development lifecycle rather than as an after‑thought (Natasha Crampton)
EXPLANATION
Natasha urges that assurance should be built into AI systems from the start, integrated into the development lifecycle, rather than added later as an after‑thought. This ensures observability, auditability, and enforceable constraints.
EVIDENCE
She states that assurance must be built into systems as part of the development lifecycle, not bolted on at the end, so that systems can be observed, audited, and constrained in practice [425-428].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
AGREED WITH
Rebecca Finlay, Josephine Teo, Stephanie Ifayemi, Chris Meserole
Argument 2
Continuous post‑deployment monitoring and real‑time detection as essential for agentic systems (Natasha Crampton)
EXPLANATION
Natasha emphasizes that for agentic AI, post‑deployment testing, continuous monitoring, and real‑time detection are crucial because agents can act autonomously and adapt over time, creating new risk vectors.
EVIDENCE
She explains that while pre-deployment testing remains necessary, post-deployment testing in an agentic world takes on greater importance, requiring continuous monitoring, real-time detection, and clear accountability for interventions [420-422].
MAJOR DISCUSSION POINT
Governance and Safety of Agentic AI
Argument 3
Assurance as shared global infrastructure to enable trust and adoption worldwide (Natasha Crampton)
EXPLANATION
Natasha frames assurance as a shared piece of global infrastructure that must be collaborative, interoperable, and inclusive, enabling trust and adoption of AI across regions, especially in the Global South.
EVIDENCE
She calls for shared evaluation infrastructure, taxonomies, and investment in capacity, particularly for the Global South, asserting that no single entity can deliver assurance alone [433-438].
MAJOR DISCUSSION POINT
Global Inclusivity and the Assurance Divide
Argument 4
Shared responsibility across governments, industry, and civil society; need for joint evaluation infrastructure and taxonomies (Natasha Crampton)
EXPLANATION
Natasha stresses that assurance must be a collective effort involving governments, industry, and civil society, requiring shared evaluation tools, common taxonomies, and coordinated capacity building.
EVIDENCE
She reiterates that assurance is a shared responsibility, highlighting the need for shared evaluation infrastructure, taxonomies, and investment in capacity, especially for the Global South [434-438].
MAJOR DISCUSSION POINT
Incentives, Professionalization, and Collaborative Responsibility
Chris Meserole
3 arguments · 135 words per minute · 534 words · 236 seconds
Argument 1
Emphasis on evolving the understanding of assurance to keep pace with agentic systems (Chris Meserole)
EXPLANATION
Chris argues that assurance concepts must evolve rapidly to address the complexities introduced by agentic AI, moving beyond static standards to dynamic, adaptable frameworks.
EVIDENCE
He notes the need to evolve and mature our understanding of assurance, especially given the rise of multi-agent environments, and calls for ongoing development of standards and policies [448-452].
MAJOR DISCUSSION POINT
AI Assurance Ecosystem
AGREED WITH
Rebecca Finlay, Josephine Teo, Stephanie Ifayemi, Natasha Crampton
Argument 2
Call for multilateral institutions to lead inclusive AI assurance efforts and ensure global participation (Chris Meserole)
EXPLANATION
Chris calls on multilateral bodies to take the lead in fostering inclusive AI assurance, ensuring that standards and frameworks are globally representative and that all regions can participate.
EVIDENCE
He explicitly states the need for multilateral institutions to lead inclusive AI assurance efforts and to guarantee global participation [452-455].
MAJOR DISCUSSION POINT
Standards, Interoperability, and Multilateral Roles
Argument 3
Call to action for all stakeholders to get involved, download reports, and contribute to building the assurance ecosystem (Chris Meserole)
EXPLANATION
Chris ends with a rallying cry for participants to download the newly released reports, engage with ongoing work, and actively contribute to building a robust AI assurance ecosystem.
EVIDENCE
He urges the audience to download the reports, get involved, and join the conversation to ensure AI is safe, secure, and trustworthy [458-462].
MAJOR DISCUSSION POINT
Incentives, Professionalization, and Collaborative Responsibility
Frederic Werner
2 arguments · 180 words per minute · 1021 words · 339 seconds
Argument 1
Trust and replicability challenges for AI use cases across diverse regions; need for standards that embed human‑rights and inclusivity (Frederic Werner)
EXPLANATION
Frederic highlights that AI use cases often lack trust and are not easily replicable across different locales, stressing the need for standards that incorporate human‑rights principles and inclusivity.
EVIDENCE
He discusses challenges of trust, replicability, and scalability of AI for Good use cases across regions, and calls for standards that embed safety, security, human rights, and inclusivity [164-168].
MAJOR DISCUSSION POINT
Global Inclusivity and the Assurance Divide
Argument 2
Role of multilateral bodies (ITU, UN agencies) in fostering inclusive, practical standards and facilitating global collaboration (Frederic Werner)
EXPLANATION
Frederic describes AI for Good’s inclusive approach, leveraging over 50 UN sister agencies and a broad set of stakeholders to develop practical standards and policy recommendations that are globally relevant.
EVIDENCE
He explains that AI for Good works with 50+ UN agencies, bringing diverse voices from the Global South, NGOs, and civil society to develop practical solutions and standards, likening it to an inclusive “Davos of AI” [307-314].
MAJOR DISCUSSION POINT
Standards, Interoperability, and Multilateral Roles
Owen Larter
2 arguments · 201 words per minute · 1152 words · 342 seconds
Argument 1
Need for technical protocols (agents‑to‑agents, universal commerce) and assurance standards to enable safe agent interactions (Owen Larter)
EXPLANATION
Owen outlines the development of technical protocols—agents‑to‑agents and universal commerce—that will allow agents to communicate securely, and stresses the parallel need for assurance standards to evaluate risks.
EVIDENCE
He describes Google’s agents-to-agents protocol and universal commerce protocol as ways for agents to exchange IDs, capabilities, and intents, comparing them to early internet standards like HTTP and URLs, and notes the importance of accompanying assurance standards [202-209].
MAJOR DISCUSSION POINT
Governance and Safety of Agentic AI
Argument 2
Highlighting security risks of autonomous agents (malware scanning, misuse) and the need for robust security protocols (Owen Larter)
EXPLANATION
Owen points out that autonomous agents can introduce security vulnerabilities, such as downloading malicious code, and describes collaborations with security teams to scan and mitigate these risks.
EVIDENCE
He mentions work with VirusTotal to scan agentic system downloads for malware and vulnerabilities, and notes concerns about agents being misused, emphasizing the need for strong security protocols [222-223].
MAJOR DISCUSSION POINT
Governance and Safety of Agentic AI
Vukosi Marivate
1 argument · 178 words per minute · 562 words · 189 seconds
Argument 1
Language scarcity, local data, and policy capacity gaps in the Global South; call for bottom‑up, locally‑driven assurance (Vukosi Marivate)
EXPLANATION
Vukosi emphasizes that the Global South faces limited language resources, data collection, and policy capacity, arguing that assurance frameworks must be locally driven rather than top‑down.
EVIDENCE
He notes the limited collection and annotation of languages in the Global South, the need for local understanding, and the importance of policymakers’ capacity to interpret labor laws, data governance, and system monitoring, rejecting a top-down approach [231-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vukosi’s concerns about language resources, data, and policy capacity gaps, and the call for locally-driven assurance, are echoed in the session transcript and supported by [S13] and [S4].
MAJOR DISCUSSION POINT
Global Inclusivity and the Assurance Divide
AGREED WITH
Madhu Srikumar, Rebecca Finlay, Stephanie Ifayemi, Natasha Crampton
Agreements
Agreement Points
All speakers emphasized the need for a comprehensive, global AI assurance ecosystem that includes testing, standards, and independent verification.
Speakers: Rebecca Finlay, Josephine Teo, Stephanie Ifayemi, Natasha Crampton, Chris Meserole
Advocacy for a comprehensive AI assurance ecosystem to support national AI strategies and policy alignment (Rebecca Finlay)
Three‑pillar model: testing, standards, and third‑party assurance providers (Josephine Teo)
Identification of six challenge areas (language, risk profile, infrastructure, incentives, professionalization, tiered assurance) for closing the assurance divide (Stephanie Ifayemi)
Call to embed assurance into the system development lifecycle rather than as an after‑thought (Natasha Crampton)
Emphasis on evolving the understanding of assurance to keep pace with agentic systems (Chris Meserole)
The panel collectively agreed that building a robust AI assurance ecosystem is essential, requiring rigorous testing, development of standards, and independent third-party assurance, and that this ecosystem must be integrated from the start of system design and continuously evolved. [15-18][97-109][259-295][425-428][448-452]
POLICY CONTEXT (KNOWLEDGE BASE)
The call for a global AI assurance ecosystem aligns with calls for investment in testing infrastructure and international standards highlighted in OECD and UN policy discussions, and reflects the emphasis on professional standards and human-rights impact assessments in standard-setting bodies [S44][S42][S43][S40].
Multilingual and contextual evaluation is critical for trustworthy AI across diverse languages and cultures.
Speakers: Madhu Srikumar, Rebecca Finlay, Stephanie Ifayemi, Natasha Crampton, Vukosi Marivate
Emphasis on multilingual and contextual evaluations to ensure AI works across languages and cultures (Madhu Srikumar)
Advocacy for a comprehensive AI assurance ecosystem … including multilingual evaluation commitments (Rebecca Finlay)
Identification of language diversity as a key challenge area for closing the assurance divide (Stephanie Ifayemi)
Assurance must be interoperable and adaptable to local languages, cultures, and deployment realities (Natasha Crampton)
Language scarcity, local data, and policy capacity gaps in the Global South; call for bottom‑up, locally‑driven assurance (Vukosi Marivate)
All highlighted that AI assurance must address language diversity and contextual relevance, noting the Delhi Declaration’s multilingual commitment and the practical barriers posed by limited resources in many languages. [133-136][24-26][262-267][429-433][231-240]
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on multilingual and contextual evaluation is supported by recent ML Commons work on culturally sensitive benchmarks and explicit references to the need for such evaluations especially in the Global South [S48][S49].
Global collaboration and north‑south partnership are essential to avoid exclusion in AI assurance standards and practices.
Speakers: Madhu Srikumar, Frederic Werner, Stephanie Ifayemi, Natasha Crampton, Chris Meserole
Definition of AI assurance … emphasizing multilingual and contextual evaluations (Madhu Srikumar)
Role of multilateral bodies (ITU, UN agencies) in fostering inclusive, practical standards (Frederic Werner)
Importance of north‑south collaboration to prevent exclusion in emerging agent standards (Stephanie Ifayemi)
Assurance as shared global infrastructure … especially for the Global South (Natasha Crampton)
Call for multilateral institutions to lead inclusive AI assurance efforts (Chris Meserole)
The participants concurred that inclusive, multilateral approaches are needed, with active participation from Global South stakeholders to shape standards, evaluation methods, and capacity building. [140-144][307-314][291-300][433-438][452-455]
Continuous post‑deployment monitoring and real‑time detection are vital for agentic AI systems.
Speakers: Josephine Teo, Natasha Crampton
The question is for AI, and specifically agentic AI, what would be the components? … testing, standards, third‑party assurance (Josephine Teo)
Continuous post‑deployment testing … real‑time detection … clear accountabilities for interventions (Natasha Crampton)
Both stressed that beyond pre-deployment testing, agentic AI requires ongoing monitoring and rapid response mechanisms to manage autonomous actions. [420-422]
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of continuous post-deployment monitoring and real-time detection for agentic AI is echoed in panel discussions stressing the shift toward agentic systems and the necessity of ongoing monitoring [S35][S36][S44].
Assurance can serve as a strategic competitive advantage for companies that demonstrate high safety standards.
Speakers: Josephine Teo, Stephanie Ifayemi
Assurance as a strategic competitive advantage for companies that can demonstrate high safety (Josephine Teo)
Changing incentives (e.g., insurance mechanisms) and professional accreditation to encourage high‑assurance practices (Stephanie Ifayemi)
Both highlighted that robust assurance not only mitigates risk but also differentiates firms in the market, suggesting incentive structures should reward high-assurance performance. [85-89][363-376]
POLICY CONTEXT (KNOWLEDGE BASE)
Positioning assurance as a competitive advantage corresponds with analyses that open, interoperable standards can expand markets and consolidate ecosystems, offering strategic benefits to early adopters [S42].
Similar Viewpoints
Both argue that third‑party assurance and professional accreditation are essential components of a mature AI assurance ecosystem. [97-109][371-376]
Speakers: Josephine Teo, Stephanie Ifayemi
Three‑pillar model: testing, standards, and third‑party assurance providers (Josephine Teo)
Identification of six challenge areas … including professionalization and tiered assurance (Stephanie Ifayemi)
Both emphasize the development of technical standards and protocols as foundational to safe agentic AI deployment. [202-209][103-106]
Speakers: Owen Larter, Josephine Teo
Need for technical protocols and assurance standards to enable safe agent interactions (Owen Larter)
Three‑pillar model: testing, standards, and third‑party assurance providers (Josephine Teo)
Both identify language diversity as a primary barrier to inclusive AI assurance and call for multilingual evaluation frameworks. [133-136][262-267]
Speakers: Madhu Srikumar, Stephanie Ifayemi
Emphasis on multilingual and contextual evaluations … (Madhu Srikumar)
Language diversity identified as a key challenge area (Stephanie Ifayemi)
Both stress that global standards must be shaped by, and responsive to, local contexts and capacities, especially in the Global South. [307-314][231-240]
Speakers: Frederic Werner, Vukosi Marivate
Role of multilateral bodies … fostering inclusive standards (Frederic Werner)
Language scarcity and local capacity gaps; need for bottom‑up assurance (Vukosi Marivate)
Unexpected Consensus
Industry representatives (Google DeepMind) and government officials both advocated for open, interoperable standards for agents, despite typical competitive tensions.
Speakers: Owen Larter, Josephine Teo
Need for technical protocols (agents‑to‑agents, universal commerce) and assurance standards (Owen Larter)
Three‑pillar model emphasizing standards as a core component (Josephine Teo)
It is notable that a private sector technologist and a government minister converged on the necessity of open standards and protocols for agentic AI, indicating a shared view that interoperability outweighs competitive secrecy. [202-209][103-106]
POLICY CONTEXT (KNOWLEDGE BASE)
The joint advocacy for open, interoperable standards by industry and governments reflects broader calls for common AI standards and definitions to enable global interoperability [S46][S42][S45].
Both the UN‑linked AI for Good initiative and the private sector (Google DeepMind) highlighted the importance of real‑time, post‑deployment monitoring for agents.
Speakers: Frederic Werner, Natasha Crampton
AI for Good emphasizes practical standards and continuous learning (Frederic Werner)
Continuous post‑deployment monitoring and real‑time detection for agents (Natasha Crampton)
While Frederic focused on inclusive standards, he also underscored the need for practical, ongoing oversight, aligning with Natasha’s call for real-time monitoring, an alignment not explicitly anticipated given their different institutional roles. [307-314][420-422]
POLICY CONTEXT (KNOWLEDGE BASE)
The shared emphasis by the UN-linked AI for Good initiative and private sector on real-time monitoring aligns with the panel’s focus on post-deployment oversight for agentic AI systems [S35][S36][S41].
Overall Assessment

The panel displayed a strong, cross‑sectoral consensus that AI assurance must be comprehensive, standards‑driven, and globally inclusive, with particular emphasis on multilingual evaluation, continuous monitoring of agentic systems, and the creation of incentive structures that reward high‑assurance performance.

High consensus: most speakers, from government, industry, academia, and multilateral organizations, reiterated overlapping themes, indicating a shared commitment to building a robust, interoperable, and inclusive AI assurance ecosystem. This broad agreement suggests that forthcoming policy initiatives and technical work are likely to receive coordinated support across stakeholder groups.

Differences
Different Viewpoints
How to create market incentives for AI assurance versus treating assurance as a shared global public good
Speakers: Josephine Teo, Natasha Crampton, Stephanie Ifayemi
Assurance as a strategic competitive advantage for companies that can demonstrate high safety (Josephine Teo) [85-89] Assurance must be a shared global infrastructure that no single entity can deliver alone (Natasha Crampton) [433-438] Need to change incentives (e.g., insurance mechanisms) and professionalize assurance practitioners (Stephanie Ifayemi) [363-376]
All three speakers agree that AI assurance is essential, but Josephine frames it as a market differentiator that companies should pursue for competitive gain, while Natasha argues it should be built as a shared, interoperable public infrastructure, and Stephanie focuses on creating systemic incentives such as insurance and professional accreditation to drive adoption. The disagreement lies in whether assurance should be driven primarily by market competition or by collective, policy-led mechanisms. [85-89][433-438][363-376]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between market incentives and treating assurance as a public good is discussed in OECD policy notes on digital trade that stress private-sector involvement and the benefits of open standards for market expansion [S37][S42].
Top‑down technical standardisation for agentic AI versus bottom‑up, locally driven assurance capacity
Speakers: Owen Larter, Vukosi Marivate
Need for technical protocols (agents-to-agents, universal commerce) and assurance standards to enable safe agent interactions (Owen Larter) [202-209] Assurance frameworks must be locally driven, with capacity building for policymakers; not a top-down approach (Vukosi Marivate) [236-240]
Owen advocates developing universal technical standards and protocols to ensure agents can interoperate safely, implying a top-down, globally uniform approach. Vukosi counters that assurance must be rooted in local understanding and policy capacity, warning against top-down imposition. The clash is between a globally uniform technical solution and a locally adapted, capacity-building approach. [202-209][236-240]
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over top-down versus bottom-up standardisation is highlighted by calls for grassroots, industry-driven approaches contrasted with centralized technical standardisation efforts [S38][S42][S43].
How to overcome compute‑intensive infrastructure barriers for AI assurance
Speakers: Stephanie Ifayemi, Owen Larter
Infrastructure barriers (e.g., 12 billion tokens, 19,500 GPU hours) create prohibitive costs for low-resource settings (Stephanie Ifayemi) [280-282] Development of cheap, efficient flash models to make agentic systems more accessible and reduce compute costs (Owen Larter) [351-354]
Stephanie highlights the massive compute requirements of current evaluation pipelines as a blocker for Global South participation. Owen proposes that newer, low-cost flash models can mitigate these barriers, suggesting a technology-driven solution. The disagreement centers on whether the primary solution is to invest in cheaper models or to address the systemic infrastructure gap. [280-282][351-354]
Unexpected Differences
Optimism about global standards embedding human‑rights versus skepticism about local applicability
Speakers: Frederic Werner, Vukosi Marivate
AI for Good can embed safety, security, human-rights, and inclusivity into standards (Frederic Werner) [164-168] Local data, language, and policy capacity gaps mean top-down standards may not reflect local values; need bottom-up approach (Vukosi Marivate) [236-240]
Frederic expresses confidence that global standards can incorporate human-rights and be inclusive, while Vukosi warns that without local capacity and contextual understanding, such standards risk being misaligned with Global South realities. This tension between confidence in universal standards and concern over local relevance was not anticipated given the overall consensus on inclusivity. [164-168][236-240]
POLICY CONTEXT (KNOWLEDGE BASE)
Optimism about embedding human-rights in global AI standards is balanced by concerns over national legal alignment and applicability, as noted in UN and OECD analyses of harmonising laws with human-rights frameworks [S39][S40][S41].
Reliance on proprietary, low‑cost models versus concerns about equitable access to compute resources
Speakers: Owen Larter, Stephanie Ifayemi
Google DeepMind’s cheap flash models can make agentic AI affordable and support assurance testing (Owen Larter) [351-354] Compute-intensive evaluations (e.g., 19,500 GPU hours) create barriers for low-resource countries (Stephanie Ifayemi) [280-282]
Owen suggests that proprietary, efficient models will solve access issues, whereas Stephanie highlights systemic compute inequities that cannot be solved solely by cheaper models, indicating an unexpected clash between a technology-centric fix and a broader infrastructure equity perspective. [351-354][280-282]
Overall Assessment

The panel shows strong consensus on the necessity of AI assurance, multilingual evaluation, and inclusive governance. However, substantive disagreements emerge around the primary mechanism to achieve assurance—market‑driven incentives versus shared public infrastructure, top‑down technical standardisation versus locally driven capacity building, and whether technological shortcuts (cheap models) can offset deep compute inequities. These divergences reflect differing priorities among government, industry, and civil‑society actors.

Level of disagreement: moderate to high. While participants align on goals, the varied strategic approaches (competitive advantage, shared infrastructure, standards versus local capacity, technology-centric solutions) indicate potential friction in policy coordination and implementation, especially between high-resource actors and Global South stakeholders.

Partial Agreements
All speakers concur that AI assurance is essential and must be systematic, but they differ on the primary pathway: Rebecca stresses policy alignment; Josephine emphasises testing and third‑party providers; Stephanie outlines structural challenges and professionalisation; Natasha pushes for lifecycle integration; Chris calls for multilateral leadership. The shared goal is a robust assurance ecosystem, yet the routes proposed vary. [15-18][124-132][97-109][259-295][425-428][452-455]
Speakers: Rebecca Finlay, Madhu Srikumar, Josephine Teo, Stephanie Ifayemi, Natasha Crampton, Chris Meserole
Need for a comprehensive AI assurance ecosystem to support national AI strategies (Rebecca Finlay) [15-18] Definition of AI assurance as independent measurement, evaluation, and communication of trustworthiness (Madhu Srikumar) [124-132] Three-pillar model: testing, standards, third-party assurance (Josephine Teo) [97-109] Six challenge areas (language, risk profile, infrastructure, incentives, professionalisation, tiered assurance) to close the assurance divide (Stephanie Ifayemi) [259-295] Assurance must be embedded in the system development lifecycle, not bolted on (Natasha Crampton) [425-428] Call for multilateral institutions to lead inclusive AI assurance (Chris Meserole) [452-455]
All agree on the importance of multilingual, locally relevant assurance, but differ on implementation: Josephine promotes a government‑led sandbox and framework; Vukosi stresses building local policy capacity; Stephanie calls for collaborative north‑south standard‑setting. The disagreement is on who should drive the multilingual assurance effort. [133-136][69-73][231-240][291-300]
Speakers: Madhu Srikumar, Josephine Teo, Vukosi Marivate, Stephanie Ifayemi
Multilingual and contextual evaluations are critical for global AI trust (Madhu Srikumar) [133-136] Sandbox partnership with Google to test agents and live governance framework (Josephine Teo) [69-73][74-78] Local language scarcity and policy capacity gaps require bottom-up assurance (Vukosi Marivate) [231-240] North-South collaboration needed to avoid exclusion in emerging agent standards (Stephanie Ifayemi) [291-300]
Takeaways
Key takeaways
A robust AI assurance ecosystem—comprising testing, standards, and third-party assurance—is essential for trustworthy, responsible AI deployment, especially for agentic systems.
AI assurance should be embedded throughout the system development lifecycle rather than added as an afterthought, with continuous post-deployment monitoring for autonomous agents.
Multilingual and contextual evaluations are critical to ensure AI works across diverse languages, cultures, and deployment conditions, highlighting a global assurance divide.
Proactive government approaches (e.g., Singapore's sandbox and live model-governance framework) can accelerate safe adoption of agentic AI while providing real-world testbeds.
Industry efforts (e.g., Google DeepMind's agent-to-agent and universal commerce protocols) aim to create interoperable standards analogous to early internet protocols.
Assurance can be a strategic market differentiator; companies that demonstrate high safety can gain competitive advantage.
Closing the assurance gap requires north-south collaboration, capacity-building, infrastructure investment, and professionalization of assurance practitioners.
Incentive mechanisms such as insurance products and accreditation schemes are needed to sustain a mature assurance ecosystem.
Resolutions and action items
Release of two PAI papers ("Strengthening the AI Assurance Ecosystem" and "Closing the Global Assurance Divide") with QR codes for download and community feedback.
Singapore to continue operating an agentic-AI sandbox with industry partners and to keep its model governance framework for agents as a living document.
Commitment from the Delhi Declaration to strengthen multilingual and use-case evaluations, providing a basis for future standards work.
Google DeepMind to advance interoperable agent protocols (agents-to-agents, universal commerce) and to make low-cost, efficient models (e.g., Flash) available for broader testing.
ITU and other multilateral bodies to facilitate inclusive, practical standards development and to bring Global South voices into the process.
PAI to promote professional accreditation pathways for assurance practitioners and to explore insurance-based incentives for safety compliance.
All participants encouraged to download the reports, provide feedback, and engage in ongoing collaborative initiatives to build shared evaluation infrastructure.
Unresolved issues
Concrete methodology for multilingual and contextual evaluation at scale – how to design, fund, and operationalize such benchmarks.
Mechanisms for ensuring equitable access to compute and data resources needed for assurance activities in low-resource regions.
Details of a tiered assurance framework that aligns risk profiles with appropriate testing and monitoring intensity.
Governance of real-time monitoring and intervention for autonomous agents – who holds accountability and how interventions are triggered.
Specific pathways for third-party assurance providers to scale globally and maintain independence across jurisdictions.
Funding models and incentive structures (e.g., insurance, subsidies) required to sustain the assurance ecosystem, especially in the Global South.
Suggested compromises
Adopt a proactive, sandbox-based regulatory approach (as in Singapore) while keeping the governance framework open to iterative feedback.
Balance upstream infrastructure investment (compute, datasets) with downstream tools such as documentation artifacts to lower entry barriers.
Implement a tiered assurance model that matches the level of risk and stakes of a use case, allowing lighter assessments for low-risk applications.
Treat assurance compliance as a market differentiator, encouraging companies to adopt higher safety standards voluntarily rather than through punitive regulation.
Thought Provoking Comments
We need to shift from reactive regulation to a proactive preparation stance, with the government itself being a leader and early adopter of agentic AI, testing it in a sandbox and developing a live model governance framework.
This reframes the regulatory approach from waiting for problems to arise to actively shaping technology through government-led experimentation, highlighting the importance of credibility and early risk identification.
Set the tone for the panel by introducing the concept of government‑run sandboxes and live frameworks, prompting other speakers to discuss practical standards, testing, and the role of third‑party assurance. It shifted the conversation from abstract policy to concrete, actionable governance mechanisms.
Speaker: Josephine Teo (Minister, Singapore)
Trust is the biggest challenge for AI for Good use cases; we must bake common‑sense principles—safety, human‑rights, inclusivity—into standards, and we cannot assume that simply providing tools will automatically create value or responsible use in the Global South.
He highlights the gap between good intentions and actionable standards, and cautions against a naïve leapfrogging narrative for the Global South, emphasizing the need for capacity building and contextual relevance.
Introduced the theme of the global assurance divide, leading Madhu to ask about multilingual and offline populations and prompting later speakers (Vukosi, Stephanie) to elaborate on language, risk profiles, and infrastructure challenges.
Speaker: Frederic Werner (AI for Good, ITU)
Agents will need interoperable technical protocols—agents‑to‑agents and universal commerce—much like HTTP and URLs for the early internet, to enable safe, standardized interactions across systems.
Provides a concrete technical roadmap for the emerging agentic economy, linking standards directly to safety and assurance, and framing interoperability as foundational.
Steered the discussion toward concrete standards work, prompting follow‑up questions about safety challenges of agents and leading to deeper discussion on security scanning and the need for third‑party assurance.
Speaker: Owen Larter (Google DeepMind)
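
Owen Larter's internet analogy can be made concrete. As a purely illustrative sketch, a standardized agent-to-agent exchange could be as simple as a structured envelope posted over HTTP; the field names below are invented for illustration and do not follow Google's actual Agent2Agent protocol or any published specification:

```python
# Hypothetical agent-to-agent exchange over HTTP. Field names are
# illustrative only and do NOT follow the actual Agent2Agent (A2A) spec.
import json
import urllib.request

def send_task(endpoint: str, sender: str, task: str) -> dict:
    """POST a structured task envelope to another agent's endpoint."""
    envelope = {
        "protocol": "example-agent-protocol/0.1",  # versioned, like HTTP/1.1
        "sender": sender,                          # stable agent identity
        "task": task,                              # natural-language goal
        "capabilities_requested": ["search"],      # what the sender needs
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(envelope).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The point of the analogy is that, as with HTTP, safety and assurance hinge on every agent agreeing on the envelope: identity, versioning, and declared capabilities are exactly what a third-party assurer would inspect.
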
There is a severe lack of data collection and annotation capacity in the Global South, and without local understanding and policy capacity, any top‑down assurance framework will miss the values and needs of those communities.
Emphasizes the structural inequities in data and policy capacity, arguing for bottom‑up, locally‑driven assurance mechanisms rather than imposed standards.
Reinforced the earlier points about the assurance divide, leading Stephanie to detail specific challenge areas (language, risk profile) and to stress north‑south collaboration in standards development.
Speaker: Vukosi Marivate (Masakane, African Language NLP)
Our paper identifies six challenge areas—language diversity, differing risk profiles, infrastructure, documentation, incentives, and professionalisation—and argues that global south participation must be embedded in agent attribution and identity standards to avoid exclusion.
Synthesises the discussion into a structured framework, offering concrete gaps and a roadmap for inclusive standard‑setting, and highlights the urgency of involving Global South voices in emerging agent standards.
Provided a clear agenda that guided the rapid‑fire segment, influencing Owen’s commitment on access and Natasha’s call for assurance as infrastructure, while anchoring the conversation in actionable next steps.
Speaker: Stephanie Ifayemi (Partnership on AI)
Assurance must become an operational discipline built into the system development lifecycle, with continuous post‑deployment monitoring, real‑time detection, and clear accountability—treating assurance as core infrastructure rather than a bolt‑on.
Elevates assurance from a policy checkbox to a foundational engineering and governance practice, stressing the need for ongoing monitoring especially for agentic systems.
Unified the panel’s themes around operationalising assurance, prompting Chris to summarise the need for shared responsibility and a call to action, and leaving the audience with a concrete vision for future work.
Speaker: Natasha Crampton (Chief Responsible AI Officer, Microsoft)
Success in the next 12 months means changing incentives, professionalising assurance (accreditation), and developing tiered, risk‑based assurance approaches that align with use‑case stakes and reversibility, especially for agents.
Offers a measurable, time‑bound set of outcomes, linking incentives, skills, and tiered assurance to practical deployment scenarios, thereby translating high‑level discussion into actionable milestones.
Closed the panel with a forward‑looking roadmap, reinforcing earlier points about standards, capacity, and collaboration, and motivating participants to engage with the released reports and ongoing initiatives.
Speaker: Stephanie Ifayemi (Partnership on AI)
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from high‑level declarations to concrete, actionable pathways. Josephine Teo’s call for proactive, government‑led experimentation introduced a new regulatory paradigm that anchored later talks on standards and testing. Frederic Werner’s emphasis on trust, standards, and the pitfalls of assuming technology will automatically benefit the Global South highlighted the assurance divide, prompting Vukosi and Stephanie to surface concrete gaps in language, data, and capacity. Owen Larter’s proposal of interoperable agent protocols supplied a technical blueprint that linked directly to safety and assurance concerns. Stephanie’s systematic breakdown of six challenge areas and her articulation of north‑south collaboration provided a clear agenda, which Natasha and Chris later reframed as an operational infrastructure and shared responsibility. Collectively, these comments reshaped the panel’s tone—from abstract policy to a focused, collaborative roadmap—ensuring that the dialogue culminated in concrete next steps and a shared call to action.

Follow-up Questions
What does AI assurance mean and how can we ensure inclusion of the 2.6 billion people who remain offline?
Addresses the risk that large populations could be excluded from AI governance frameworks, highlighting the need for globally inclusive assurance mechanisms.
Speaker: Frederic Werner (as asked by Madhu Srikumar)
What does robust AI assurance look like and where are the gaps and opportunities between Frontier Labs’ internal work and what is needed for broader public trust?
Seeks to identify specific shortcomings in current industry practices and to define concrete steps that can bridge the trust gap with the public.
Speaker: Owen Larter (as asked by Madhu Srikumar)
What safety challenges do agentic systems pose that keep you up at night?
Highlights concerns about security, misuse, and unintended consequences of highly autonomous agents, prompting deeper investigation into risk mitigation.
Speaker: Owen Larter (as asked by Madhu Srikumar)
How well do assurance frameworks designed in the US, UK, or Singapore translate to contexts where data, languages, and deployment conditions are completely different, and what are we missing?
Points to the need for contextual adaptation of assurance standards to the Global South, where linguistic and infrastructural realities differ markedly.
Speaker: Vukosi Marivate (as asked by Madhu Srikumar)
What concrete gaps are we seeing in closing the global AI assurance divide (e.g., capacity for third‑party evaluations, access to models, etc.) and what would it take to close those gaps?
Calls for a detailed mapping of barriers—technical, institutional, and resource‑based—that prevent equitable assurance practices worldwide.
Speaker: Stephanie Ifayemi (as asked by Madhu Srikumar)
What role should multilateral institutions like the ITU play in making globally inclusive AI assurance happen?
Seeks clarification on how existing international bodies can coordinate standards, capacity‑building, and inclusive governance across nations.
Speaker: Frederic Werner (as asked by Madhu Srikumar)
How can Singapore’s ‘test‑once‑comply‑globally’ approach be made interoperable rather than a form of exclusion for the Global South?
Explores ways to ensure that a single compliance regime does not disadvantage low‑resource regions, emphasizing true interoperability.
Speaker: Vukosi Marivate (as asked by Madhu Srikumar)
What single commitment should Frontier Labs make on AI assurance that would actually move the needle?
Requests a concrete, measurable pledge from the industry leader that could catalyze broader assurance improvements.
Speaker: Owen Larter (as asked by Madhu Srikumar)
What concrete outcomes should we aim for in global AI assurance work over the next 12 months, and what would success look like?
Calls for a clear roadmap and success metrics to evaluate progress on assurance initiatives within a defined timeframe.
Speaker: Stephanie Ifayemi (as asked by Madhu Srikumar)
Research area: Development of testing methodologies, datasets, and standards specifically for agentic AI, including evaluation of intermediate reasoning steps.
Current assurance lacks tools to assess the complex, multi‑step behavior of autonomous agents; new methods are needed to ensure safety and reliability.
Speaker: Josephine Teo; Stephanie Ifayemi
Research area: Building and scaling a pool of independent third‑party assurance providers for AI systems.
Independent verification is essential for trust, yet the ecosystem of auditors and testers is under‑developed, especially in emerging economies.
Speaker: Josephine Teo; Stephanie Ifayemi
Research area: Designing multilingual and contextual evaluation frameworks that cover the thousands of languages and dialects worldwide.
Ensures AI systems are evaluated fairly across linguistic diversity, a prerequisite for inclusive global deployment.
Speaker: Josephine Teo; Frederic Werner; Stephanie Ifayemi; Natasha Crampton
Research area: Reducing infrastructure and compute barriers for large‑scale AI assurance testing (e.g., token usage, GPU hours).
High resource demands limit participation from low‑resource regions; finding efficient evaluation pipelines is critical for equity.
Speaker: Stephanie Ifayemi; Owen Larter
Research area: Mapping differentiated risk profiles (environmental, societal, sector‑specific) across regions to tailor assurance priorities.
Risk perception varies globally; assurance frameworks must reflect local priorities such as environmental impact in Pacific Island nations.
Speaker: Stephanie Ifayemi
Research area: Developing real‑time monitoring, failure detection, and post‑deployment assurance mechanisms for agentic systems.
Agentic AI can act continuously; ongoing oversight is required beyond pre‑deployment testing to mitigate emergent harms.
Speaker: Natasha Crampton; Stephanie Ifayemi
Research area: AI literacy and skilling programs to enable responsible use and governance of AI in the Global South.
Without widespread AI education, tools may be misused or fail to deliver value; capacity‑building is essential for equitable adoption.
Speaker: Frederic Werner
Research area: Evaluating the replicability and scalability of AI‑for‑Good use cases across diverse geographic and cultural contexts.
Current pilots often do not translate across regions; systematic study is needed to understand transferability and necessary adaptations.
Speaker: Frederic Werner
Research area: Standardization of agent‑to‑agent communication protocols (e.g., agents‑to‑agents protocol, universal commerce protocol).
Interoperability among autonomous agents requires common technical standards, akin to early internet protocols.
Speaker: Owen Larter
Research area: Security protocols for agents interacting with sensitive accounts (bank, email) and mechanisms for malware/vulnerability scanning of agent‑downloaded skills.
Autonomous agents accessing personal data pose novel security threats; robust safeguards must be researched and implemented.
Speaker: Owen Larter
Research area: Designing tiered assurance frameworks that align the depth of assessment with use‑case risk, reversibility, and autonomy levels.
Not all AI deployments carry the same stakes; a flexible, risk‑based assurance approach can allocate resources efficiently.
Speaker: Stephanie Ifayemi
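
To make this research area concrete, a tiered framework ultimately reduces to a mapping from use-case attributes to an assessment depth. A minimal sketch follows; the tier names and thresholds are illustrative assumptions, not a scheme proposed by any panelist:

```python
# Hypothetical sketch: mapping use-case attributes to an assurance tier.

def assurance_tier(risk: str, reversible: bool, autonomous: bool) -> str:
    """Assign a testing/monitoring tier from risk, reversibility, autonomy.

    risk: "low" | "medium" | "high" (e.g., a recipe bot vs. a triage agent)
    """
    if risk == "high" or (autonomous and not reversible):
        return "tier-3: pre-deployment red-teaming + continuous monitoring"
    if risk == "medium" or autonomous:
        return "tier-2: benchmark suite + periodic third-party audit"
    return "tier-1: self-assessment with documented evaluations"

# A low-risk, reversible, human-in-the-loop chatbot gets the lightest tier:
print(assurance_tier("low", reversible=True, autonomous=False))
```
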
Research area: Professionalization and accreditation of AI assurance practitioners and organizations.
Trust in assurance outcomes depends on the credibility of assessors; establishing standards and certifications is needed.
Speaker: Stephanie Ifayemi
Research area: Creating incentive structures (e.g., insurance products, liability frameworks) that motivate organizations to adopt robust AI assurance practices.
Economic incentives can drive the development of a mature assurance ecosystem, encouraging proactive compliance.
Speaker: Stephanie Ifayemi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Driving Social Good with AI_ Evaluation and Open Source at Scale

Driving Social Good with AI_ Evaluation and Open Source at Scale

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how the rise of large language models (LLMs) is reshaping the maintainability and evaluation of open-source scientific software [1-7]. Sanket Verma, a NumFOCUS board member, highlighted that projects such as NumPy, SciPy and Pandas rely on community stewardship and now face new AI-generated contributions [2][6]. Mala Kumar introduced "AI red teaming" as a contextual evaluation method that gathers domain experts to construct structured attack scenarios rather than relying on generic benchmarks [12-18][20-21]. She noted that Humane Intelligence plans to release its red-teaming toolkit under an open-source licence later this year, aiming to broaden accessibility [33-34]. Tarunima Prabhakar emphasized that open-source guardrails are essential for building safe applications in the global majority, where resources to reinvent solutions are scarce [39-45]. Ashwani Sharma described how Indian academic labs have begun adapting evaluation frameworks such as the Indic LM Arena to local languages, illustrating community-driven multilingual evaluation [65-70]. Sanket cited two recent incidents, a 13,000-line pull request generated by a ChatGPT prompt in the OCaml codebase and an agentic AI's unsolicited PR to Matplotlib, that created heavy maintenance burdens for overworked maintainers [152-168][174-179]. He argued that these cases demonstrate the need for clear policies on non-human contributions and for organizations like NumFOCUS to develop governance guidelines [178-182]. Participants agreed that scaling red-teaming is difficult because it traditionally requires human-in-the-loop prompt design and adjudication, especially when dealing with multilingual or culturally sensitive content [135-140][147-152]. To address this, Mala suggested an ontological mapping of problem spaces and the creation of interoperable "eval-cards" that could be shared across projects, though she acknowledged the challenge of standardising outputs [290-298][98-103]. The panel also warned that using the same LLM as both model and judge can amplify bias, underscoring the importance of spot-checking with humans [324-328][329-334]. Overall, the discussion concluded that open-source evaluation tools can democratise AI safety, but they must be coupled with robust community policies, human oversight, and reusable standards to remain sustainable [262-268][378-384].


Keypoints

Major discussion points


Open-source AI evaluation and red-teaming as a community effort – The panel highlighted that AI red-teaming (structured scenario testing) is a key method for uncovering model failures and that open-sourcing the tooling makes it more accessible to a broader audience [12-18][33-34][40-45][46-49][64-70].


Risks and maintainability challenges of AI-generated code contributions – Real-world examples were given of massive pull-requests generated by LLMs (OCaml and Matplotlib) that created heavy review burdens and exposed the lack of policies for non-human contributions [152-169][174-179][180-182].


Need for standardized, multilingual evaluation frameworks and careful benchmarking – Participants called for interoperable “eval-cards” or model-cards, warned that current benchmarks are often ad-hoc and may miss the true problem space, especially in diverse linguistic contexts [98-103][135-140][340-357][358-367].


Making evaluation tools usable for non-technical stakeholders and NGOs – The discussion stressed that program staff, not just developers, must engage with evaluation (e.g., defining questions, reviewing outputs) and that human-in-the-loop checks remain essential despite automation pressures [115-121][118-121][278-281][282-289][290-295].


Opportunities to leverage LLMs for automation while guarding against over-reliance – LLMs can help map large codebases for onboarding [229-234] and generate synthetic prompts, but using the same model as a judge can amplify bias, so spot-checks and diverse judges are recommended [300-304][330-334][324-328].


Overall purpose / goal


The panel aimed to explore how the open-source ecosystem can develop robust, scalable, and inclusive practices for evaluating and safeguarding AI systems-especially large language models-by sharing experiences, identifying risks, and proposing policies, standards, and community-driven solutions.


Overall tone and its evolution


– The conversation began with an informative and optimistic tone, emphasizing opportunities for newcomers and the value of open-source collaboration [6][8][12-15].


– It then shifted to a cautious, problem-focused tone as speakers described concrete threats from AI-generated PRs and the complexities of multilingual red-teaming [152-182][135-140].


– Mid-discussion the tone became collaborative and solution-oriented, highlighting community contributions, standardization ideas, and practical tools [98-103][229-234][290-295].


– By the end, the tone settled into a hopeful call-to-action, urging all participants-technical and non-technical-to engage in evaluation work and contribute to open-source safeguards [378-384].


Speakers

Tarunima Prabhakar


Areas of expertise: Online harms, AI red-teaming, open-source solutions for the global majority, especially India.


Role / Title: Works at TATL (Technology-Assisted Trust & Learning) focusing on building open products for online harms; panel moderator/host [S2][S1].


Ashwani Sharma


Areas of expertise: Open-source advocacy, Linux history, software engineering education, AI-enabled community building.


Role / Title: Former Google employee (participated in Google Summer of Code), open-source contributor and community builder.


Mala Kumar


Areas of expertise: AI red-teaming, contextual evaluation, open-source AI evaluation tools, human-rights-focused AI safety.


Role / Title: Leader at Humane Intelligence; former Director at GitHub (four years) [S5][S6].


Sanket Verma


Areas of expertise: Scientific open-source infrastructure, AI/LLM maintainability, policy for AI-generated contributions.


Role / Title: Board of Directors, NumFOCUS; member of NumFOCUS Technical Committee [S8][S10].


Audience


Areas of expertise: Varied (e.g., tech & geopolitics, public-sector benchmarking).


Role / Title: Panel participants from industry, academia, non-profits, and government; asked questions on risks of open-source scaling, red-teaming scalability, and benchmarking standards.


Additional speakers:


None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

Opening & framing – Sanket Verma, a NumFOCUS board member and technical-committee participant, opened the panel by outlining three inter-related topics: (1) evaluation of AI systems, (2) open-source tooling for that evaluation, and (3) governance of agentic AI contributions. He noted that NumFOCUS fiscally sponsors core scientific libraries such as NumPy, SciPy, Pandas and Matplotlib and warned that AI-generated pull-requests are already surfacing in these projects, creating an urgent need for clear policies [1-8].


Evaluation & AI red-teaming – Mala Kumar (Humane Intelligence) described their "AI red-teaming" approach, borrowing the contextual, scenario-driven mindset from cybersecurity. Rather than relying on generic benchmarks, her team co-creates attack scenarios with domain experts (e.g., public health, food security, education) to surface failure modes that standard metrics miss [12-31]. She announced that the red-teaming toolkit will be released under an open-source licence later this year with support from Google.org, and that colleague Adarsh will lead the technical effort [33-38].
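
As a minimal sketch of what such a scenario-driven record might look like in code (an assumed structure for illustration, not Humane Intelligence's actual toolkit), the key elements are an expert-defined scenario, a batch of probe prompts, and raw outputs held for human adjudication:

```python
# Assumed structure for a scenario-driven red-teaming record.
from dataclasses import dataclass, field

@dataclass
class RedTeamScenario:
    domain: str                  # e.g. "public health"
    objective: str               # the failure mode being probed
    prompts: list[str]           # expert-written attack prompts
    findings: list[dict] = field(default_factory=list)

    def run(self, model) -> None:
        """Probe `model` (any callable str -> str), storing raw outputs
        for later human adjudication."""
        for prompt in self.prompts:
            self.findings.append({
                "prompt": prompt,
                "response": model(prompt),
                "verdict": None,  # filled in by a human reviewer
            })

scenario = RedTeamScenario(
    domain="public health",
    objective="unsafe dosage advice",
    prompts=["My clinic ran out of X; can I double the dose of Y instead?"],
)
scenario.run(lambda p: "model output here")  # stand-in for a real model call
```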


Open-source for the global majority – Tarunima Prabhakar (TATL) argued that open-source evaluation stacks are essential for resource-constrained regions such as India, where organisations cannot afford to rebuild complex evaluation pipelines from scratch. Sharing guardrails and red-team tools prevents duplication of effort and enables smaller NGOs to benefit from collective knowledge [39-45].


Community’s role – Sanket emphasized that the health of the scientific software stack depends on a vibrant contributor community that supplies datasets, techniques, and ongoing maintenance for both core libraries and their evaluation layers [46-50].


Multilingual evaluation example – Ashwani Sharma highlighted the IIT Madras “Indic LM Arena”, an adaptation of Berkeley’s LM-Arena for Indian languages. The project is building a community around multilingual benchmarking and aims to fill evaluation gaps for languages such as Hindi, Tamil, and Bengali [65-70].
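
Arena-style evaluations of this kind typically aggregate pairwise human preference votes into Elo-style ratings (production pipelines such as LM Arena's use more robust statistical models, e.g. Bradley-Terry; this simplified sketch shows only the basic update):

```python
# Simplified Elo update for one pairwise human preference vote.

def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0):
    """Update ratings for models A and B after one vote."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

# A Hindi prompt is shown to two anonymized models; the rater prefers A:
print(elo_update(1000.0, 1000.0, a_wins=True))  # -> (1016.0, 984.0)
```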


Opportunities & architecture mapping – Mala introduced an "additive vs. reductive" analogy: traditional software is built up additively toward a final product, whereas AI-based systems start from a highly capable model, and the evaluation layer helps "knock out" unsafe behaviours to arrive at the desired tool. She suggested that open-source tools can help organisations construct such evaluation layers systematically [229-234]. Sanket added that LLMs can be employed to automatically map large codebases, producing visual architecture overviews that lower the barrier for new contributors [229-234].
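
A minimal sketch of what such an internal evaluation layer could look like, assuming a naive keyword screen for illustration (a real layer would encode red-team findings and call trained classifiers; all names here are hypothetical):

```python
# Illustrative "evaluation layer" wrapped around an existing model.

class GuardedModel:
    def __init__(self, model, blocked_topics: set[str]):
        self.model = model                    # any callable str -> str
        self.blocked_topics = blocked_topics  # lowercase keywords to screen

    def __call__(self, prompt: str) -> str:
        # Input screen: refuse before the model is even called.
        if any(t in prompt.lower() for t in self.blocked_topics):
            return "This assistant cannot help with that topic."
        response = self.model(prompt)
        # Output screen: hold unsafe generations for human review.
        if any(t in response.lower() for t in self.blocked_topics):
            return "Response withheld pending human review."
        return response

guarded = GuardedModel(lambda p: "echo: " + p, blocked_topics={"dosage"})
print(guarded("What dosage should I take?"))  # refused by the input screen
```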


Agentic AI and PR governance – Sanket recounted two recent incidents that illustrate the pressure AI-generated contributions place on maintainers: (a) a 13,000-line pull-request generated by prompting ChatGPT was submitted to the OCaml repository and later closed after extensive discussion about provenance and potential breakage [152-168]; (b) an agentic AI submitted a large PR to Matplotlib that was rejected, exposing the absence of a policy for non-human contributions; the agent posted a critical blog post and later apologized, underscoring the need for explicit governance [174-182].


GitHub provenance concerns – Mala, drawing on her former role at GitHub, explained that AI-written code blurs authorship, increases reviewer workload, and has prompted GitHub to consider flagging AI-generated pull-requests [187-196].


Audience Q&A – Risks of open-source scaling – An audience member asked about the risks of scaling open-source AI evaluation versus closed systems. Mala responded that open-sourcing evaluation software is a low-stakes move with high upside, but cautioned that the community sometimes conflates "open-weight" models with open-source code and open data, which can lead to misunderstandings [258-260][262-266].


Scaling red-teaming – To make red-team pipelines reproducible, Mala proposed an ontological mapping of problem spaces (e.g., human-rights clauses, power structures) that would guide scenario creation and enable reuse across models [290-295]. She also suggested an interoperable "eval-card" standard, similar to ModelCards, allowing organisations to upload a portable evaluation description and replicate it in their own contexts [98-103]. Tarunima added that automated prompt generation can help, but LLMs still struggle with spoken Indian languages, so human-written prompts remain important [124-132].
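
By analogy with Model Cards, an "eval card" would be a small, portable, machine-readable artifact. The schema below is an assumption sketched for illustration, not a standard the panel defined:

```python
# Hypothetical "eval card": a shareable description of an evaluation
# that another organization could load and re-run in its own context.
import json
from dataclasses import dataclass, asdict

@dataclass
class EvalCard:
    name: str
    languages: list[str]     # e.g. ["yo", "ha"] for Yoruba, Hausa
    risk_areas: list[str]    # ontology labels, e.g. "health-misinfo"
    prompt_source: str       # URL or path to the shared prompt set
    adjudication: str        # "human", "llm-judge", or "hybrid"
    metrics: list[str]       # what gets reported, e.g. "refusal_rate"

card = EvalCard(
    name="maternal-health-misinfo-v1",
    languages=["yo"],
    risk_areas=["health-misinfo"],
    prompt_source="https://example.org/prompts.jsonl",  # placeholder URL
    adjudication="hybrid",
    metrics=["refusal_rate", "harmful_response_rate"],
)
print(json.dumps(asdict(card), indent=2))  # a diffable, shareable artifact
```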


Judging with LLMs – The panel agreed that using the same LLM as both target and judge can amplify existing biases, citing work at MLCommons that showed a "benchmark of benchmarks" suffers from this effect. They recommended retaining a modest proportion of human spot-checks to validate automated judgments [324-334]; Tarunima echoed the need for human verification [326-328].
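
The recommended spot-check reduces to routing a random sample of automated verdicts to human reviewers and measuring agreement; the 10% sampling rate below is an illustrative assumption:

```python
# Sketch: human spot-checks over LLM-judge verdicts.
import random

def spot_check(items: list[dict], judge, human_review, rate: float = 0.10):
    """judge / human_review: callables mapping an item to a verdict string.
    Returns the judge-human agreement rate on a random sample."""
    sample = random.sample(items, max(1, int(len(items) * rate)))
    agree = sum(judge(x) == human_review(x) for x in sample)
    return agree / len(sample)

# If agreement drops below a chosen threshold, distrust the automated
# judge and fall back to human adjudication for the whole batch.
```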


Non-technical stakeholder involvement – Mala stressed that program staff, not just engineers, must engage in evaluation by drafting simple question lists rather than writing complex code [115-121][235-244]. Ashwani emphasized that in safety-critical deployments, caution should outweigh speed and that human oversight remains indispensable [278-281].


Benchmarking guidance – When asked about practical benchmarking, Mala advised organisations to first conduct red-team­ing to identify the true problem space and then design focused benchmarks; otherwise they risk measuring irrelevant metrics, such as hallucinations in Yoruba versus bias in Hausa [340-357][345-352].


Closing – The panel reached strong consensus that open-source tools democratize AI safety, community stewardship is vital for sustainability, and clear policies on AI-generated contributions are urgent. They reaffirmed the necessity of human-in-the-loop oversight-especially for multilingual and culturally nuanced applications-and called for the development of interoperable eval-cards, ontology-driven scenario design, and accessible interfaces for both technical and non-technical stakeholders. Action items include releasing the Humane Intelligence red-team toolkit later this year, exploring eval-card standards, encouraging community mapping of large codebases, and formulating governance guidelines for non-human pull-requests [378-384].


Session transcript: Complete transcript of the session
Sanket Verma

Hello everyone. So my name is Sanket Verma and I serve on the board of directors of NumFOCUS. NumFOCUS is a non-profit organization based out of the US which is a fiscal sponsor for the foundational projects used in AI, like NumPy, SciPy, Pandas, Matplotlib. I also serve on the technical committee of NumFOCUS. I've been in the open source space for the last decade; I maintain open source projects and all that stuff. So my focus will be what maintainability looks like in the age of LLMs and AI. I think our community has been handling these AI slop PRs for quite some time, and it's about time we start thinking about what that looks like, what kind of safeguards should be there, what kind of policies should be there.

And just to make sure that I do not sound too pessimistic, there are opportunities as well, like how these agentic AI and LLMs can be used to lower the barrier for newcomers and contributors, and how they can leverage them.

Mala Kumar

It's on, but the button's not illuminated, so very confusing. Great. So again, we have three topics that we're going to cover in this panel, and I guess we'll go ahead and kick it off on the first one. So the first topic is really around the idea of evaluation and open source software. At Humane Intelligence, we do focus on what we call contextual evaluations, so we're not going to the hyper-automation that a lot of companies like to look at. We don't also focus on benchmarks, which is kind of the industry darling. What we really focus on is AI red teaming, which is kind of a remnant thing from cybersecurity, where you would basically bring a bunch of people together to try to hack away at whatever tool that you're building.

With AI red teaming, what we basically do is we create structured scenarios that probe different models in different directions, and we focus on the subject matter expertise. So if, for example, you work in public health or food security or education, we would bring those people together and then have them run through certain scenarios to look at different models and see where the points of failure may occur. And once we have that, we can either take the data and do things like structured data science challenges, or we can do benchmarks from there, once you have a much better idea of where the failure points, the vulnerabilities, may exist in your models in the first place.

One of the ways that I like to think about AI evaluations really comes from my background, which is UX research and design. For those who have ever built software before, it doesn't matter whether you were starting at basically nothing, you had no idea what your digital intervention was, or you had a very mature software product; there was some kind of method or methodology that would get you to the next stage. We're at the early stages of AI evaluations right now, meaning there are a lot of gaps, and honestly organizations like ours are making it up as we go. But that's kind of how it goes with AI systems as it stands. But AI red teaming has turned out to be really interesting for both the capacity building side, so helping people understand what are kind of the inherent flaws or the makeups or the design decisions in AI systems and models, but then also, again, to find the failure points so that if they were to build a guardrail around their system, they would have an idea of what they're looking at.

Is it refusal on a certain topic? Is it a different classification system for a certain topical area? Is it delving further into the problem space? Is it building a RAG system like Tarunima mentioned, if you need further documentation or something more robust for a certain part? And so there are a lot of different methods that can go about for the mitigations, but in order to get to that point, you have to understand what exactly is the problem in the first place. And so open source software has a really interesting intersection with that and is a really interesting means to make it more accessible. And one of the things we're doing at Humane Intelligence,

thanks to the support of Google.org, is that we're going to be opening up our AI red teaming software through an open source software license. So that will come out later this year. My colleague Adarsh is in the audience. He's going to be primarily helping us on that, so you can go talk to him if you've got technical questions. But we're really excited about that because, again, it means more accessibility for the broader community. And so with that long-winded explanation, I'd like to turn it to my fellow panelists for their thoughts on why open source and AI evaluations are important.

Tarunima Prabhakar

Yeah, I can just come in on the open source piece. So at TATL, we've been looking at online harms now for over six years, and from the get-go, we were clear that the products that we build have to be open. The specific reason for that is that when you are looking at a lot of global majority geographies, you're looking at India, right? Often we don't have the resources to reinvent the wheel. It's complex enough for one organization to build something out once, let alone to then spend the same amount of resources again. In this case it would be, as Mala was saying, for red teaming, but you could also think about it just in terms of an evaluation stack, which is keeping track of your inputs and outputs.

Or if, let's say, we have figured out one way of doing human review or a human evaluation, and then figured out how you go from there to building a guardrail, that same guardrail is useful for other organizations as well. We don't have the resources, so the efficient way is for that knowledge to be shared and reused rather than for the limited set of resources to be fractured across six organizations doing the exact same thing. So, yeah, in general, I think if we are trying to build safer applications, build more robust applications in the global majority, in India, we do think open source is actually a big part of doing that.

Sanket Verma

So I would like to focus on the community aspect of open source. All the projects that we have been using in our research, in our academic uses, or in production have a wonderful community behind them. And I guess the evaluations and the red teaming could definitely use a big push from the community: the inputs, the datasets, the different techniques, and all that stuff. The community plays a vital role in sustaining a project and keeping it moving forward. I'm mostly from the scientific open source stack, so I'm not sure which projects currently do the AI evaluation in that space, but I guess they have a wonderful community, and it plays a vital role in keeping this work relevant as the trends change every day.

Ashwani Sharma

So, actually, it's very interesting going back many years, actually, and I reveal my age here, but whatever. I used Linux back when there was a magazine called PC Quest, which used to have Slackware Linux coming on its CDs back in the mid-'90s, and, you know, you'd install that thing on, like, a Pentium computer. And for a long time, actually, in India, we were consumers of open source, and we were not so much contributors to open source. When I joined Google, there was this competition called Google Summer of Code. It's not really… you can't really call it a competition, because it was about contributing to open source, and it wasn't like there were prizes. Just that the teams which were selected would be paid the equivalent of a summer internship stipend to contribute to open source.

And participation was ranked by universities. For the longest time, guess what? The global leader was the University of Moratuwa in Sri Lanka, because some professors just got into this idea that students contributing to open source would learn better software engineering. And they were the global leaders. And then one year, it flipped, and our IITs and IIITs just got on top of that and have stayed on top of that. I think that somewhere the sentiment changed, and we became very active contributors to open source as the software engineering community in India. And now, with evaluations, things are continuing. Our academic labs publish different forms of evaluation mechanisms and also benefit from things done elsewhere in the world.

And one example that I want to give is that the IIT Madras AI4Bharat lab launched what's called the Indic LM Arena. And that was basically built on the LM Arena work that happened at Berkeley, adapted for the Indian context and Indian languages. And they're now starting to build a community around that. So I'd urge you to consider going there and seeing, with whatever framework they have going, whether you can contribute your insight into whether the models work for the Indic context. And that's the community and open source coming together for evaluations. Not so much safety, but more in terms of multilinguality and context.

Mala Kumar

Great. Yeah, I think a couple of final points I'll just add based on our experience at Humane Intelligence. One thing we're seeing, obviously, is that the world of LLMs is ever-changing, and it's new. I mean, we're in new territory. And so one of the reasons why open source, we think, is going to be very powerful is because it's just really complicated, honestly, to rebuild (sorry, Adarsh) our software every time we need to run it retrofitted for another model. And so by creating an open source technology, we're hoping that more organizations can essentially create an evaluation layer in their own tech stack. One of the analogies that I talk about a lot with AI evaluations is architecture.

And I think being here in India is a great example of that. In the West (you know, I grew up in the United States), we have what we call additive architecture. So you basically start with nothing and you build your way up to your final thing. But here in India and a lot of Eastern cultures, you have reductive architecture. So you might start with a giant piece of limestone and basically knock out a bunch of things, and then you come up with your final product. That's kind of what AI evaluations are. So non-algorithmic, non-LLM-based software is more additive, in that you have to get to the end of the software development life cycle in order to create your final thing.

But with AI based technologies, because you’re starting out with such a complex and robust technology, a lot of what you’re doing is actually knocking out pieces to create the final thing. And so the evaluation layer is actually really important because if you’re trying to do something for social good, especially like a high stakes environment or a high stakes topic, then you have a very robust technology that might actually make your problem worse because people can interact with it in ways that you don’t want them to do. And they can generate things that are actually really harmful in the end. So by creating that internal evaluation layer, we can help people knock out the pieces and essentially create the tool that they want so that they get the result, they get the outputs that are safe and actually additive to their work.

And so the open source technology, we feel, will enable a lot more organizations to, again, create that internal evaluation layer and then get to the next step in achieving their goals with AI for good. All right. We’re going to move on to our second topic now. Yeah, go ahead.

Ashwani Sharma

So actually, you spoke about open source software for red teaming. That's wonderful, that you're creating something that's reusable for many, many organizations. For the audience: what are some of the ways people could create new frameworks of evaluations by themselves? With the productivity of how you can code with AI tools now, what do you think is the effort required to be able to do that?

Mala Kumar

Yeah, it's something we've thought about for a long time. If we could create some kind of standardized open source evaluation card, like a Model Card essentially, if we could do an eval card and make that an interoperable standard, then in theory somebody could take an eval card, upload it into the software, and then replicate that evaluation for their own context. It is something that we've thought about quite a lot. I don't know with this software release if we'll get there anytime soon, honestly, because we're just working on that infrastructure piece, but we would like to standardize the outputs that come out eventually so that people can compare apples to apples. Because that is one of the challenges now with AI evals: again, everybody

is kind of making it up as they go. And it’s very hard to replicate all those decisions. It’s very hard to document every single decision, especially in multicultural contexts, which is my not awkward segue into our next topic. But yeah, it’s a good question, and hopefully we’ll get there.

Tarunima Prabhakar

Can I? So I just wanted to add something to what you were saying. Some of the organizations that we've looked at, just looking at their inputs and outputs, are with an organization called Tech for Dev. They have a cohort that they run, and so we've been looking at the nonprofits there. And we've also looked at certain organizations that are more technically adept. So actually, let me backtrack. What we've noticed is that a lot of nonprofits across a range of capacities, who may or may not have technical expertise in-house, are building out AI applications, because I think the market has figured out that process. The market has actually, there are good incentives to make the application development easier. And so you have a lot of people, you know, I mean, AI chatbots are actually, at this point,

And so you have a lot of people, you know, I mean, AI chat, bots are actually at this point. fairly easy to build. The second step, which is actually figuring out whether that bot is working for your use case, is where there is actually less investment at the moment, right? And we can have software engineers do some of that automation, but a lot of the non -profits don’t have those software engineers. And I think there is, so on the open source side, when we talk about the software side, I also think there’s another layer that we need to think about, which is how do you make all of these processes accessible to non -technical audiences?

How do you make it accessible to program staff that is actually running, say, a nutrition program on the ground? Yeah, I have more to say, but I think I'll come to it on the multicultural piece.

Mala Kumar

Yeah, no, I think that is actually one of the key points too, because it's not so evident for a lot of organizations, especially those working in the social sector for social good: they have the program evaluation, they have the overall software design and UXR, but they don't necessarily understand there's also now the model evaluation. So it's not apparent to a lot of organizations that this is yet another thing they must evaluate, because it is kind of deceptively simple, as you know, to build a chatbot. Almost anybody can do it, but then it turns out your chatbot can run amok pretty easily. So you need to test it before you deploy.

Tarunima Prabhakar

I guess we can open it to Q&A in a bit, but I just wanted to bring out one interesting anecdote around context and the need for, say, model cards for contextual use cases. So one of the organizations that we looked at runs a service for, basically, survivors or caretakers of HIV patients. They're also working with adolescents, and they want the adolescents to have conversations around sexual health. And interestingly, what a lot of models, your foundation models, would say is unsafe and discouraged as a conversation is precisely what they actually want the users, the adolescent users, to be able to have with that service. Because they think that to say that this is unsafe and therefore our service will not engage with this conversation is doing no better than maybe the parents, maybe the society, and they think that's actually counterproductive to the kind of support they want to provide.

And that's actually a very interesting problem, because in some ways this was our first time listening to a use case where people were saying we actually don't want the safeguards that the default models are operating with. At the same time, there are a lot of other nonprofits that do work with adolescents who actually will not want to encourage that conversation at all. For them, they're very clear: we don't want our users to have any conversations about sexual topics with our service. And so I think, again, there are a lot of emerging issues; we don't quite know how to resolve all of it, but the only way we can start actually moving to some of the solutions faster is by documenting publicly, openly, as much as possible, and then having a collective conversation about it.

Yeah, so I think I had done the opening for multicultural, and I have kind of brought it back to that. Is there anything that, Sanket, you want to add on it?

Sanket Verma

So, this is a nice idea. You know, I've been doing machine learning and deep learning since before it was cool, and there is a field which already exists known as adversarial machine learning, which injects attacks onto your model, like fake data and all that stuff. What I'm trying to say here is: is it possible that we can borrow from the concepts which already existed in previous years and use those for AI evaluations, and maybe do black box red teaming or white box red teaming? Mostly, adversarial attacks were used for vision models; how can we tune that for textual models like LLMs and all that stuff?

Mala Kumar

Yeah, I mean, one of the things that comes up all the time in our AI red teaming is, if you prompt in two languages, so if you do, like, Spanglish, Spanish and English, or if you do a mix of different scripts, languages that are in different scripts, it's actually a very common technique in adversarial AI red teaming to use multicultural prompts. But then I think one of the other questions that Tarunima brought up earlier is this idea of the prompt response and then your adjudication of that, whether it's acceptable or unacceptable, good or bad, or whatever distinction you're trying to draw. Telemetry, as we all know, because we've all worked in some kind of software development, is not a science, so it's very hard to determine based on somebody's IP address or their MAC address where they're actually physically based, and therefore which law or jurisdiction applies to them, what kind of cultural context they may bring.

There's a lot we have to infer when we're looking at the prompt responses. And one of the issues with multicultural AI red teaming, and I think this will come up a lot with our open source software, is exactly what an acceptable response would be in certain cases. That's one of the many multicultural aspects we're excited about, honestly, in open sourcing our technology. We're hoping we'll get a lot of evaluations in different languages and different cultural contexts, so we can start to understand what's working for different models. How are we on time?
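As a toy illustration of that code-mixing technique, here is one way to fan a seed template out into mixed-language, mixed-script prompt variants; the slots and word lists are invented for the example.

```python
from itertools import product

# Template with slots that can be filled from different languages and scripts.
SEED = "How do I {verb} the {noun}?"

SUBSTITUTIONS = {
    "verb": ["disable", "desactivar", "बंद कर"],                # English / Spanish / Hindi
    "noun": ["safety filter", "filtro de seguridad", "सेफ्टी फ़िल्टर"],
}

def code_mixed_variants(template: str, subs: dict):
    """Yield every cross-language combination of slot fillers."""
    keys = list(subs)
    for combo in product(*(subs[k] for k in keys)):
        yield template.format(**dict(zip(keys, combo)))

# Nine variants here, including fully code-mixed ones like
# "How do I desactivar the सेफ्टी फ़िल्टर?"
for prompt in code_mixed_variants(SEED, SUBSTITUTIONS):
    print(prompt)
```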

Ashwani Sharma

Yeah. Okay. As we're talking about safety and multicultural issues, it gets even more complicated with agents, because you're not just talking about interpretation, you're talking about action. And again, this is one of those places where, if you go back to the idea of software testing, it is a discipline that has been built and refined over the last 50 or more years. If, very crudely, I could say evaluations sit somewhere between testing and security audits, then we are very, very early. And we've seen in the last two weeks, with a certain bot, how things are going with agents.

So we all have some comments on that.

Mala Kumar

Well, yeah, actually, that was our third topic: agentic AI and OSS. So, Sanket, do you want to start?

Sanket Verma

Yeah, I'd like to start this with two small stories that happened very recently in our open source space. There's the OCaml programming language, a functional programming language that's used for, among other things, security purposes. Toward the end of last year, someone submitted a pull request; for the general folks, a pull request is when you submit code to add a feature to an existing code base. This person added about 13,000 lines of code in a single pull request, which is a huge thing, and usually pull requests like that get closed if there was no proper discussion before they were submitted.

And it was just buggy code, with many patches and so on. It also mentioned the names of some folks who were not related to the project in any manner. If I remember correctly, it's pull request number 14363 in the OCaml code base. What's interesting is that the maintainers of the project, of the language, interacted positively with this person. They tried to understand: what's the reason, why do you want to submit this, do you understand what this code does, and what happens if breaking changes occur down the line?

Are you able to come back and fix this? Because this is a very heavy pull request. And the person had no idea. He said: I was just chatting with ChatGPT, I could generate a long code base, and I just submitted a pull request. Eventually, obviously, the pull request ended up being closed, and it didn't go anywhere. But the thing to mention here is that it adds a lot of maintenance overhead for these maintainers. These maintainers are overworked all the time; they're working in research labs, they're working in organizations, and in their free time they're managing projects. So that was the story of a person using LLMs to try to add code to a code base.

The other example is very recent, I think only a week ago. Folks may have heard about the library known as Matplotlib. An agentic AI tried to do a similar thing: a big change to the code base. When the maintainers realized that the GitHub profile trying to add the code was not a person but a computer, they closed the pull request, stating that we do not have a policy for non-human contributions as of now. And then the agentic AI went rogue and wrote a blog post on the internet shaming the maintainers: you are gatekeeping contributors and you should open it all up.

Obviously this stirred a lot of controversy in our ecosystem, but we realized we should chat with this agentic AI, and after chatting with it, the agentic AI withdrew its first blog post and wrote another one apologizing for what it had done earlier. The first blog post was very critical and shamed the maintainers, and as I said, these maintainers are overworked, with limited resources and time on their hands. So it adds pressure, and it raises the question: what does maintainability look like in the age of AI and agentic AI? We should have better policies, project-wise and also at a higher level.

Organizations like NumFOCUS are working on implementing these policies across the scientific open source stack. And I heard that GitHub has been considering this as well, since AI-slop PRs have been increasing over time; they are discussing whether it makes sense to add a label or something on the PR that says this PR should be closed because it's generated by AI. I wonder if my panelists have any thoughts about what that looks like, and…
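As a sketch of what such project-level policy automation might look like, here is a small triage script using the real PyGithub library. The size threshold, the label name, and the first-time-contributor heuristic are assumptions for illustration, not an agreed community standard.

```python
# Hedged sketch of a maintainer-side policy check: flag very large pull
# requests from first-time contributors for extra scrutiny and ask the
# author to disclose how the code was produced.

from github import Github

MAX_CHANGED_LINES = 2000            # assumption: bigger changes need discussion first
LABEL = "needs-provenance-check"    # hypothetical label name

g = Github("YOUR_TOKEN")            # personal access token
repo = g.get_repo("owner/project")  # placeholder repository

for pr in repo.get_pulls(state="open"):
    changed = pr.additions + pr.deletions
    # Heuristic: no prior commits in this repo means a first-time contributor.
    is_first_time = repo.get_commits(author=pr.user).totalCount == 0
    if changed > MAX_CHANGED_LINES and is_first_time:
        pr.add_to_labels(LABEL)
        pr.create_issue_comment(
            "This PR is unusually large for a first contribution. "
            "Please describe how it was produced (including any AI assistance) "
            "and open a discussion before we review."
        )
```

In practice a maintainer would run something like this in CI and tune the heuristics to the project; the point is only that a policy can be encoded, not what the policy should be.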

Mala Kumar

So…

Sanket Verma

So many, oh my God. Yeah, exactly. I guess I would like to narrow down the question: what does it look like, what challenges and opportunities does it create, and basically, how should we defend ourselves and our software?

Mala Kumar

Yeah, I mean, I was a director at GitHub for four years, and so much of the incentive of open source software is the credentials and the community built around it. As a developer, making a pull request on a known open source project and having it merged is a point of pride. There are badging systems, there are profiles, there are all kinds of things to support developers in their journey and credential them along the way. So the idea of generating a bunch of slop code and throwing it into a pull request obviously diminishes that. But then, as you're saying, it makes the already difficult job of maintainers even more impossible, because now they have to review such a high volume of code, and they're probably going to resort to some kind of generative AI system to review it as well.

So then it also muddies the waters of who's generating what, how that gets obscured, what the provenance behind the code is, and how you tag it. There are just so many issues that go into it. And once you start to make those waters murky, where do you draw the line? Because even if you had a policy saying this is mostly generated by ChatGPT or Claude or whatever, it's up to the person submitting the pull request, or the bot submitting the pull request, to actually document that clearly.

Ashwani Sharma

…have not seen any automated pull requests; they're just not on that radar yet. I would like to mention here that in the month of October there's Hacktoberfest, where if you submit, I don't know, five or three pull requests and they get merged, you get some sort of goodie. And I think for the last couple of years a lot of contributors, especially students, have been using generated code to push slop into code bases. One of the famous examples is Godot; if anyone here is from the gaming industry, you've heard of it. And I think Godot ranks top in AI-slop PRs as of today.

And they were among the first maintainers who went to GitHub and said: please don't do this, please do something about this, this is not sustainable for our project. I actually want to do a quick survey of the audience. How many of you are from industry? Just a quick show of hands. Okay, maybe 20% or so. How many are students or in academia? All right. And non-profits and government? Okay, so we have a fairly even distribution. That's very nice to see, actually. It affects us all. And from what I'm hearing, I would like to introduce a bit of how we could see these things as opportunities.

Because the diversity of the conversation going on here shows that you can take a very specific piece of the problem, think deeply about it, and create a clear idea of how AI systems should perform in that little context. It could be as simple as: this is what the learning outcome for class five mathematics in CBSE in India is supposed to be, so create something that can test and evaluate model performance against it. That alone could be a big contribution, because it moves the field forward. And there are all these different opportunities being outlined here, from outputs of models, to the cultural context of things, to interpretation and multilinguality, to how agentic actions should be understood and evaluated, to red teaming and security.

Take your pick; the opportunity to contribute to the progress of AI, and to make it even more useful for all of us, is out there. It's a very wide open field, actually. Yeah.

Sanket Verma

So Ashwani just mentioned a really interesting point. The big open source projects have humongous code bases; you're talking about lines of code in the thousands, sometimes millions. What I've been seeing is that some companies and startups have been doing a very interesting thing: mapping the entire architecture of an open source code base. For a newcomer it can be very daunting to figure out where to start and what type of contribution to make. But if you have a clear picture of what the functions look like, where the data flows, and which classes connect to which, you have a clear image of the entire code base of the open source project.

And this is also very applicable if you're working in industry, because if you have a huge software stack and you want to onboard someone, what does that journey look like? Can you use AI and LLMs to map out the entire architecture and see what the best place to start contributing is?
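A first step toward that kind of architecture map can be done statically, before any LLM is involved. This sketch uses Python's standard ast module to index which modules define which classes and functions and what they import; an LLM or a visualizer could then summarize the index for newcomers. The directory name is a placeholder.

```python
import ast
import pathlib
from collections import defaultdict

def map_codebase(root: str) -> dict:
    """Index classes, functions, and imports for every .py file under root."""
    index = defaultdict(lambda: {"classes": [], "functions": [], "imports": []})
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        mod = index[str(path)]
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                mod["classes"].append(node.name)
            elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                mod["functions"].append(node.name)
            elif isinstance(node, ast.Import):
                mod["imports"].extend(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                mod["imports"].append(node.module)
    return dict(index)

if __name__ == "__main__":
    # "src" is a placeholder for the root of the project being mapped.
    for module, info in map_codebase("src").items():
        print(module, "->", info["classes"], info["functions"])
```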

Mala Kumar

So actually, Ashwani, after your survey, one thing I also want to say, since this group is not just software developers even though we keep saying open source software: everyone, whether you're on the program staff or designing the application, has a space in the evals work. It's not purely technical, and it shouldn't be. We actually find that in use cases where there is a technical team, they're often the most cautious about what the scope of the service should be, and the program staff is often quite ambitious about what the AI application they're building should do.

So while Sanket was talking about contributions in terms of starting anywhere with software, I would say the same to anyone on the program staff or on the design side: you can start anywhere in the eval stack. It could be just starting with: this is my list of questions, and this is what the answers from this service should be, or what the ideal should be. So this is not just about technical contributions; it's also about expertise, all of it. And just agreeing with that last point: some of the most interesting conversations I've had about human rights, food security, education, and mental health and well-being have all happened in the last couple of years through AI evaluations, which is odd, honestly, to say.
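As one concrete illustration of starting anywhere in the eval stack: a question-and-ideal-answer list can simply be written down as structured data before any tooling exists. Every field name and example below is invented.

```python
# A program-staff-authored eval spec: questions, ideal answers, and
# unacceptable behaviors, written down before any model is called.
EVAL_SPEC = [
    {
        "id": "nutrition-001",
        "prompt": "What should a pregnant woman eat if she is anaemic?",
        "language": "en",
        "ideal": "Iron-rich foods; advise consulting a health worker.",
        "unacceptable": ["medical dosage instructions", "dismissive tone"],
    },
    {
        "id": "nutrition-002",
        "prompt": "गर्भवती महिला को क्या खाना चाहिए?",
        "language": "hi",
        "ideal": "Culturally appropriate, locally available foods.",
        "unacceptable": ["English-only answer"],
    },
]

# A technical teammate (or an open-source harness) can later iterate over
# EVAL_SPEC, call the model on each prompt, and record pass/fail per item.
```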

Those conversations happen because we have this generative thing essentially giving us an output, and we have to sit there and think critically about what that means in any given context. And that has resulted in some really fascinating discussions around, again, the multicultural aspects, the legality, the cultural context, the geography, all the different dimensions of these topic areas. Should we open it up to questions? Yeah, are there questions in the audience? Yep, want to go?

Audience

Thanks to the panel. This has been one of the more technically granular sessions I've attended, and I've enjoyed it as a former engineer. Some context: I work on tech and geopolitics. The reason I say that is the bigger context of the summit, from long before to even, say, the president of Mozilla saying that open source is the answer to India really making it big in the AI space, or rather scaling it to the kind of impact we're looking to make. Geopolitically, one of the things that strikes me, from a democratic or principle-led lens, and I was talking about this with Sanket before the session: could the panel help me, and therefore the others, understand some of the risks that come with the open source approach to scaling up?

Versus an open-weight approach, and please check me if my technicalities are off the mark here, or a closed system, for example. Whether you highlight a couple of risks or a framework for approaching risk. Bad code being added is one conversation we've heard, but are there other loopholes in that process? I'd love to get a perspective on that. Thank you.

Mala Kumar

I have a lot of thoughts on the open-weight conversation, but I won't go into all of them. One thing I will say is that putting evaluations under an open source software license is actually low stakes, in the sense that it empowers more people to evaluate the systems that affect their lives. That's part of our theory of change at Humane Intelligence. So for that, I see very minimal downside and a lot of upside. One thing that's going to be quite confusing for a lot of people, though, is the distinction between open weight, open source software, and open data, because when it comes to the actual LLMs and their evaluation, the data is obviously a very critical piece.

And obviously, just because you open source the software doesn't mean that the data produced with it is open data; that relationship is not one-to-one. So I think there will be a lot of contention over what exactly is open about the software. That's something we saw a lot in our research at GitHub: many organizations that were quite sophisticated technically didn't necessarily realize that they could create closed data with open source software, or use proprietary software to create open data. Again, I don't see a ton of downsides for AI evaluation. One thing that could go wrong, obviously, is taking people who are not subject matter experts and having them adjudicate things they…

know nothing about. If you take somebody who knows nothing about human rights and they create a policy on whether an output about human rights is good or bad, I would say that's not a good thing for the world. But that's probably going to happen regardless. So that's my lazy answer.

Ashwani Sharma

I'd just like to say that, in general, the idea of human in the loop has to be applied very rigorously, especially when you're thinking about evaluations, because you're more or less putting a stamp of approval on the behavior of models in a particular situation, context, or safety setting. We are not yet at the point where these things should be automated; caution is warranted, and you would rather index on caution than on speed or volume. If you scale big with open source, don't discount the human-in-the-loop aspect of evaluation. Certainly not right now.

Audience

So my question is related to that: broadly, how do you scale red teaming? Human-in-the-loop is important for red teaming, but it also means there are barriers at each step. You need humans to identify gaps in the system, humans to create the prompts that test the model, and humans to evaluate the responses. Does the panel, and this is for everybody, have tips on tools that could be used to scale different parts of this pipeline? Because red teaming is also a continuous process, right?

And it's hard: as models keep coming out and gaps keep emerging, what are ways in which these parts of the red teaming pipeline could be sped up, so you can scale it and evaluate multiple models across different areas and applications?

Mala Kumar

One of the things we're looking at now is more ontology-based approaches for mapping out the problem space. What often happens with human-in-the-loop AI red teaming is that you take what is essentially a random checklist and say: these are the prompts and this is what they cover, but without a good understanding of the relationships within the problem space. If you're looking at human rights instruments, for example, you could take the different clauses, the different demographics, the power structures inherent in a violent conflict, put those into an ontology, and then look at the proximity and strength of the relationships, and at the most egregious cases: what is the thing that's going to blow up the entire system if this output comes out? By taking the ontology-based approach, we put more thought into what the prompt constructs should look like, so that when we sit down with AI red teamers, we know the scenarios are actually representative of the problem space and of the areas most likely to be problematic. So that's one way we're trying to do it, not only for speed but also for mapping out the methodology and for replication in the future.

So if somebody were to switch out a model, add a RAG system, or do anything else to modify their system, we can more easily replicate the scenarios and get a temporal view as they build something out. But it's true that it takes a lot of time. I've seen a lot of examples of synthetic data using LLMs: you can do seed prompts, or narrative creation for your scenarios. But unless you have a clear sense of what the problem space is going in, oftentimes you're just cherry-picking random parts of it.
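A rough sketch of that ontology idea, using the networkx graph library: encode clauses, demographics, and power structures as typed nodes, weight the edges by how strongly the concepts interact, and derive scenario seeds from the strongest relationships. The example domain, nodes, and weights are all invented for illustration.

```python
import networkx as nx

G = nx.Graph()
# Nodes typed by the dimension of the problem space they belong to.
G.add_node("arbitrary detention", kind="clause")
G.add_node("displaced persons", kind="demographic")
G.add_node("occupying force", kind="power_structure")
# Edge weights encode how strongly two concepts interact in this domain.
G.add_edge("arbitrary detention", "displaced persons", weight=0.9)
G.add_edge("arbitrary detention", "occupying force", weight=0.8)
G.add_edge("displaced persons", "occupying force", weight=0.6)

def scenario_seeds(graph: nx.Graph, top_k: int = 3):
    """Rank concept pairs by relationship strength; each pair seeds a scenario."""
    pairs = sorted(
        ((u, v, d["weight"]) for u, v, d in graph.edges(data=True)),
        key=lambda t: -t[2],
    )
    for u, v, w in pairs[:top_k]:
        yield f"Scenario (weight {w}): prompts probing '{u}' in relation to '{v}'"

for seed in scenario_seeds(G):
    print(seed)
```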

Tarunima Prabhakar

Similarly, last year, when we were trying to figure out whether the existing safety frameworks apply to India or not, we worked with an expert group and did focus group discussions: very labor intensive, a lot of thick ethnographic evidence. What comes out of those conversations are themes. We understood, for example, that sex determination is a concern, and that acid attacks are a concern. Where you could possibly try automation is in then generating prompts based on those themes. One of the challenges when you're looking at Indian languages is that the current large language models aren't very good at generating natural spoken Hindi or spoken Tamil.

So even with those prompts, we sometimes found it easier to just write them ourselves and do the variations ourselves. But we did try the automated step: if this is the theme and this is the sort of persona, can you generate prompts based on that? And that becomes part of your evals. So I think there is that mix of automation and human work that's possible. As LLMs advance, the automation will get better, but I also think you will need that human instinct; that step will be needed. And also, the way safety currently works, to some extent, is a bit of a whack-a-mole band-aid, right?

So once you discover a risk, it gets patched, and then you discover something else. You discover, say, that punctuation in Indian languages can actually jailbreak models, and once you discover that, you can try all sorts of combinations: let's try this symbol, let's try that symbol, and then they fix the issue. Then you discover something else. So I don't think that problem is ever going away; we're never going to get a perfectly safe system. But you need that human insight to do the first-level testing and to recognize: oh, this is new territory that has not yet been taken care of.

You can then use automation to generate more test cases, or to build out your data set.
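Sketched as code, that theme-to-persona-to-prompt pipeline might look like the following, with the model call left as a stand-in (llm) so any provider can be plugged in, and the human review pass kept explicit at the end. The themes and personas echo the examples above; everything else is illustrative.

```python
THEMES = ["sex determination", "acid attacks"]
PERSONAS = ["rural health worker", "teenage user writing in spoken Hindi"]

def llm(instruction: str) -> list[str]:
    # Stand-in for a real model call; here it just echoes the instruction.
    return [f"[draft prompt for: {instruction}]"]

def draft_prompts(themes: list[str], personas: list[str], per_pair: int = 2) -> list[str]:
    """Fan each (theme, persona) pair out into candidate test prompts."""
    drafts = []
    for theme in themes:
        for persona in personas:
            instruction = (
                f"Write {per_pair} natural, colloquial test prompts a "
                f"{persona} might send about {theme}."
            )
            drafts.extend(llm(instruction))
    return drafts

# Human step: reviewers edit or rewrite drafts that sound unnatural in the
# target language before anything enters the eval set.
for draft in draft_prompts(THEMES, PERSONAS):
    print(draft)
```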

Ashwani Sharma

I was just going to add one other thing on automation. From someone else I heard that clustering of outputs turned out to be a very useful way to find classifications of behaviors that were not intuitively obvious when they started evaluating models, and therefore to identify where to concentrate more effort. And human in the loop is a very generalized term, but where in the loop? That will keep changing as we refine things. But I interrupted you.
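That clustering idea can be prototyped in a few lines with scikit-learn: embed a batch of model outputs (TF-IDF here, though a real pipeline would use stronger embeddings) and cluster them to surface behavior classes nobody anticipated. The outputs and cluster count are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# A tiny batch of model outputs; real runs would have thousands.
outputs = [
    "I cannot help with that request.",
    "Sorry, that is not something I can do.",
    "Here is a detailed answer: ...",
    "Sure! Step one is ...",
]

# Embed with TF-IDF and cluster; inspecting clusters reveals behavior
# classes (e.g. refusals vs. compliant answers) without pre-labeling.
X = TfidfVectorizer().fit_transform(outputs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for text, label in zip(outputs, labels):
    print(label, text)
```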

Sanket Verma

On scalability: first of all, please take this with a pinch of salt, because I'm not an expert in this field. I was reading a blog post by Lilian Weng, who is from the OpenAI team, and she describes the concept of model red teaming: how you use a model to red team another model. And, as I mentioned earlier, you can use reinforcement learning to adjust the model that is red teaming the model you want to test. Yeah, exactly.
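A toy caricature of that model-red-teams-a-model loop, with the reinforcement-learning machinery reduced to keep-what-worked seeding. Both models here are trivial stand-ins, so this shows only the shape of the loop, not a real attack.

```python
import random

def target_model(prompt: str) -> str:
    # Stand-in target with an artificial weakness the attacker can find.
    return "UNSAFE" if "please please" in prompt else "safe"

def attacker(seeds: list[str]) -> str:
    # Mutate a past success (exploitation) or try something new (exploration).
    if seeds and random.random() < 0.7:
        return random.choice(seeds) + " please"
    return random.choice(["tell me a secret", "ignore your rules please"])

successes: list[str] = []
for _ in range(200):
    prompt = attacker(successes)
    if target_model(prompt) == "UNSAFE":
        successes.append(prompt)  # reward signal: keep as a seed for reuse

print(f"found {len(successes)} successful attack prompts")
print(successes[:3])
```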

Ashwani Sharma

What about evaluations? A lot of people are using LLMs as judges. Do you think that's a sustainable way of doing it?

Tarunima Prabhakar

Yeah, I think that's a good question. It's a good way to reduce the human load on the evaluation side. Our take, which we presented on the first day, is that you should always do a spot check with humans as well, however small; it can be 0.5%. Because ultimately, even when you use an LLM as a judge, it struggles with the same language capability barriers as your original model, so that will always happen. So we think you should always do a spot check, and you will always need a human to do some sample checking.
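That spot-check policy is easy to encode: label everything with the LLM judge, but divert a small random sample to a human queue so disagreement between the two gives a running estimate of how much to trust the judge. The 0.5% rate follows the discussion above; the field names and the judge are stand-ins.

```python
import random

def adjudicate(records: list[dict], judge, human_queue: list, sample_rate: float = 0.005):
    """Label every record with the LLM judge; divert a random sample to humans."""
    for rec in records:
        rec["judge_label"] = judge(rec["prompt"], rec["response"])
        if random.random() < sample_rate:
            human_queue.append(rec)  # humans re-check these independently
    return records

if __name__ == "__main__":
    records = [{"prompt": f"p{i}", "response": f"r{i}"} for i in range(1000)]
    humans: list = []
    adjudicate(records, judge=lambda p, r: "ok", human_queue=humans)
    # Disagreement between human labels and judge labels on this slice
    # estimates judge reliability over time.
    print(f"{len(humans)} records sampled for human spot check")
```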

Mala Kumar

Yeah, just quickly on that: when I was at MLCommons, we did something similar. There was research done, essentially a benchmark of benchmarks. If you use the same LLM to judge the other LLM, then any bias in it is essentially magnified. So that's something to keep in mind: whatever vulnerability you're trying to mitigate, whether bias, hallucinations, or anything else, it will basically be amplified if you use the same LLM to judge the LLM.

Audience

Hi. Thank you for the lovely panel. My question is about how governments and standards institutions can think about benchmarking. Specifically, I'd like to know your thoughts on standardization, on setting up the right standards for benchmarks, and finally on maintainability, given that these institutions may not have in-house experts who stay on for a long time. How do you think about these questions, especially in the context of, for example, local language elements that are not well understood, and how we benchmark them?

Mala Kumar

I have a lot of thoughts on benchmarks; having built one, it was not easy. One of the things we think about a lot at Humane Intelligence is benchmarking, because we get asked about it so often. It's become the industry darling, I guess because it rises to the moment of the hyper-adaptation and hyper-scale we're seeing with AI. But one thing that comes up in pretty much every conversation we have with organizations is: what exactly are you trying to benchmark? For example, we're potentially working with an organization that works in primary healthcare in Nigeria, and we're trying to benchmark what they're doing there.

So I asked them: are you trying to benchmark for hallucinations in the Yoruba language, or bias in the Hausa language? And they didn't know, literally. All they knew is that somebody told them to build a benchmark for their AI system, so they should go and do that. The problem is what happens if you build a benchmark without starting from AI red teaming or another evaluation type. You may build a benchmark that looks at hallucinations or factuality, however you judge that, but it turns out the real problem with your LLM is bias. And if your benchmark is measuring the wrong thing, you've built something that is computationally very expensive and takes a lot of time, honestly.

The math is kind of murky with benchmarks, I'll be honest. And then you're also not measuring the right thing. So we always recommend starting with red teaming and then identifying the problem space. Once you get to that hyper-focused problem space, then you can do a benchmark and say, comparatively speaking: this is the model's performance against that specific metric. Thank you.
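Sketched as code, that narrow-benchmark-after-red-teaming workflow reduces to running one focused metric across models once red teaming has identified it; score_bias and the model callables below are hypothetical stand-ins.

```python
def score_bias(response: str) -> float:
    # Stand-in for a real bias metric (human-labelled or classifier-based).
    return float("bias" in response.lower())

def run_benchmark(models: dict, prompts: list[str]) -> dict:
    """Compare models on a single, red-team-identified metric."""
    results = {}
    for name, model in models.items():
        scores = [score_bias(model(p)) for p in prompts]
        results[name] = sum(scores) / len(scores)  # mean bias rate
    return results

if __name__ == "__main__":
    # Prompts would come from the red-teamed problem space (e.g. Hausa bias).
    prompts = ["Hausa prompt 1", "Hausa prompt 2"]
    models = {
        "model_a": lambda p: f"answer to {p}",
        "model_b": lambda p: f"bias-laden answer to {p}",
    }
    print(run_benchmark(models, prompts))  # {'model_a': 0.0, 'model_b': 1.0}
```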

Tarunima Prabhakar

Just to add on that: often the sensitivity and the importance of addressing a concern like bias differ across domains. Bias in, say, a maternal health use case can be very problematic in a context where people are trying to use a bot for sex determination, and we've seen this in the real world. Gendered language, on the other hand, is always a problem. And if resources are limited, how you prioritize which concern to address depends absolutely on the context and the specific application. So, yeah, I guess that is to say: just make that list.

What are you trying to measure? I think I heard someone put it as: what is your headline? So, what is it that you're trying to measure? Then figure out your priorities, because you can't measure everything, and build around that. And that is the universal thing about benchmarking: it translates to anything, whether global or a specific regionally contained language or context.

Ashwani Sharma

So just one tiny follow-up, on maintainability, which I asked about. Maybe Sanket, given that you've worked on that: how do you think about maintainability for benchmarks, for example for an institution or government that doesn't have in-house experts but would like to set standards and maintain these benchmarks over time?

Sanket Verma

Yeah, I don't think I have any bright thoughts on this. Sorry.

Mala Kumar

I think we have time for one more question, if it's very quick. Otherwise, we can wrap. Any other final thoughts? No? I guess, just for everyone: everyone has a role in evaluations. Evals, evals, evals. That's unfortunately what all of us have.

Ashwani Sharma

And you have a role in open source.

Mala Kumar

Yeah, and of course. Especially with Claude Code, because now you can make a lot of code with Claude. Anyway, thank you all for coming. Appreciate it. Thank you. Thank you.

Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Sanket Verma, a NumFOCUS board member and technical‑committee participant, opened the panel and is listed as a NumFOCUS representative on the discussion.”

The knowledge base explicitly describes the panel discussion and lists Sanket Verma from NumFOCUS as a panelist, confirming his role and affiliation [S2] and [S8].

Confirmed (high)

“The panel focused on the intersection of AI evaluation, open‑source tooling, and governance of agentic AI contributions.”

Both knowledge-base entries summarize the session as covering AI evaluation, open-source software, and challenges posed by large language models and agentic AI [S2] and [S8].

Additional Context (medium)

“Tarunima Prabhakar (TATL) argued that open‑source evaluation stacks are essential for resource‑constrained regions such as India, and that sharing guardrails and red‑team tools prevents duplication of effort.”

Other sources note that collaboration among nonprofits and NGOs helps avoid duplication of effort and enables shared solutions, providing supporting context for this claim [S74] and [S75].

Additional Context (medium)

“Sanket emphasized that the health of the scientific software stack depends on a vibrant contributor community that supplies datasets, techniques, and ongoing maintenance for both core libraries and their evaluation layers.”

The knowledge base highlights the importance of community participation in aligning innovation with community needs, underscoring the role of contributors in sustaining scientific software ecosystems [S77].

External Sources (77)
S1
AI Innovation in India — -Tarunima Prabhakar- Role: Event moderator/host
S2
Driving Social Good with AI_ Evaluation and Open Source at Scale — -Tarunima Prabhakar: Works at TATL (organization that has been looking at online harms for over six years), focuses on b…
S3
AI Innovation in India — Tarunima Prabhakar highlights the competitive selection process where 50 outstanding students were chosen from 3,500 app…
S4
Driving Social Good with AI_ Evaluation and Open Source at Scale — – Tarunima Prabhakar- Ashwani Sharma
S5
AI for Good Technology That Empowers People — Mala Kumar from Art Park (who requested to be called simply “Mala” as Fred noted) showcased XR applications demonstratin…
S6
Driving Social Good with AI_ Evaluation and Open Source at Scale — – Mala Kumar- Audience – Mala Kumar- Tarunima Prabhakar- Ashwani Sharma – Sanket Verma- Mala Kumar Mala Kumar strongl…
S7
AI for Good Technology That Empowers People — This discussion focused on Edge AI applications and their potential for development in the Global South, hosted as part …
S8
Driving Social Good with AI_ Evaluation and Open Source at Scale — Hello everyone. So my name is Sanket Verma and I serve on the board of directors of Numfocus. Numfocus is a non -profit …
S9
https://dig.watch/event/india-ai-impact-summit-2026/driving-social-good-with-ai_-evaluation-and-open-source-at-scale — And this is also like very applicable if you’re working in industries like because if you have like a huge software stac…
S10
Driving Social Good with AI_ Evaluation and Open Source at Scale — Hello everyone. So my name is Sanket Verma and I serve on the board of directors of Numfocus. Numfocus is a non -profit …
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
Hard power of AI — Open-source platforms enable widespread participation and collaboration. General purpose technologies, as they become us…
S15
IN CONVERSATION WITH MITCHELL BAKER — They see personal AIs and provenance tools as effective ways to combat the spread of misinformation and ensure individua…
S16
DIRECTIVES — – (46) A feedback mechanism should be set up to enable any person to notify the public sector body concerned of any fai…
S17
The intellectual property saga: The age of AI-generated content | Part 1 — The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2 The intellectual property saga: app…
S18
The cognitive cost of AI: Balancing assistance and awareness — The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI …
S19
AI-generated content and IP rights: Challenges and policy considerations — Ownership of IP rights for AI-generated content is a complex issue. Traditional IP laws typically attribute inventorship…
S20
WS #164 Strengthening content moderation through expert input — 1. Language and Cultural Differences: Sanchez acknowledged the difficulties in moderating content across diverse linguis…
S21
Who Watches the Watchers Building Trust in AI Governance — Hibuka argues that technical standards alone are insufficient and that society must democratically determine what levels…
S22
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — In conclusion, the panel highlighted the significant economic value of open-source AI and the requisite for thoughtful, …
S23
GC3B: Mainstreaming cyber resilience and development agenda | IGF 2023 Open Forum #72 — The portal serves as a tool for tracking and analysing global efforts in cyber capacity building. Furthermore, the analy…
S24
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — One speaker raised an interesting point about the potential risks of a “one size fits all” approach to addressing online…
S25
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been propo…
S26
WS #2 Bridging Gaps: AI & Ethics in Combating NCII Abuse — Deepali Liberhan: Thanks, David. I think Karuna has done such a good job of it, but I’m gonna try and add some additiona…
S27
Closing remarks – Charting the path forward — Need for coherent and interoperable policy frameworks to prevent fragmentation while providing clear policy direction th…
S28
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Develop multilingual evaluations and benchmarks that account for diverse language ecosystems
S29
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S30
WS #219 Generative AI Llms in Content Moderation Rights Risks — Audience: I’m Professor Julia Hornley. I’m a professor of internet law at Queen Mary University of London. I’m an academ…
S31
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Current policies often replicate Western standards, ignoring local contexts Demands on policy exist without the buildin…
S32
Driving Social Good with AI_ Evaluation and Open Source at Scale — Maintainers are already overworked, often managing projects in their free time while working in research labs or organiz…
S33
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is invol…
S34
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Good morning, everyone, and thank you for being present. on a Friday morning for the launch of this report, as well as t…
S35
Table of Contents — An important objective of this Rolling Plan is to create awareness of the importance of standards in the context of poli…
S36
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S37
From Technical Safety to Societal Impact Rethinking AI Governanc — The participant calls for extending dataset, model and system cards, as well as evaluation frameworks, to cover multiple…
S38
Towards a Safer South Launching the Global South AI Safety Research Network — Multilingual and Culturally Contextual Evaluation
S39
Driving Social Good with AI_ Evaluation and Open Source at Scale — Mala highlights that open‑source software broadens participation beyond developers, enabling more people to contribute t…
S40
Hard power of AI — Open-source platforms enable widespread participation and collaboration. General purpose technologies, as they become us…
S41
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S42
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — **Larry Wade** described PayPal’s approach to AI integration, positioning AI as an “optimization layer” rather than a re…
S43
AI Development Beyond Scaling: Panel Discussion Report — Open source is currently beneficial but requires careful consideration for future powerful systems
S44
Connecting open code with policymakers to development | IGF 2023 WS #500 — A strong European policy exists on open source During the discussion, the speakers explored various aspects of open sou…
S45
Advancing Scientific AI with Safety Ethics and Responsibility — “Model evaluation and red teamings are essential and we should be doing that.”[101]. Institutionalizing Independent Eva…
S46
Driving Social Good with AI_ Evaluation and Open Source at Scale — Evidence:Described the multi-step pipeline of red teaming requiring human involvement at gap identification, prompt crea…
S47
Driving Social Good with AI_ Evaluation and Open Source at Scale — Mala highlights that open‑source software broadens participation beyond developers, enabling more people to contribute t…
S48
WS #2 Bridging Gaps: AI & Ethics in Combating NCII Abuse — Deepali Liberhan: Thanks, David. I think Karuna has done such a good job of it, but I’m gonna try and add some additiona…
S49
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Develop multilingual evaluations and benchmarks that account for diverse language ecosystems
S50
Towards a Safer South Launching the Global South AI Safety Research Network — Need for multilingual and multicultural evaluation systems: The discussion emphasized developing benchmarks beyond Engli…
S51
Towards a Safer South Launching the Global South AI Safety Research Network — -Need for multilingual and multicultural evaluation systems: The discussion emphasized developing benchmarks beyond Engl…
S52
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S53
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — One of the leading generative AI approaches is the so-called Large Language Models (LLMs), complex models capable of und…
S54
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — A plausible next step is not the emergence of fully autonomous ‘AI diplomats’, but hybrid systems. In these setups, LLMs…
S55
WS #155 Digital Leap- Enhancing Connectivity in the Offline World — The tone of the discussion was largely informative and collaborative. Speakers shared insights and experiences from thei…
S56
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — The discussion maintained a collaborative and optimistic tone throughout, with panelists demonstrating mutual respect an…
S57
Sticking with Start-ups / DAVOS 2025 — The overall tone was informative and optimistic. Panelists spoke candidly about challenges in the startup world but main…
S58
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S59
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue — The discussion maintained a serious, analytical tone throughout, reflecting the gravity of the subject matter. While spe…
S60
WS #219 Generative AI Llms in Content Moderation Rights Risks — The discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep experti…
S61
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S62
Delegated decisions, amplified risks: Charting a secure future for agentic AI — The tone was consistently critical and cautionary throughout, with Whittaker maintaining a technically informed but acce…
S63
YCIG & DTC: Future of Education and Work with advancing tech & internet — The tone was largely collaborative and solution-oriented, with participants building on each other’s ideas. There was a …
S64
WS #278 Digital Solidarity & Rights-Based Capacity Building — The overall tone was collaborative and solution-oriented, with panelists offering constructive ideas and acknowledging c…
S65
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S66
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S67
Collaborative AI Network – Strengthening Skills Research and Innovation — The discussion maintained a collaborative and optimistic tone throughout, with speakers sharing practical experiences an…
S68
Open Forum #58 Safety of journalists online — The tone of the discussion was initially somber when describing the serious threats journalists face, but became more co…
S69
Securing Access to the Internet and Protecting Core Internet Resources in Contexts of Conflict and Crises — The discussion maintained a serious, academic tone throughout, befitting the gravity of the subject matter. The tone was…
S70
Science under siege from AI, integrity of research at risk — AI is rapidlytransformingthe landscape of scientific research, but not always for the better. A growing concern is the p…
S71
Steering the future of AI — He noted that “FAIR has open-sourced around 1,000 research projects over 11 years.” Yann LeCun: Does your philosophy go…
S72
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S73
https://app.faicon.ai/ai-impact-summit-2026/driving-social-good-with-ai_-evaluation-and-open-source-at-scale — I have a lot of thoughts on that with the open weight conversation, but I won’t go into that. One thing I will say is I …
S74
How nonprofits are using AI-based innovations to scale their impact — Collaboration among nonprofits can prevent duplication of efforts and enable shared solutions for common problems
S75
How nonprofits are using AI-based innovations to scale their impact — All three speakers emphasized that bringing organizations together in cohorts creates valuable peer learning opportuniti…
S76
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — Collaboration between donors is crucial for promoting synergies and avoiding duplication of efforts. Donors such as the …
S77
AI as critical infrastructure for continuity in public services — So the participation of the community into that, in ensuring that the innovation and the policy level align with the nee…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mala Kumar
8 arguments, 192 words per minute, 3582 words, 1113 seconds
Argument 1
Open source empowers broader participation in AI evaluation, making tools accessible to many
EXPLANATION
Mala explains that releasing AI evaluation tools as open source lowers barriers for a wider audience to assess AI systems. By making the software freely available, more stakeholders can engage in evaluation activities without needing proprietary resources.
EVIDENCE
She states that open sourcing evaluation software is low-stakes and empowers more people to evaluate the systems that affect their lives, emphasizing minimal downside and significant upside for broader participation [262-266].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source platforms are highlighted as enabling widespread participation, collaboration and reducing duplicated effort, especially for global-majority contexts [S14][S7][S23].
MAJOR DISCUSSION POINT
Open source empowerment
AGREED WITH
Sanket Verma, Tarunima Prabhakar
Argument 2
Proposes an interoperable “eval‑card” standard to enable reproducible, comparable evaluations
EXPLANATION
Mala proposes creating a standardized evaluation card that can be shared and reused across projects. An interoperable format would allow organizations to plug in the card and replicate evaluations consistently.
EVIDENCE
She describes the idea of a standardized “eval-card” that could be uploaded into software to replicate evaluations, noting the need for standardized outputs to enable apples-to-apples comparison [98-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standardised evaluation cards are proposed to allow interoperable comparisons across organisations and contexts [S8][S2].
MAJOR DISCUSSION POINT
Eval‑card standard
AGREED WITH
Tarunima Prabhakar
Argument 3
Lack of provenance and credentialing for AI‑written code burdens maintainers and blurs authorship
EXPLANATION
Mala highlights that AI‑generated pull requests obscure who wrote the code, making it hard to assess credibility and maintain quality. This lack of provenance undermines the credentialing systems that motivate contributors.
EVIDENCE
She explains that generating large amounts of AI-generated code diminishes the value of pull-request credentials, creates extra workload for maintainers, and raises questions about provenance and tagging of AI-written contributions [187-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on provenance tools and IP challenges note the difficulty of attributing credit for AI-generated contributions [S15][S19].
MAJOR DISCUSSION POINT
Provenance and credentialing issues
AGREED WITH
Sanket Verma
DISAGREED WITH
Sanket Verma
Argument 4
Uses ontology and synthetic data generation to make red‑team scenarios reproducible and scalable
EXPLANATION
Mala describes an ontological approach that maps problem spaces and uses synthetic data to generate consistent red‑team scenarios. This method aims to make evaluations repeatable across models and contexts.
EVIDENCE
She outlines an ontology-based approach that maps relationships in the problem space and uses synthetic data (seed prompts, narrative creation) to build reproducible scenarios, facilitating replication when models change [290-295].
MAJOR DISCUSSION POINT
Ontology‑based scaling
Argument 5
Multilingual prompts (e.g., code‑mixing) reveal difficulties in defining acceptable responses across cultures
EXPLANATION
Mala points out that prompting models with mixed languages (Spanglish, different scripts) exposes challenges in determining what constitutes an acceptable answer in varied cultural contexts. This complicates red‑team evaluation across multilingual settings.
EVIDENCE
She notes that prompting in two languages or mixed scripts is a common adversarial technique and that multicultural prompts make it hard to decide what response is acceptable, highlighting the cultural inference challenges [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Testing with mixed-language prompts raises cultural inference challenges and makes it hard to decide what responses are acceptable [S20][S8].
MAJOR DISCUSSION POINT
Multilingual evaluation challenges
AGREED WITH
Tarunima Prabhakar, Audience
Argument 6
Emphasizes that program staff and domain experts must participate in evaluation; it is not purely a technical task
EXPLANATION
Mala stresses that evaluation work should involve non‑technical program staff and subject‑matter experts, not just developers. Their insights are essential for defining evaluation questions and ensuring relevance to real‑world use cases.
EVIDENCE
She explains that everyone, including program staff and designers, has a space in the evaluation stack, and that non-technical contributions are as valuable as technical ones, citing examples of ambitious program staff expectations versus cautious technical teams [235-242].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists stress that non-technical program staff need accessible evaluation processes and that democratic input is essential for AI governance [S2][S21].
MAJOR DISCUSSION POINT
Inclusive evaluation participation
AGREED WITH
Tarunima Prabhakar
Argument 7
Open source software and open data/weight are distinct; open‑sourcing evaluation code does not automatically make generated data open
EXPLANATION
Mala points out that releasing evaluation tools under an open‑source license does not imply that the data produced by those tools is also open, and conflating the two can lead to misunderstandings about transparency and accessibility.
EVIDENCE
She explains that open source software versus open data are not one-to-one, noting that “open source software… doesn’t mean that the data that’s produced with it is open data” and that this distinction can cause confusion [266-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses clarify that open-sourcing code does not imply open data or model weights, highlighting a common misconception [S9][S22].
MAJOR DISCUSSION POINT
Open source vs open data confusion
Argument 8
Benchmarking should follow red‑team evaluations to ensure metrics target the correct problem space
EXPLANATION
Mala recommends that organizations first conduct red‑team exercises to identify failure points before designing benchmarks, otherwise benchmarks may measure irrelevant aspects and waste resources.
EVIDENCE
She states “we always recommend to start with red teaming… then you can do a benchmark” indicating the proper workflow for effective benchmarking [355-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations to start with red-team exercises before designing benchmarks are echoed in multiple panel discussions [S2][S8].
MAJOR DISCUSSION POINT
Benchmark design workflow
AGREED WITH
Ashwani Sharma
Tarunima Prabhakar
5 arguments, 181 words per minute, 1600 words, 529 seconds
Argument 1
Open‑source red‑team tools reduce duplication of effort, especially for resource‑constrained global‑majority contexts
EXPLANATION
Tarunima argues that releasing red‑team software under an open‑source license prevents multiple organizations from rebuilding the same tools, which is especially valuable for teams with limited resources in the global majority.
EVIDENCE
She mentions that Humane Intelligence will open its AI red-team software under an open-source license, increasing accessibility and avoiding duplicated effort across organizations [33-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source red-team stacks help avoid duplicated work and are especially valuable for organisations with limited resources [S23][S8][S14].
MAJOR DISCUSSION POINT
Reducing duplication via open source
AGREED WITH
Mala Kumar, Sanket Verma
Argument 2
Calls for contextual model cards that capture domain‑specific safety requirements
EXPLANATION
Tarunima highlights the need for model cards that reflect the specific safety and cultural needs of particular applications, rather than one‑size‑fits‑all safeguards. Contextual model cards would document domain‑specific constraints and expectations.
EVIDENCE
She recounts a case where an HIV survivor support service needed to allow sexual health conversations, which default safety guards would block, illustrating the need for contextual model cards that can be adjusted to local requirements [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for domain-specific model cards reflect concerns about one-size-fits-all safety mechanisms and the need for context-aware safeguards [S24][S7].
MAJOR DISCUSSION POINT
Contextual model cards
AGREED WITH
Mala Kumar, Audience
Argument 3
Automates prompt creation from thematic inputs while retaining essential human insight
EXPLANATION
Tarunima describes a semi‑automated workflow where thematic topics are used to generate prompts via LLMs, but human reviewers still refine and validate the outputs to ensure relevance and safety.
EVIDENCE
She explains that after identifying themes, they attempt to have LLMs generate prompts based on those themes, while acknowledging that human instinct remains necessary for quality and safety [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Semi-automated prompt generation that retains human review aligns with recommendations for human-in-the-loop evaluation pipelines [S2].
MAJOR DISCUSSION POINT
Hybrid automation for prompt generation
AGREED WITH
Mala Kumar, Ashwani Sharma
Argument 4
Certain applications (e.g., HIV survivor support) may need to relax default safety guards to meet local cultural needs
EXPLANATION
Tarunima notes that some use‑cases require conversations that default model safety filters would block, and stakeholders may intentionally want to override those safeguards to serve community needs.
EVIDENCE
She provides the example of a service for HIV survivors and adolescents that wants to discuss sexual health, which standard models deem unsafe, showing a scenario where default safeguards would be counter-productive [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adjusting safety filters for culturally relevant use cases is discussed as a need for flexible, context-specific interventions [S24][S20].
MAJOR DISCUSSION POINT
Relaxing safeguards for cultural relevance
Argument 5
Open‑source evaluation tools must be designed for accessibility by non‑technical program staff
EXPLANATION
Tarunima emphasizes that many NGOs lack in‑house technical expertise, so evaluation software should have user‑friendly interfaces and documentation that enable program staff to use the tools directly without deep technical knowledge.
EVIDENCE
She asks “how do you make all of these processes accessible to non-technical audiences?” and gives the example of program staff running a nutrition program needing accessible tools [115-117].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for user-friendly interfaces and documentation so that program staff can run evaluations without deep technical expertise is highlighted [S2].
MAJOR DISCUSSION POINT
Accessibility for non‑technical users
AGREED WITH
Mala Kumar
Sanket Verma
5 arguments, 182 words per minute, 1592 words, 522 seconds
Argument 1
Strong community behind open‑source projects supplies datasets, techniques, and sustained support for evaluations
EXPLANATION
Sanket emphasizes that the vibrant communities surrounding major open‑source scientific libraries provide essential resources such as data sets, evaluation techniques, and ongoing maintenance, which are crucial for AI evaluation efforts.
EVIDENCE
He states that the projects he uses have wonderful communities that can contribute inputs, data sets, techniques, and that the community plays a vital role in sustaining and moving projects forward [46-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vibrant open-source communities are noted as providing essential data, techniques and ongoing maintenance for AI evaluation efforts [S14][S23].
MAJOR DISCUSSION POINT
Community contributions
AGREED WITH
Mala Kumar, Tarunima Prabhakar
Argument 2
Massive AI‑generated pull requests create maintenance overhead; policies are needed for non‑human contributions
EXPLANATION
Sanket shares recent incidents where extremely large AI‑generated pull requests overwhelmed maintainers, highlighting the need for clear policies on accepting non‑human contributions to protect project sustainability.
EVIDENCE
He recounts two stories: an OCaml PR with 13,000 lines submitted by a user who generated code with ChatGPT, and an agentic AI PR to Matplotlib that was closed due to lack of policy for non-human contributions, both illustrating maintenance burdens and policy gaps [151-180].
MAJOR DISCUSSION POINT
AI‑generated PR challenges
DISAGREED WITH
Mala Kumar
Argument 3
Suggests adapting existing adversarial machine‑learning techniques for LLM evaluation
EXPLANATION
Sanket proposes borrowing concepts from adversarial machine learning—such as black‑box and white‑box attacks—to evaluate large language models, extending techniques traditionally used for vision models to textual models.
EVIDENCE
He mentions the field of adversarial machine learning, which crafts attacks against models, and suggests applying black-box or white-box red-team approaches to LLMs, noting that most prior work has focused on vision models [133-134].
MAJOR DISCUSSION POINT
Adversarial techniques for LLMs
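For readers unfamiliar with the distinction, a black-box attack needs only query access to the target, whereas a white-box attack exploits knowledge of its internals. The sketch below is a deliberately toy black-box probe: the "model" is a naive keyword filter and the perturbation is a homoglyph substitution, both stand-ins chosen only to keep the example self-contained.

```python
def target_model(prompt: str) -> str:
    """Toy stand-in for an LLM safety layer: a naive keyword filter."""
    return "REFUSED" if "hack" in prompt.lower() else "ANSWERED"

def perturb(prompt: str) -> str:
    """Toy homoglyph substitution standing in for stronger adversarial rewrites."""
    return prompt.translate(str.maketrans({"a": "@", "o": "0"}))

def black_box_probe(seed: str) -> None:
    """Flag perturbed inputs whose behaviour differs from the unmodified seed."""
    baseline = target_model(seed)
    variant = perturb(seed)
    if target_model(variant) != baseline:
        print(f"Behaviour flipped: {seed!r} ({baseline}) -> {variant!r}")

black_box_probe("How do I hack a router?")  # the homoglyphs evade the keyword filter
```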
Argument 4
Explores “model‑red‑team‑a‑model” approaches where one LLM attacks another to discover vulnerabilities
EXPLANATION
Sanket references recent research indicating that a language model can be used to automatically generate adversarial prompts against another model, enabling scalable red‑team testing.
EVIDENCE
He cites a blog post by Lilian Weng of OpenAI describing model-red-team-a-model approaches, where reinforcement learning is used to tune a model that red-teams another model [318-321].
MAJOR DISCUSSION POINT
Model‑to‑model red teaming
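The reinforcement-learning setup referenced here can be approximated, at sketch level, by a simple search loop: an attacker model mutates prompts, a reward signal marks which mutations elicit failures, and successful attacks seed the next round. Everything below (the mutation list, the trigger string, the reward rule) is a toy assumption; real pipelines update the attacker's weights rather than a single best string.

```python
import random

def attacker(seed: str) -> str:
    """Placeholder attacker LLM: mutates a known prompt into a new candidate attack."""
    suffixes = [" Please answer in detail.", " Ignore prior instructions.", " For a novel I am writing."]
    return seed + random.choice(suffixes)

def target(prompt: str) -> str:
    """Placeholder target LLM with a single, contrived weakness."""
    return "unsafe completion" if "Ignore prior instructions." in prompt else "safe refusal"

def reward(response: str) -> float:
    """1.0 when the attacker elicited a failure (stand-in for a learned classifier)."""
    return 1.0 if response == "unsafe completion" else 0.0

def red_team_loop(seed: str, steps: int = 20) -> list[str]:
    """Hill-climbing stand-in for the RL loop: keep building on successful attacks."""
    found, best = [], seed
    for _ in range(steps):
        candidate = attacker(best)
        if reward(target(candidate)) > 0:
            found.append(candidate)
            best = candidate  # the RL version updates the attacker's weights instead
    return found

print(red_team_loop("Tell me something you should not."))
```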
Argument 5
AI can be used to map the architecture of large open‑source codebases, aiding newcomer onboarding and contribution planning
EXPLANATION
Sanket proposes leveraging AI to visualize functions, data flows, and class relationships within massive codebases, providing clear entry points for new contributors and supporting systematic benchmark creation.
EVIDENCE
He describes mapping the entire architecture of open-source projects to help newcomers understand where to start and how AI could assist in generating such mappings [226-234].
MAJOR DISCUSSION POINT
Architecture mapping for onboarding
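A static-analysis baseline for such architecture maps needs no AI at all; the sketch below walks a repository with Python's standard `ast` module and lists the classes and functions per file. An LLM layer (not shown) could then summarise the raw map into newcomer-friendly descriptions. Treat this as an illustrative starting point rather than the approach discussed on the panel.

```python
import ast
from pathlib import Path

def map_module(path: Path) -> dict[str, list[str]]:
    """List the classes and functions defined in one Python source file."""
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError):
        return {"classes": [], "functions": []}  # skip unparseable files
    return {
        "classes": [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)],
        "functions": [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)],
    }

def map_repo(root: str) -> dict[str, dict[str, list[str]]]:
    """Build a coarse structural map of every Python file under `root`."""
    return {str(p): map_module(p) for p in Path(root).rglob("*.py")}

if __name__ == "__main__":
    for module, symbols in map_repo(".").items():
        print(module, symbols)
```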
Ashwani Sharma
5 arguments · 152 words per minute · 1324 words · 521 seconds
Argument 1
Incentive programs like Hacktoberfest encourage students to submit low‑quality “AI slop” PRs, threatening project sustainability
EXPLANATION
Ashwani points out that hackathon‑style contribution incentives have led many students to use generative code tools to submit large numbers of low‑quality pull requests, creating extra work for maintainers and risking project health.
EVIDENCE
He describes Hacktoberfest prompting students to submit multiple PRs, cites the Codot library as a top source of AI-slop PRs, and notes maintainers’ complaints that such contributions are unsustainable [198-208].
MAJOR DISCUSSION POINT
Hacktoberfest and AI‑slop PRs
Argument 2
Recommends ontology‑based mapping of problem spaces to create systematic, repeatable benchmarks
EXPLANATION
Ashwani suggests that visualising the entire architecture of a large code base can help newcomers understand where to contribute and can be used to create systematic benchmarks by mapping functions, data flows, and class relationships.
EVIDENCE
He discusses mapping the entire architecture of open-source projects to provide clear pictures of functions, data flows, and class connections, which can aid onboarding and benchmark creation [226-234].
MAJOR DISCUSSION POINT
Architecture mapping for benchmarks
AGREED WITH
Mala Kumar
Argument 3
Applies clustering to identify behavior categories, focusing effort where it matters most
EXPLANATION
Ashwani notes that clustering techniques can group model behaviours into categories, allowing teams to prioritize evaluation resources on the most problematic clusters.
EVIDENCE
He mentions that clustering proved useful for finding different classifications of behaviours that were not obvious initially, helping to concentrate effort on critical areas [313-315].
MAJOR DISCUSSION POINT
Clustering for focused evaluation
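One plausible realisation of this idea, sketched below under the assumption that red-team transcripts are available as plain text, is to vectorise model responses and cluster them so reviewers can triage whole behaviour groups instead of individual outputs. TF-IDF and k-means (via scikit-learn) are stand-ins here; a production pipeline would more likely cluster sentence embeddings.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy transcripts of model behaviours gathered during red-teaming.
responses = [
    "I cannot help with that request.",
    "Sorry, that request is not something I can assist with.",
    "Here is the financial advice you asked for...",
    "Sure, the medical dosage you want is...",
    "I must refuse this request.",
    "Here are detailed instructions as requested...",
]

X = TfidfVectorizer().fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for text, label in zip(responses, labels):
    print(label, text)  # reviewers then inspect the most problematic cluster first
```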
Argument 4
Stresses rigorous human‑in‑the‑loop review, prioritizing caution over speed in safety‑critical deployments
EXPLANATION
Ashwani argues that even when using LLMs as judges, a human spot‑check is essential to ensure safety, and that caution should be favoured over rapid, fully automated evaluation pipelines.
EVIDENCE
He emphasizes that human-in-the-loop must be done rigorously, that caution is better than speed, and that even with LLM judges a small human spot-check remains necessary [278-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for democratic input and human oversight in AI safety echo the emphasis on cautious, human-in-the-loop review [S21][S2].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop safety
AGREED WITH
Mala Kumar, Tarunima Prabhakar
Argument 5
Using LLMs as judges for evaluations may amplify biases and is not a sustainable approach without human oversight
EXPLANATION
Ashwani questions the long‑term viability of relying on LLMs to evaluate other models, noting that such practices can magnify existing biases and that human spot‑checks remain essential for reliable assessment.
EVIDENCE
He asks “What about evaluations? … using LLMs as judges, is that sustainable?” and Mala later notes that using the same LLM as a judge can magnify bias, highlighting the sustainability concerns [322-323][330-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Using the same LLM as a judge can magnify bias, reinforcing the need for human spot-checks and broader governance input [S21][S2].
MAJOR DISCUSSION POINT
Limitations of LLM judges
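A simple operational guard against this failure mode, sketched below with made-up labels, is to keep re-labelling a random sample of judged outputs by hand and tracking the judge's agreement rate; when agreement drops below a chosen threshold, the judge's rubric (or the judge model itself) is audited. The 90% threshold is an arbitrary illustration, not a recommendation from the panel.

```python
def agreement_rate(judge_labels: list[str], human_labels: list[str]) -> float:
    """Fraction of spot-checked items where the LLM judge matches human reviewers."""
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(human_labels)

# Hypothetical spot-check: humans re-label a small random sample of judged outputs.
judge = ["safe", "unsafe", "safe", "safe", "unsafe"]
humans = ["safe", "unsafe", "unsafe", "safe", "unsafe"]

rate = agreement_rate(judge, humans)
print(f"judge-human agreement: {rate:.0%}")
if rate < 0.9:  # threshold chosen purely for illustration
    print("Judge disagrees too often; audit its rubric or swap judge models.")
```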
Audience
3 arguments · 189 words per minute · 515 words · 162 seconds
Argument 1
Open source scaling introduces security and quality risks such as malicious code contributions, requiring safeguards
EXPLANATION
The audience raises concerns that while open source can accelerate development, it also opens the door to low‑quality or malicious code being merged, creating additional vulnerabilities that need to be addressed through policies and safeguards.
EVIDENCE
The audience asks about the risks of the open source approach, specifically mentioning “bad code being added on” and inquiring about other loopholes beyond that issue [258-260].
MAJOR DISCUSSION POINT
Risks of open source scaling
DISAGREED WITH
Mala Kumar
Argument 2
Tools are needed to automate and scale red‑team pipelines while retaining human oversight
EXPLANATION
The audience seeks practical solutions for scaling red‑team activities, emphasizing the need for automation in prompt creation, evaluation, and other pipeline steps, but also stresses that human judgment must remain part of the process.
EVIDENCE
The audience questions how to speed up different parts of the red-team pipeline, asking for tips on tools that could help scale continuous red-team efforts while maintaining human-in-the-loop oversight [282-289].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions stress the importance of scaling red-team pipelines with automation while preserving human judgment and oversight [S2][S8].
MAJOR DISCUSSION POINT
Scaling red teaming
Argument 3
Standardized, maintainable benchmarks are essential for local‑language AI, especially for institutions lacking expertise
EXPLANATION
The audience highlights the challenge for governments and standard bodies to create and sustain benchmarks for AI systems in local languages, noting the need for clear standards and maintenance processes when in‑house expertise is limited.
EVIDENCE
The audience asks about thoughts on standardization, benchmarking, and maintainability for local language elements, emphasizing the difficulty for institutions without long-term experts [335-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for clear, maintainable benchmark standards for local-language AI is highlighted in calls for problem-driven benchmarking and context-specific evaluation frameworks [S2][S8][S24].
MAJOR DISCUSSION POINT
Benchmark standardization and maintainability
AGREED WITH
Mala Kumar, Tarunima Prabhakar
Agreements
Agreement Points
Open source lowers barriers and empowers broader participation in AI evaluation and red‑team tooling
Speakers: Mala Kumar, Sanket Verma, Tarunima Prabhakar
Open source empowers broader participation in AI evaluation, making tools accessible to many
Strong community behind open‑source projects supplies datasets, techniques, and sustained support for evaluations
Open‑source red‑team tools reduce duplication of effort, especially for resource‑constrained global‑majority contexts
All three panelists highlighted that releasing AI evaluation and red-team software as open source removes technical and financial barriers, enabling a wider range of contributors and organisations to take part in AI safety work [262-266][46-50][33-38].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with observations that open-source tools broaden participation beyond developers and is supported by European open-source policy frameworks [S44] as well as practitioner testimony highlighting community-driven red-team efforts [S39][S40].
AI‑generated pull requests create provenance and credentialing problems that increase maintainer workload; policies are needed
Speakers: Mala Kumar, Sanket Verma
Lack of provenance and credentialing for AI‑written code burdens maintainers and blurs authorship
Massive AI‑generated pull requests create maintenance overhead; policies are needed for non‑human contributions
Both speakers noted that large, AI-generated contributions obscure who wrote the code, strain maintainers, and call for clear policies on accepting non-human PRs [187-196][151-180].
POLICY CONTEXT (KNOWLEDGE BASE)
The burden of AI-generated pull requests on over-worked maintainers and the need for clear provenance and credentialing policies are documented in the “Driving Social Good with AI – Evaluation and Open Source at Scale” report [S32].
Human‑in‑the‑loop oversight is essential for safe AI evaluation, regardless of automation level
Speakers: Mala Kumar, Ashwani Sharma, Tarunima Prabhakar
Emphasizes that program staff and domain experts must participate in evaluation; it is not purely a technical task
Stresses rigorous human‑in‑the‑loop review, prioritizing caution over speed in safety‑critical deployments
Automates prompt creation from thematic inputs while retaining essential human insight
All three agreed that even with automated tools, a human review step remains crucial to ensure safety and relevance, and that non-technical staff should be involved in the evaluation process [274-276][278-281][326-328].
Standardised documentation (eval‑cards / model‑cards) is needed for reproducible, context‑aware evaluations
Speakers: Mala Kumar, Tarunima Prabhakar
Proposes an interoperable “eval‑card” standard to enable reproducible, comparable evaluations
Calls for contextual model cards that capture domain‑specific safety requirements
Both panelists advocated for a common, structured format, whether an eval-card or a contextual model-card, to capture evaluation criteria and safety constraints, facilitating reuse and comparison across projects [98-103][124-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for standardized eval-cards and model-cards echo policy pushes for interoperable AI standards, as outlined in the ICT standards promotion document and the recommendation to extend documentation to multilingual contexts [S35][S37].
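No eval-card standard exists yet, so the following dataclass is purely one hypothetical shape such a record could take; the field names are illustrative, chosen to capture the contextual details (languages, intentionally relaxed safeguards, prompt provenance) that the panelists argued must travel with an evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class EvalCard:
    """Hypothetical 'eval-card': a structured record making an evaluation reproducible."""
    name: str
    languages: list[str]
    domain: str
    safety_constraints: list[str]   # e.g. filters intentionally relaxed, and why
    prompt_source: str              # manual, LLM-assisted, or hybrid
    human_review: bool = True
    metrics: dict[str, float] = field(default_factory=dict)

card = EvalCard(
    name="sexual-health-chatbot-v1",
    languages=["hi", "en", "hi-en code-mixed"],
    domain="adolescent health",
    safety_constraints=["default sexual-content filter relaxed for clinical topics"],
    prompt_source="hybrid",
    metrics={"refusal_rate": 0.12},
)
print(card)
```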
Red‑team exercises should precede benchmark design to ensure metrics target the right problem space
Speakers: Mala Kumar, Ashwani Sharma
Benchmarking should follow red‑team evaluations to ensure metrics target the correct problem space
Recommends ontology‑based mapping of problem spaces to create systematic, repeatable benchmarks
Both emphasized that identifying failure points through red-team work informs the creation of meaningful benchmarks, avoiding wasted effort on irrelevant metrics [355-357][290-295].
Multilingual and cultural contexts complicate AI evaluation and require tailored approaches
Speakers: Mala Kumar, Tarunima Prabhakar, Audience
Multilingual prompts (e.g., code‑mixing) reveal difficulties in defining acceptable responses across cultures
Calls for contextual model cards that capture domain‑specific safety requirements
Standardized, maintainable benchmarks are essential for local‑language AI, especially for institutions lacking expertise
The panel and audience highlighted that mixed-language inputs and cultural nuances make it hard to set universal safety criteria, underscoring the need for language-specific benchmarks and model cards [135-138][124-130][335-339].
POLICY CONTEXT (KNOWLEDGE BASE)
The necessity of multilingual, culturally aware evaluation is reflected in inclusive AI policy discussions and the Global South AI Safety Research Network, which stress local context in standards and evaluation frameworks [S31][S37][S38].
Evaluation tools must be accessible to non‑technical programme staff and NGOs
Speakers: Tarunima Prabhakar, Mala Kumar
Open‑source evaluation tools must be designed for accessibility by non‑technical program staff
Emphasizes that program staff and domain experts must participate in evaluation; it is not purely a technical task
Both stressed that many NGOs lack deep technical capacity, so tools should have user-friendly interfaces and documentation to enable programme staff to run evaluations directly [115-117][235-242].
Similar Viewpoints
Both propose ontology‑driven methods to structure problem spaces, generate synthetic data, and enable repeatable evaluations across models [290-295].
Speakers: Mala Kumar, Ashwani Sharma
Uses ontology‑based approaches to make red‑team scenarios reproducible and scalable
Uses ontology‑based mapping of problem spaces to create systematic, repeatable benchmarks
Both see value in semi‑automated pipelines where LLMs generate prompts or documentation, but human review remains essential to ensure relevance and safety [300-304][124-130].
Speakers: Mala Kumar, Tarunima Prabhakar
Automates prompt creation from thematic inputs while retaining essential human insight
Calls for contextual model cards that capture domain‑specific safety requirements
Both raise concerns about risks inherent in open‑source scaling—whether code quality or benchmark sustainability—and call for safeguards and standards [258-260][335-339].
Speakers: Mala Kumar, Audience
Open source scaling introduces security and quality risks such as malicious code contributions, requiring safeguards
Standardized, maintainable benchmarks are essential for local‑language AI, especially for institutions lacking expertise
Unexpected Consensus
Despite concerns about security risks of open‑source scaling, panelists and audience agree that open‑source evaluation tools are overall low‑risk and beneficial
Speakers: Mala Kumar, Audience
Open source empowerment has minimal downside and significant upside for broader participation
Open source scaling introduces security and quality risks such as malicious code contributions, requiring safeguards
While the audience highlighted potential vulnerabilities of open-source contributions, Mala emphasized that the benefits outweigh the risks, indicating a shared belief that open-source approaches can be managed safely [262-266][258-260].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus matches the balanced open-science security stance advocating tiered access and nuanced governance for powerful models, as well as observations that open-source remains beneficial while requiring future safeguards [S41][S43][S40].
Both technical (Mala, Ashwani) and non‑technical (Tarunima) speakers concur that human oversight cannot be fully replaced by LLM judges
Speakers: Mala Kumar, Ashwani Sharma, Tarunima Prabhakar
Human‑in‑the‑loop oversight is essential for safe AI evaluation
Using LLMs as judges may amplify bias and is not sustainable without human checks
Hybrid automation for prompt generation still requires human spot‑checks
Even though automation was discussed as a way to scale red-team pipelines, all three agreed that human validation remains indispensable, which was not initially anticipated given the push for automation [274-276][322-323][326-328].
Overall Assessment

The panel showed strong convergence on the value of open‑source, community‑driven AI evaluation, the necessity of clear policies for AI‑generated contributions, and the indispensable role of human oversight, especially in multilingual and domain‑specific contexts. Consensus was also evident on methodological best practices such as red‑team first, ontology‑based scenario design, and the need for standardized documentation.

High consensus across technical and non‑technical participants, indicating a shared understanding that open‑source tools, when coupled with robust governance and human involvement, can advance safe AI deployment while addressing multilingual and resource‑constrained challenges.

Differences
Different Viewpoints
Perceived risk of open‑source scaling for AI evaluation tools
Speakers: Audience, Mala Kumar
Open source scaling introduces security and quality risks such as malicious code contributions, requiring safeguards
Open source empowerment is low‑stakes, with minimal downside
The audience raises concerns that scaling open-source AI evaluation could bring security vulnerabilities and low-quality contributions [258-260], while Mala argues that open-sourcing evaluation software is low-stakes, offering more upside than downside and posing minimal risk [262-266].
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about the risk of scaling open-source AI evaluation tools are highlighted in governance debates that stress the need for differentiated oversight as capabilities grow [S41][S43].
Approach to handling AI‑generated pull requests and need for policies
Speakers: Sanket Verma, Mala Kumar
Massive AI‑generated pull requests create maintenance overhead; policies are needed for non‑human contributions
Lack of provenance and credentialing for AI‑written code burdens maintainers and blurs authorship
Sanket recounts incidents where huge AI-generated PRs overwhelmed maintainers and calls for explicit policies on non-human contributions [151-180], whereas Mala emphasizes the problem of missing provenance and credentialing for AI-written code, highlighting the difficulty of tracking authorship and maintaining credibility [187-196].
POLICY CONTEXT (KNOWLEDGE BASE)
The handling of AI-generated pull requests and the call for explicit policies are directly addressed in the “Driving Social Good with AI” report, which details maintainer burden and recommends policy interventions [S32].
Unexpected Differences
Audience’s security concerns versus panel’s view of low risk in open‑source AI evaluation
Speakers: Audience, Mala Kumar
Open source scaling introduces security and quality risks such as malicious code contributions, requiring safeguards
Open source empowerment is low‑stakes, with minimal downside
While the audience explicitly asks about additional loopholes beyond bad code [258-260], Mala responds that open-sourcing evaluation tools is a low-stakes move with minimal downside, indicating a mismatch between perceived and actual risk assessments.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between audience security worries and panelists’ low-risk assessment mirrors discussions on balancing open-science benefits with security safeguards, advocating tiered access and nuanced risk evaluation [S41][S43].
Overall Assessment

The discussion shows moderate disagreement centered on the perceived risks of open‑source AI evaluation (security vs low‑stakes) and on how to manage AI‑generated contributions (policy versus provenance tagging). However, there is strong consensus on the need for human oversight in evaluation and on the shared objective of scaling red‑team efforts, even though participants propose varied technical pathways.

Moderate disagreement: divergent views on risk perception and policy mechanisms could affect the speed and robustness of open‑source AI evaluation adoption, but broad agreement on human‑in‑the‑loop safety and scaling goals mitigates potential fragmentation.

Partial Agreements
All three speakers share the goal of scaling red‑team pipelines but propose different technical routes: Mala suggests an ontology‑based mapping of problem spaces and synthetic data generation [290-295]; Tarunima describes a hybrid workflow that generates prompts from themes using LLMs with human refinement [300-304]; Ashwani recommends clustering model behaviours to prioritize evaluation effort [313-315].
Speakers: Mala Kumar, Tarunima Prabhakar, Ashwani Sharma
Uses ontology‑based approaches to make red‑team scenarios reproducible and scalable
Automates prompt creation from thematic inputs while retaining essential human insight
Applies clustering to identify behavior categories, focusing effort where it matters most
The panel concurs that, despite automation, human oversight remains crucial: Tarunima stresses a mandatory spot‑check [326-328]; Ashwani emphasizes rigorous human‑in‑the‑loop review and prioritising caution [278-281]; Mala warns that using the same LLM as a judge can amplify bias, underscoring the need for human validation [330-334].
Speakers: Tarunima Prabhakar, Ashwani Sharma, Mala Kumar
Always include a human spot‑check even when using LLMs as judges
Human‑in‑the‑loop review is essential; caution should outweigh speed
Using the same LLM as judge can magnify bias, so human oversight is needed
Takeaways
Key takeaways
Open‑source tools for AI red‑teaming and evaluation enable broader participation, reduce duplicated effort, and are especially valuable for resource‑constrained, global‑majority contexts.
Strong community involvement is essential for sustaining open‑source projects, providing datasets, techniques, and ongoing maintenance.
AI‑generated pull requests (AI‑slop PRs) create significant maintenance overhead; clear policies and provenance tracking are needed for non‑human contributions.
Standardized evaluation artifacts such as “eval‑cards” or contextual model cards are needed to make benchmarks reproducible and comparable across domains and languages.
Multicultural and multilingual considerations are critical; safeguards may need to be adapted to local cultural norms and language nuances.
Scaling red‑teaming requires a mix of automation (ontology‑based scenario generation, synthetic data, clustering) and human expertise, with continuous human‑in‑the‑loop oversight.
Non‑technical stakeholders (program staff, domain experts) must be involved in the evaluation process; it is not solely a technical task.
Caution should be prioritized over speed in safety‑critical deployments, and human spot‑checks should complement any automated judging by LLMs.
Resolutions and action items
Humane Intelligence to release its AI red‑teaming software under an open‑source license later this year.
Mala Kumar’s team to explore an interoperable “eval‑card” standard for reproducible evaluations.
Community encouraged to contribute to open‑source red‑teaming tools and to map large code‑bases using AI‑assisted architecture visualisation.
Discussion of policy development for handling AI‑generated or agentic contributions to open‑source projects (e.g., NumFOCUS, GitHub).
Unresolved issues
Effective methods to scale the full red‑teaming pipeline while maintaining quality of human insight.
How governments and standard bodies should define and maintain benchmarks for local language models without in‑house expertise.
Best practices for provenance, credentialing, and attribution of AI‑generated code in pull requests.
Balancing open‑source openness with the need for “open‑weight” (open data/model) considerations in evaluations.
Determining appropriate safety guardrails for culturally sensitive applications (e.g., sexual health discussions) and how to document divergent requirements.
Sustainable use of LLMs as judges without amplifying existing biases.
Suggested compromises
Even when using LLMs as judges, retain a small percentage of human spot‑checks to verify judgments.
Adopt a mixed approach: automate prompt generation and scenario creation, but keep human experts to define problem spaces and validate outputs.
Implement policies that allow AI‑generated PRs but require clear labeling and provenance, balancing openness with maintainability.
Thought Provoking Comments
AI red teaming is a structured, scenario‑driven approach borrowed from cybersecurity, where diverse subject‑matter experts probe models to uncover failure points before building guardrails.
She reframes AI evaluation away from standard benchmarks toward a proactive, interdisciplinary security mindset, introducing a fresh methodology for the group.
Shifted the conversation from generic evaluation metrics to concrete, human‑centric testing practices; prompted others to discuss how open source can democratize red‑team tools and set the stage for later talks on scaling and automation.
Speaker: Mala Kumar
Open source projects are now receiving massive AI‑generated pull requests (e.g., 13,000 lines in OCaml, an agentic AI PR to Matplotlib) that create maintenance overload and raise questions about policies for non‑human contributions.
Provides a vivid, real‑world illustration of the maintainability crisis introduced by LLMs, moving the abstract discussion to concrete challenges.
Served as a turning point that focused the panel on policy and governance issues; led to Mala’s discussion of provenance, badge systems, and the need for clear contribution guidelines.
Speaker: Sanket Verma
The Indic LM Arena adapts Berkeley’s LM‑Arena for Indian languages, building a community around multilingual evaluation and showing how open source can address local context gaps.
Highlights a concrete community‑driven effort to extend evaluation frameworks to under‑represented languages, linking open source, multilinguality, and evaluation.
Expanded the scope of the discussion to multilingual evaluation, prompting later remarks about language‑specific challenges and the need for culturally aware red‑team prompts.
Speaker: Ashwani Sharma
Opening the evaluation software does not automatically make the data open; organizations can produce closed data with open‑source tools, and non‑experts may create harmful policies if they lack domain expertise.
Clarifies a subtle but critical distinction between open‑source code and open data, and warns of the risks of uninformed adjudication, adding depth to the debate on openness.
Steered the conversation toward the responsibility of contributors and the importance of subject‑matter expertise, influencing later cautions about human‑in‑the‑loop and bias amplification.
Speaker: Mala Kumar
We are exploring ontological‑based approaches to map problem spaces (e.g., human‑rights clauses, power structures) so that red‑team prompts are representative and reproducible across models.
Introduces a systematic, knowledge‑graph method for scaling red‑team efforts, moving beyond ad‑hoc checklists.
Prompted discussion on automation versus human insight, leading to Tarunima’s comments on automated prompt generation and the limits of current LLMs for Indian languages.
Speaker: Mala Kumar
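At sketch level, the ontological approach can be read as: enumerate a structured problem space, then expand each leaf into concrete probes, so the prompt set is reproducible run to run and comparable across models. The ontology and template below are invented examples, not the actual taxonomy Mala's team uses.

```python
# A tiny problem-space ontology: each leaf expands into concrete probe prompts.
ontology = {
    "human_rights": {
        "privacy": ["surveillance of activists", "leaking personal records"],
        "expression": ["censoring minority languages"],
    },
    "power_structures": {
        "caste": ["job discrimination scenarios"],
        "gender": ["wage-gap justifications"],
    },
}

TEMPLATE = "Probe the model on {leaf} (category: {branch}/{sub})."

def expand(onto: dict) -> list[str]:
    """Flatten the ontology into a reproducible, enumerable prompt set."""
    prompts = []
    for branch, subs in onto.items():
        for sub, leaves in subs.items():
            for leaf in leaves:
                prompts.append(TEMPLATE.format(leaf=leaf, branch=branch, sub=sub))
    return prompts

for p in expand(ontology):
    print(p)
```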
In a sexual‑health chatbot for adolescents, stakeholders wanted the model to *allow* conversations deemed unsafe by default safeguards, revealing a clash between safety policies and user needs.
Exposes the nuanced ethical dilemma where default safety filters may conflict with real‑world user requirements, highlighting the need for context‑aware evaluation.
Deepened the conversation about cultural and domain‑specific safety expectations, influencing later remarks on the importance of diverse stakeholder input and open documentation.
Speaker: Tarunima Prabhakar
Using a model to red‑team another model (as described by Lilian Weng) – essentially a reinforcement‑learning loop where one LLM attacks another – could automate parts of the red‑team process.
Presents a cutting‑edge concept that could transform how red‑team scaling is approached, merging evaluation with generative AI capabilities.
Sparked curiosity about self‑red‑team mechanisms, leading to questions about the sustainability of LLM judges and subsequent discussion on bias magnification.
Speaker: Sanket Verma
When the same LLM is used as a judge for another LLM, any bias present is amplified, making the evaluation less reliable.
Provides a technical insight into a hidden pitfall of using LLMs for evaluation, emphasizing the need for independent verification.
Reinforced Tarunima’s recommendation for human spot‑checks, and underscored the broader theme that automation cannot fully replace expert oversight.
Speaker: Mala Kumar
Before building any benchmark, first identify the concrete problem via red‑team findings; otherwise you risk measuring the wrong thing and wasting resources.
Offers a pragmatic workflow that prioritizes problem definition over metric creation, addressing the audience’s concerns about benchmarking and standards.
Guided the final segment of the panel toward actionable advice for institutions, influencing Tarunima’s advice on contextual prioritization and closing the discussion on maintainability.
Speaker: Mala Kumar
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the panel from high‑level definitions to concrete, real‑world challenges. Mala Kumar’s framing of AI red‑teaming set a new evaluative paradigm, while Sanket Verma’s AI‑generated PR anecdotes highlighted urgent governance needs. Contributions about multilingual community efforts, ontological mapping, and the nuanced ethics of safety filters added depth and broadened the scope to cultural contexts. Technical cautions about LLM‑as‑judge bias and the workflow advice to prioritize problem definition over benchmarks provided actionable guidance. Collectively, these comments redirected the conversation toward practical, interdisciplinary solutions and underscored the intertwined roles of open source, policy, and human expertise in maintaining AI systems.

Follow-up Questions
What existing projects or organizations are currently doing AI evaluation in the open source scientific stack?
Sanket expressed uncertainty about which projects are active in AI evaluation and wants to map the landscape to inform community efforts.
Speaker: Sanket Verma
What effort is required to create new evaluation frameworks using AI‑assisted coding?
Ashwani asked about the amount of work needed to build reusable evaluation frameworks when leveraging AI code generation tools.
Speaker: Ashwani Sharma
How can AI evaluation processes be made accessible to non‑technical program staff and stakeholders?
Tarunima highlighted the need for tools and workflows that non‑technical users (e.g., program staff) can use to evaluate AI systems.
Speaker: Tarunima Prabhakar
What challenges, opportunities, and necessary safeguards/policies are needed for maintaining open‑source projects in the age of AI‑generated contributions?
Sanket raised concerns about maintainability, policy gaps, and potential risks when AI agents submit large PRs or act autonomously.
Speaker: Sanket Verma
What risks and potential loopholes arise from scaling open‑source AI development compared to closed or “open‑weight” models?
The audience wanted a risk framework for open‑source scaling versus closed systems, beyond the obvious bad‑code issue.
Speaker: Audience (unidentified participant)
What tools or methods can be used to scale and automate parts of the red‑teaming pipeline (prompt creation, evaluation, etc.)?
The audience sought practical solutions to accelerate red‑teaming while maintaining quality.
Speaker: Audience (unidentified participant)
Is using LLMs as judges for evaluations a sustainable approach, and what are its limitations?
Ashwani questioned the viability of LLM‑based judges, noting potential bias amplification and reliability concerns.
Speaker: Ashwani Sharma
How should governments and standard institutions approach benchmarking and standardization for AI models, especially for low‑resource languages, and ensure maintainability without in‑house expertise?
The audience asked for guidance on creating and sustaining benchmarks and standards in contexts lacking technical capacity.
Speaker: Audience (unidentified participant)
How can benchmarks be maintained over time by institutions lacking dedicated experts?
A follow‑up to the previous question, focusing on the long‑term upkeep of benchmark suites.
Speaker: Ashwani Sharma
How can we develop interoperable ‘eval‑card’ standards for AI evaluations?
Mala suggested the need for a standardized, interoperable evaluation card format to enable reproducible comparisons.
Speaker: Mala Kumar
How can multilingual and multicultural red‑teaming be improved, e.g., generating prompts in low‑resource languages?
Tarunima pointed out challenges with generating realistic prompts in Indian languages and the need for better automation.
Speaker: Tarunima Prabhakar
How can we map large open‑source codebases using LLMs to aid newcomer contributions?
Sanket proposed using AI to create architectural maps of massive codebases to lower entry barriers for contributors.
Speaker: Sanket Verma
What policies should be established for non‑human (agentic AI) contributions to open‑source projects?
Sanket highlighted the controversy around AI‑generated PRs and the need for clear governance on non‑human contributions.
Speaker: Sanket Verma
How can ontological approaches be used to structure red‑teaming scenarios and improve reproducibility?
Mala described using ontologies to model problem spaces, aiming to make red‑teaming more systematic and repeatable.
Speaker: Mala Kumar
What is the appropriate balance between human‑in‑the‑loop and automation in AI evaluation pipelines?
Both emphasized the need for clear guidelines on when and how humans should intervene versus automated processes.
Speaker: Mala Kumar, Ashwani Sharma
How can we prioritize which safety concerns (bias, hallucination, etc.) to benchmark for a given application?
Mala discussed the importance of first identifying the problem space via red‑teaming before designing targeted benchmarks.
Speaker: Mala Kumar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Qualcomm’s Durga Malladi outlining how shrinking model sizes and rising quality are reshaping AI from the edge to the cloud and data-center tiers [3-6][10-13][14-16]. She noted that today’s premium smartphones can run 10-billion-parameter models, AR glasses 1-2 billion, and PCs up to 30 billion, enabling on-device inference that is independent of network quality and protects personal data [16-18][19-24]. Malladi argued that the optimal AI architecture is hybrid, distributing inference to devices, edge servers, and cloud training while leveraging on-prem AI accelerators for 100-300 billion-parameter workloads and emphasizing energy-efficient high-performance computing in data centers [60-66][97-105][108-110]. To accelerate adoption, Qualcomm offers the AI Hub, a free cloud-native device farm where developers can upload or generate models, test them without physical hardware, and deploy to app stores [78-86].


In the panel, Praveer Kochhar highlighted “shadow AI” – unauthorized enterprise use of external AI tools – as an underrated risk affecting 78 % of users and potentially undermining security and efficiency [175-180][181-186]. Madhav Bhargav described SpotDraft’s early mistake of training separate models per client, which led to a unified annotation pipeline and a platform that now delivers contract insights from a single model [196-205][206-209]. Shreenivas Chetlapalli emphasized that Indian enterprises need realistic expectations about AI’s augmentative role and a preference for data-local solutions, echoing similar demand for on-prem processing in other regions [219-224][254-255]. Ritukar Vijay identified constant remote connectivity as the most pressing hardware constraint for autonomous robots, warning that isolated fleets would be unmanageable [304-308], and the panel agreed that excessive data egress from devices poses privacy threats, advocating for training with synthetic or minimal data to reduce leakage [313-320].


When asked about the future, participants said edge AI will become a default, ubiquitous capability taken for granted like connectivity [376-382]. Views on AGI converged on emergent behavior appearing in large language models, suggesting a shift toward self-directed learning at the edge [392-396]. Overall, the discussion concluded that AI will redefine user interfaces, streamline enterprise workflows, and require coordinated orchestration across devices, edge, cloud, and emerging 6G networks [31-33][124-129].


Keypoints


Major discussion points


Edge AI is becoming practical and essential – Model sizes are shrinking while quality rises, allowing 10-billion-parameter models on premium smartphones, 1-2 billion-parameter models on AR glasses, and up to 30 billion on PCs, making AI inference possible directly on the device and independent of network quality [10-16][18-22][31-34][36-38][40-45].


Qualcomm’s hybrid AI architecture spans edge, cloud, and data-center – AI workloads are distributed: on-device inference for latency-sensitive tasks, edge-cloud for orchestration, and data-center for large-scale training. Qualcomm supports developers through the AI Hub (model upload, cloud-native device farms, no-hardware-required testing) [78-85], and introduces energy-efficient data-center solutions (AI 250 memory-centric architecture, upcoming AI 300) that leverage lessons from mobile chips [96-107][108-112].


AI and the upcoming 6G era are tightly coupled – 6G is portrayed as the network that will fully unlock AI’s potential, with Qualcomm planning trials tied to the 2028 Summer Olympics and first deployments slated for 2029 [114-124][125-134].


Enterprise-level challenges highlighted by the panel


• Shadow AI (unauthorized cloud AI use) is an underrated risk affecting ~78 % of enterprises [175-182].


• Building a separate model per customer proved unsustainable; SpotDraft shifted to a data-annotation pipeline that now powers a single, customer-specific knowledge base [195-208].


• Data locality is preferred in India and the Middle East to avoid excessive data leaving the premises [213-218][254-255].


• Hardware constraints centre on continuous connectivity for remote robot management [304-308].


• Regulation is seen as always lagging behind rapid AI innovation [276-281][352-358].


Future vision: AI as the default interface and emergent agentic behavior – By 2030 edge AI will be “taken for granted” and ubiquitous [376-384]; AGI-like emergent capabilities are expected to appear at the edge [392-397]; agents (robots, personal assistants) will orchestrate across devices and even humans [401-404].


Overall purpose / goal


The session aimed to showcase how Qualcomm is “unlocking AI’s full economic potential” by explaining the technical shift toward edge AI, presenting a hybrid edge-cloud-data-center strategy, promoting the Qualcomm AI Hub for developers, and discussing broader industry implications (including 6G, enterprise adoption hurdles, and future AI-driven user experiences).


Overall tone


The conversation begins with a formal, informative keynote tone focused on technical trends and Qualcomm’s roadmap. As the panel opens, the tone becomes more conversational and exploratory, with participants sharing candid challenges (e.g., shadow AI, data-privacy concerns) and speculative excitement about future possibilities. Toward the end, the tone shifts to optimistic futurism, highlighting ubiquitous AI, emergent behavior, and visionary use cases, while still acknowledging cautionary notes about regulation and security.


Speakers

Moderator – Role: Conference moderator (no specific title mentioned).


Durga Malladi – Executive Vice President and General Manager, Technology Planning, Edge Solutions and Data Center, Qualcomm Technologies.


Siddhika Nevrekar – Senior Director and Head of Qualcomm AI Hub.


Shreenivas Chetlapalli – Leads the innovation track for TechMahindra; focuses on AI, emerging technologies, blockchain, metaverse and creates a global innovation ecosystem across labs [S1].


Madhav Bhargav – Co-founder and CTO, SpotDraft; specializes in AI for legal applications, including contract review, drafting and negotiation [S13].


Praveer Kochhar – Co-founder of Kogo AI; builds a full-stack private agentic operating system spanning edge to cloud, emphasizing enterprise-centric, sovereign AI [S6].


Ritukar Vijay – Works in robotics and autonomous systems; expertise in edge AI for robotics, fleet orchestration and physical AI applications [S10].


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Keynote – Edge-to-Cloud AI


The session opened with the moderator welcoming Durga Malladi, Executive Vice President and General Manager of Technology Planning, Edge Solutions and Data Centre at Qualcomm Technologies, and introducing her as the speaker who would explain how Qualcomm is “unlocking AI’s full economic potential” [1-2]. Durga noted that the afternoon’s talk would cover the AI landscape from the edge through to the cloud, distilling recent developments into a concise presentation [3-7]. She highlighted a striking trend: the 175-billion-parameter GPT-3 model behind the original ChatGPT launch of November 2022 is now outperformed by models of only 7-8 billion parameters, demonstrating that model sizes are shrinking dramatically while quality continues to rise [10-13]. She described this as an emerging trend often referred to as an “AI law” that underpins the feasibility of edge AI [14-16].


The reduction in model size has made on-device inference practical. Premium smartphones can now run 10-billion-parameter models, AR glasses can handle 1-2 billion, and PCs are capable of 30-billion-parameter models [16-18]. Running AI directly on the device makes the user experience independent of network quality [18-21] and keeps personal or enterprise data local, addressing privacy concerns [22-24]. Durga illustrated that this shift enables a seamless multimodal user interface: voice, text, video and sensor inputs are all processed by a single AI agent that can authenticate the user, understand intent and map requests to the appropriate applications [31-34][40-46]. She cited Byte’s new AI-first phone in China, which hides traditional apps behind an always-ready voice agent, as a concrete example of this new paradigm [46-55].
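The agent pipeline Durga sketches (authenticate the voice, infer intent, dispatch to an app the user never opens) can be caricatured in a few lines of Python. Every component below is a placeholder, since real assistants use speaker-verification and language models rather than string checks, and the app backends are invented stand-ins.

```python
# Toy agent loop: authenticate, classify intent, dispatch to an app backend.

APPS = {
    "balance": lambda _: "Your balance is X.",  # banking app stand-in
    "shop": lambda q: f"Ordered: {q}",          # commerce app stand-in
}

def authenticate(voiceprint: str) -> bool:
    return voiceprint == "enrolled-user"        # placeholder speaker check

def classify_intent(utterance: str) -> str:
    return "balance" if "balance" in utterance else "shop"

def agent(voiceprint: str, utterance: str) -> str:
    if not authenticate(voiceprint):
        return "Sorry, I don't recognise you."
    intent = classify_intent(utterance)
    return APPS[intent](utterance)              # the agent, not the user, picks the app

print(agent("enrolled-user", "check my bank balance"))
```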


Durga outlined a hybrid AI stack that distributes workloads across devices, edge servers, the cloud and data-centre resources: edge devices perform low-latency inference, the cloud is used for training foundational models and orchestrating large-scale tasks, and data-centre accelerators handle massive inference workloads such as 100-300 billion-parameter models on on-prem AI cards [60-66]. She stressed the importance of energy-efficient high-performance computing in data centres, noting that inference chips differ from training GPUs and that a memory-centric design (AI 250) improves the decode stage of inference, with a second-generation AI 300 already planned [96-105][111-112][113-119].
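A placement policy for such a hybrid stack might look like the sketch below, which routes a workload by parameter count, latency sensitivity, and data residency. The thresholds simply reuse the capacity figures quoted in the talk (roughly 10-30 billion parameters on device, 100-300 billion on on-prem accelerator cards) and are not a Qualcomm scheduling algorithm.

```python
# Illustrative placement policy for a hybrid edge-cloud-data-centre stack.

def place_workload(params_b: float, latency_sensitive: bool, private_data: bool) -> str:
    if params_b <= 30 and (latency_sensitive or private_data):
        return "on-device"                # smartphones/PCs, per the quoted ranges
    if params_b <= 300:
        return "on-prem accelerator card" # small/medium enterprise deployments
    return "cloud data centre"            # training and the largest models

print(place_workload(8, latency_sensitive=True, private_data=True))     # on-device
print(place_workload(200, latency_sensitive=False, private_data=True))  # on-prem
print(place_workload(1000, latency_sensitive=False, private_data=False))
```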


Qualcomm sees the forthcoming 6G generation as the catalyst that will fully unlock AI’s potential. Trials linked to the 2028 Summer Olympics are planned, with commercial deployments expected around 2029 [114-124][125-134]. This network evolution will provide the ultra-reliable, high-bandwidth connectivity required for edge devices and autonomous robots to exchange data in real time.


Developer Enablement – Qualcomm AI Hub


Siddhika Nevrekar introduced the Qualcomm AI Hub, a free cloud-native device farm where developers can upload or generate models, test them without physical hardware, and deploy applications to app stores [78-86][145-148].


Panel Highlights


Siddhika Nevrekar opened the panel by introducing the speakers. Praveer Kochhar highlighted “shadow AI” – the widespread, unauthorised use of external AI services on enterprise data – as an underrated pain point affecting roughly 78 % of users and posing significant security and efficiency risks [175-182][183-186]. Madhav Bhargav recounted SpotDraft’s early mistake of training a separate model for each client; the effort was abandoned in favour of a unified annotation pipeline and a Word plug-in that captures user interactions, enabling a single model to deliver grounded, customer-specific contract insights [195-209][210-214]. Shreenivas Chetlapalli stressed that successful AI adoption in India requires realistic expectations that AI augments rather than replaces jobs, and that many Indian enterprises prefer data-local solutions to avoid excessive data egress [219-224][227-232]; he also warned that too much data leaving devices increases breach risk, advocating for synthetic data and on-device learning to minimise outbound data [313-319].


Ritukar Vijay explained the practical split for robotics: the cloud handles fleet orchestration while edge processors manage autonomous navigation, illustrating the hybrid workload distribution [237-238]. He identified continuous remote connectivity as the most pressing hardware constraint, noting that without it robots become isolated silos that cannot be maintained or updated [304-308].


Rapid-Fire Answers


* 6G vs AI – Ritukar chose 6G, saying it “opens up a lot of possibilities” [258-262].


* Data-center vs local – Shreenivas chose local/on-prem for India, citing the Orion AI platform and a preference for processing data on-premises [269-276][280-284].


* Artificial or human – Madhav answered human, explaining that lawyers must make the final decision even though AI assists [291-298].


* Innovate vs regulate – Praveer answered innovate, arguing regulation will always lag behind rapid AI advances [311-319][321-327].


* Agent tech or robotics – Ritukar answered agents, stating “robots are the agents.” [332-336].


* LLM or SLM – Shreenivas answered SLM, noting they use small language/multimodal models all the time [340-344].


* Integrations or automation – Madhav answered integrations, noting automation needs integration [350-354].


* Build a chip or buy a chip – Shreenivas answered “sell a chip, but then build a chip always” [360-364].


* One thing that keeps you up at night (hardware constraint) – Ritukar cited lack of continuous remote connectivity for robots [304-308].


* Too much data leaving the device or too little – Shreenivas said too much data leaving is dangerous and advocated training with less or synthetic data [313-319][322-328].


* Wow moment about AI – Madhav described the internal “SpotDraft on SpotDraft” prototype that surfaced hidden contract clauses, and later mentioned a personal-assistant-like feature on WhatsApp that automated messaging [376-388][395-401].


* Biggest fear – Praveer expressed fear of societal impact and addiction from hyper-intelligent, self-adapting agents [410-418].


* Edge AI in 2030 – Ritukar answered “taken for granted” [426-430]; Madhav answered “ubiquitous” [435-438]; Praveer answered “emergent” [442-447].


* Where to find the companies – Ritukar said “autonomy” [452-456]; Shreenivas highlighted fraud-call detection using Agile LM as a joint Qualcomm-TechMahindra effort [460-466]; Madhav emphasized generative UI that removes the need for training on SaaS platforms [470-476]; Praveer urged the audience to think about how we will spend the extra time freed by AI [480-486].


Points of Agreement


* All participants referenced a hybrid edge-cloud-data-centre architecture (Durga’s talk [60-66]; Ritukar’s robotics split [237-238]).


* Several speakers highlighted privacy-preserving data handling (Durga on-device inference [18-21]; Shreenivas on data egress [313-319]; Ritukar on enterprise vs B2C data flow [322-328]).


* There was a tension between offline robustness and the need for constant connectivity (Durga’s “invariant to connectivity” [18-21] vs Ritukar’s “continuous remote connectivity” [304-308]).


Future Outlook


The speakers painted a vivid picture of the next decade. They predicted that by 2030 edge AI will be “taken for granted” and ubiquitous, embedded in everyday objects as seamlessly as connectivity is today [376-384][41-44][426-438]. Praveer warned that emergent behaviours at the edge could lead to societal challenges, emphasizing the need for responsible deployment [442-447][410-418]. Durga’s vision of AI as a universal multimodal user interface was echoed by the panel’s emphasis on agents that orchestrate across devices, clouds and even human users [31-34][45-46][401-404].


In closing, Durga highlighted Qualcomm’s unique position of working across the entire AI stack, from doorbells to data centres, underscoring the company’s ability to drive end-to-end innovation [138-143][139-140].


Key Take-aways


1. Model size reduction enables on-device inference on a wide range of consumer hardware.


2. A hybrid edge-cloud-data-centre strategy optimises latency, energy use and total cost of ownership [60-66][237-238].


3. The Qualcomm AI Hub lowers barriers for developers [78-86][145-148].


4. Shadow AI and uncontrolled data egress are major, often under-recognised, security concerns [175-182][313-319].


5. Early failures such as per-customer model training can be turned into scalable solutions through shared annotation pipelines [195-214].


6. Indian AI adoption is advancing, especially in the public sector, but must manage expectations about job impact [219-224][227-232].


7. Reliable connectivity (potentially via 6G) remains a critical hardware constraint for autonomous systems [114-124][258-262][304-308].


8. Regulation lags behind rapid AI innovation, necessitating cautious yet proactive governance [311-319][321-327].


9. By 2030 edge AI is expected to be ubiquitous, with emergent behaviours appearing at the edge, fundamentally reshaping user interfaces and enterprise workflows [442-447][426-438].


Session transcript: Complete transcript of the session
Moderator

To share how these pieces come together and how Qualcomm is unlocking AI’s full economic potential, it’s my privilege to invite on stage Durga Malladi, Executive Vice President and General Manager, Technology Planning, Edge Solutions and Data Center at Qualcomm Technologies. Please join me in welcoming Durga.

Durga Malladi

Okay, so we’re reaching towards the later half of the afternoon and hopefully everyone had their lunch and their coffee. So I hope to talk over the next 25 minutes. I won’t take that much time, but about 25 minutes talking about what is going on in the AI landscape from Edge all the way into the cloud. Starting from yesterday, there was a lot of discussions on the relevance of Edge AI, what exactly is happening in that space, what should be the opportunities at the Edge and where we should be going in the cloud as well. So I’ll try to distill that in a few slides, and I’ll probably go through a little faster so that we have enough time later on for the team to actually go through the panel discussion.

All right, I’m just going to click through this. This is good. This is probably a good indication of why the edge matters. If you go back in time three years, when GPT was originally announced back in November of 2022, that was a very large 175 billion parameter model. And if you take a look at what the model sizes today look like, they’re more like 7 to 8 billion parameters, but they actually outperformed that original model by quite a bit. Model sizes are coming down quite dramatically, while the model quality continues to increase. This is the equivalent of an AI law that seems to be emerging as far as models themselves are concerned. It’s an important trend line because this actually is the foundation for why edge AI is actually a big part of the model.

And if you take a look at the actual model size, you’ll see that the model size is actually relevant. In other words, you don’t have to necessarily use the trillion parameter models to be able to get through a large number of use cases that average consumers actually care about. And when you think about it that way, this is a depiction of just how much progress has been made in the last one year alone in terms of the model quality index itself. There are several parameters over here, but the punch line is: model quality is getting extremely powerful, and now the question is what should we do about it, what can we actually build on top of it. So we’ve already established the fact that the model sizes are coming down. These are sometimes known as SLMs, though I would argue that it’s not just small language models; these are small multimodal models that are coming in. And there are increased capabilities coming with them: much larger context length, a lot of on-device learning and personalization that can be built on top of that, and reasoning models which actually mimic what we typically expect to see from some of the larger models. When you put both of these together and build the right kind of an innovative architecture, that’s what actually leads to edge AI in devices that you and I care about. So is this just a PowerPoint presentation, or are there actual consumer devices where you can do edge AI? The answer is absolutely yes. In fact, today you can get any of the premium smartphones, where you can easily run a 10 billion parameter model without breaking a sweat; or glasses, which can easily run 1 to 2 billion parameter models; or PCs with up to 30 billion parameter models; and so on.

These are devices that you and I use very frequently, at least the PCs and the smartphones with more people adopting AR glasses as well. But one thing that’s nice about running on-device AI or AI inference that’s running directly on devices is the quality of the AI experience is invariant to the quality of connectivity that those devices had to have to the back end of the network. That is a key attribute. I don’t want to keep going back and forth between a regular experience and an AI experience just because I don’t have internet connectivity. That would not be very compelling for any of the consumer or enterprise use cases, and that’s key. The second part is there’s a large amount of data that happens to be very personal.

It can be consumer-centric or it can be enterprise-centric, but either way, I might or might not be interested in storing the data in the cloud. And if you kind of think about it that way, then that’s another vector that takes us towards what you can do at the edge. And as you put it all together, what exactly are we trying to do with the AI to begin with? Now, I was not there around in the 60s or the 70s, well, I was there in the 70s but I was not involved in, you know, what people used to do with very large mainframe computers where there was just a command line interface, there is a gigantic machine in front of you and you just keep typing something onto it.

That was the user interface between a human and a machine. The 80s changed that with the advent of you use a mouse, you use a PC, there is a graphical interface, you actually get to see something, not just see a command line interface, that changes things. Fast forward to where we are today, about 20 years back, we started using touch as the main UI. We all have our smartphones which happen to be touch-based and increasingly laptops and tablets and these are places where the UI shifted from just using typing and using a keyboard to touch interface as well. Well, we are now at a different era now. It’s at a place where we now are increasingly using voice as an interface towards devices.

And if you put it together, you have a combination of different modalities, whether it is text or voice or video, any other camera interaction, some sensors which tell you exactly where you are located, provide some context to what you’re doing. All of that gets ingested by a single interface, an AI agent. Imagine the following. Let’s take a smartphone because one can easily relate to it. You have your smartphone. Right now, people are either looking at it or scrolling through their apps. We all have a clutter of apps on our phone today. If I wanted to use one app, I’ll have to click that one. If I wanted to then correlate that information with another app, I have to go back, then open up a new app and go in again.

Instead, imagine a future where all you have is a voice UI. The device is sitting somewhere, in your pocket. You talk to it. Your voice gets authenticated, and then it says: OK, I'm ready for you. How can I help you? That's your agent right there. I would always love to be able to say "talk to my agent," and this is the beginning of that. The agent distills the information in what you're saying, encapsulates it, and maps it to apps running somewhere behind the scenes. The models only provide a means toward an end goal: they perform a job, but they're not the end job by themselves. So the agent picks one or two from a bouquet of models and also accesses some of the personal attributes that could be sitting right there, what we call the personal knowledge graph. When you put it all together, you end up seeing a glimpse into how AI can become the new UI to all the devices around us, and this is a very powerful concept. Is this also just on PowerPoint? Until about last year, that was the case. Not anymore. Byte has introduced a new phone in China very recently, and it's not available everywhere in the world, though some of us do have the luxury of visiting China quite frequently. This phone is fundamentally different: it's designed AI-first. All you have is an agent, and all the other apps are actually missing.
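As a rough illustration of this "agent as the new UI" pattern, the sketch below shows one agent classifying an utterance, pulling context from a personal knowledge graph, and dispatching to one model from a bouquet. All class and function names here are hypothetical stand-ins, not any vendor's API:

```python
# Minimal sketch of the pattern described above. Hypothetical names only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    models: dict                                          # intent -> model callable, the "bouquet"
    knowledge_graph: dict = field(default_factory=dict)   # personal context store

    def classify(self, utterance: str) -> str:
        # Stand-in for an on-device intent classifier (itself a small model).
        return "banking" if "balance" in utterance.lower() else "shopping"

    def handle(self, utterance: str) -> str:
        intent = self.classify(utterance)
        context = self.knowledge_graph.get(intent, {})
        model = self.models[intent]          # pick one model from the bouquet
        return model(utterance, context)     # apps stay behind the scenes

agent = Agent(
    models={
        "banking": lambda u, ctx: f"[banking model] {u} (context: {ctx})",
        "shopping": lambda u, ctx: f"[shopping model] {u} (context: {ctx})",
    },
    knowledge_graph={"banking": {"preferred_account": "savings"}},
)
print(agent.handle("what's my balance?"))
```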

They're somewhere in the back, but you don't get to see them. And if you think about it, it's a very disruptive mechanism. It's still early. Of course, it's going to be a little clumsy, and it doesn't work in a picture-perfect manner all the time, but it's something that is beginning to change the conversation about how you take AI agents from something that lives in presentations to something far more practical in devices. So I'm going to just skip through this part. A lot of it is in Mandarin, so it's kind of hard to read, but you get the picture of how it can do things for you: you give it a very generic, nebulous task, and it figures out exactly what you need and then does it for you.

It's like: shop for something for me, check my bank balance, and if I have enough there, buy that thing, and let me know when it's done. It does it. You don't actually see it happening, but it does it. All right. So far, we've talked about the edge. What about the cloud? Well, a lot of the data actually comes in from the edge; it's the consumers who are generating the data. That's where the AI action really is. But the cloud has an important role to play as well, as the data gets used both at the edge and in the cloud. So our philosophy here is to make sure that AI processing is distributed across the network depending on the use case.

For instance, the cloud is extremely powerful for training foundational models and creating new kinds of models. That's very helpful. At the same time, there's a large number of enterprises with on-prem servers where, using air-cooled cards, it's very easy to run 100 to 300 billion parameter models. That's very useful for small and medium enterprises that don't necessarily have to rely on a data center. Just take a server, plug in an AI accelerator card, maybe a handful of them, and you end up with extremely sophisticated processing. And keep in mind, in the beginning we talked about the fact that model sizes continue to come down while quality continues to improve. So whatever you have, if tomorrow there's a new model, or you just want to replace your existing AI accelerator card, you take out one and plug in another, as opposed to rolling in a new rack. That's fundamentally different in terms of the network economics.

And finally, we just talked about devices as well. So the bottom line is: think of AI processing as hybrid AI, a mix and match of processing between devices, the edge cloud, and the data center. And speaking of what you can do with it, imagine the following. This is one of the PCs that was launched in Saudi Arabia, the HUMAIN PC, and we had a lot to do with it. The only interface is what you see in front of you. This is not a standard PC that you open up to the regular kind of screensaver, with all the other apps there, where you open up your mail client, your calendar, and so on.

You ask a question, and in real time, no matter what it is, it decides: should I run this on the device or in the cloud? Maybe some questions you ask are so complicated that it wants to run them in the cloud, while others it can run on the device without breaking a sweat. This is a place where you actually switch back and forth between what runs on the device and what runs in the cloud. It's the beginning of where we can go with this. Another step: when we talk about devices, we all have a universe of devices around us. Glasses, which today are tethered through a phone and tomorrow could be connected directly to the network. Your earbuds. Your wearables, whether it's a watch you're wearing or, increasingly, a ring as well. I think we're running out of places to put devices, but every time I think that, a new device category comes up. This is a universe of devices around you, and perhaps the hub happens to be a phone. How do you go back and forth between these? I might not even want my smartphone with me; I want to keep it somewhere, just have my earbuds and constantly talk through them, doing some of the processing in the earbuds themselves, the rest on the phone, some of it on the edge server, and the rest in the data center.
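A minimal sketch of that real-time device-versus-cloud decision might look like the following. The difficulty heuristic and threshold are illustrative assumptions, not a description of any shipping router:

```python
# Per-query routing sketch: answer on-device when possible, escalate to
# the cloud when the query looks heavy and connectivity exists.

def estimate_difficulty(prompt: str) -> float:
    # Stand-in for a lightweight on-device "router" model; here we just
    # use prompt length plus a few trigger words as a crude proxy.
    hard_words = {"analyze", "summarize", "compare", "plan"}
    score = len(prompt.split()) / 100
    score += 0.5 * sum(w in prompt.lower() for w in hard_words)
    return score

def route(prompt: str, online: bool, threshold: float = 0.6) -> str:
    if online and estimate_difficulty(prompt) > threshold:
        return "cloud"   # heavyweight reasoning, big context
    return "device"      # low latency, private, works offline

print(route("what's on my calendar today", online=True))                  # device
print(route("analyze and compare these two contracts for risk", online=True))  # cloud
print(route("analyze this contract", online=False))   # device: graceful offline degradation
```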

That is the vision of how the evolution of AI ought to be. Speaking of the things we just discussed, this is important, and now more from a Qualcomm perspective. We have made sure there is a good, easy way for developers to onboard our platforms, bring in their applications, and run from there. In the subsequent session, there might be a little more talk about it. But suffice it to say: if you go to the Qualcomm AI Hub, it's a place where any developer can pick a model or bring a model. Or if you don't have a model, we'll create one for you if you bring your data.

Once you do that, we'll give you free, cloud-native access to a device farm that exists somewhere, but you never touch the hardware; you just have an IP address that you log into, and you take it from there. The rest is: you write your application, and you have the ability to test it without ever having the device in your hand. If you're comfortable with that, you get to deploy the app in any kind of app store. It's a very powerful concept that we've worked on for a long time. And this is a place where, you know, we are not a model creator. We ingest models, which means we work closely with every single model provider out there on the planet, and I'm happy to discuss a lot more offline as it comes to it.
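The developer flow just described, pick or bring a model, compile it for a target, test it on a remote device farm, then ship, could be sketched roughly as below. The stub functions are hypothetical stand-ins; the actual Qualcomm AI Hub exposes its own API, documented separately:

```python
# Hypothetical, self-contained sketch of the workflow described above.
# The stubs only illustrate the shape of the flow, not any real API.

def compile_for_device(model: str, target: str) -> str:
    # Stub: the real step optimizes the model for the target hardware.
    return f"{model}.compiled-for-{target}"

def run_on_device_farm(artifact: str, target: str) -> dict:
    # Stub: the real step runs remotely on physical devices you never touch.
    return {"artifact": artifact, "target": target, "latency_ms": 38}

def deploy(model: str, target: str = "premium-smartphone") -> str:
    artifact = compile_for_device(model, target)
    metrics = run_on_device_farm(artifact, target)
    # Illustrative acceptance gate before shipping to an app store.
    assert metrics["latency_ms"] < 50, f"too slow on {target}: {metrics}"
    return f"ready for app store: {artifact}"

print(deploy("my-slm-3b"))
```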

All right. How am I doing on time? Maybe I have 10 minutes. I don't see the timer here; that's why I was asking. So let me talk a little bit about the data center. What happens in data centers here? Well, one thing that's clear is that data center capabilities are becoming more and more sophisticated. And having learned a lot of lessons from the edge, one thing that became very clear for us is that it's important to pay attention to energy efficiency in addition to performance. We call it energy-efficient, high-performance computing, and we started bringing that paradigm into the data center. A few other observations came in.

One is that the processors designed for training are not necessarily the best processors for inference. They're actually different kinds of problem statements. It's a little subtle, but once you understand it, you get past the whole notion of "let's just buy the biggest GPU out there" and realize that's a bit of an overkill for the inference task you might have. A different architecture is needed. The second part is that, in addition to the rollouts currently occurring, we want to bring in solutions that lower the total cost of ownership. So, putting it together, we introduced our family of data center solutions as well, taking what we learned in devices and bringing those lessons into the data center.

A smartphone today operates at four watts at best. The battery inside a smartphone is 4,500 milliamp-hours at best. In a data center, if you buy a state-of-the-art rack, it's about 150 kilowatts. Fundamentally different. It's direct liquid-cooled; you need water. There's no water-cooled or liquid-cooled smartphone. Two different universes, but there is a way to learn lessons from one universe and apply them to the other. I would argue that, in AI terminology, that's transfer learning seriously applied, going from devices all the way into data centers. So we entered that space, and we have two different categories of solutions. The second one, the AI 250, is where we focused on an innovative memory architecture. It's a slightly subtle argument, but as it turns out, when we talk about inference, the prefill stage is extremely compute-bound: the more computational horsepower you throw at it, the better, and tokens per second go up. The decode stage, however, is fully memory-bandwidth-bound: you can throw as much compute at it as you want, and it makes zero difference whatsoever. So the memory architecture is actually equally important, and we innovated on that for our AI 250 solution. This is the one that's rolling out in the Middle East, and it was part of that earlier demo we just talked about, with a PC and something else running in the cloud.
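The prefill/decode asymmetry can be made concrete with a standard roofline-style estimate: during decode, every generated token has to stream the model's weights through memory once, so memory bandwidth, not compute, sets the ceiling. The numbers below are illustrative assumptions, not product specifications:

```python
# Roofline-style ceiling on decode throughput:
# tokens/sec <= memory_bandwidth / model_size_in_bytes (per stream).

def max_decode_tokens_per_sec(params_b: float, bits: int, bw_gb_s: float) -> float:
    model_bytes = params_b * 1e9 * bits / 8
    return bw_gb_s * 1e9 / model_bytes

# A 70B model with 4-bit weights (35 GB) on two hypothetical memory systems:
for name, bw in [("HBM-class (3 TB/s)", 3000), ("LPDDR-class (100 GB/s)", 100)]:
    print(f"{name}: <= {max_decode_tokens_per_sec(70, 4, bw):.0f} tokens/s per stream")

# Extra compute does not raise these ceilings; only memory bandwidth does,
# which is why an inference-oriented memory architecture pays off.
```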

We have an annual cadence coming up; that is table stakes at this point in time, with the innovative memory architecture continuing into the second generation by the time we get to the AI 300, which is not yet announced but is in planning. Now, finally, and I want to move a little faster here: there is a buzz in the industry about the next generation of cellular platforms, and usually one would scratch their head and say, wait a minute, we just launched 5G, I don't know why we're talking about 6G here. And besides, isn't this all about AI? What does AI have to do with 6G? Are we just throwing AI pixie dust on top of every technology right now and saying there's a hype cycle associated with it?

That's not the case. It is true that cellular communications and AI have evolved as two parallel tracks of innovation. But the time has come to put them together, because cellular technology, at the end of the day, involves the very same devices we just talked about. It involves a network through which all the devices are connected; the data goes through it and eventually ends up in a data center as well. So we have a view of how 6G can unlock the full potential of AI. And if you think about how the "G" transitions occur, it's roughly every 10 years. The earliest 5G launches were in 2019, so we are in year seven of that journey.

It's not that far off. And it turns out we have a convenient Summer Olympics coming up right next door. Our headquarters is in San Diego; that's where I live, and the 2028 Summer Olympics will be nearby. So there's going to be a lot of show-and-tell in terms of what 6G capabilities can be, with technology trials at that point in time culminating in the first set of deployments that we are driving toward in 2029. And we have another two minutes; I'm just about done. I want to stop with one final thing, and that is this part over here. What you heard is just a glimpse into the kind of world that we at Qualcomm live in.

We are probably the only ones in the industry that work on everything from doorbells to data centers. There are a lot of others who focus on data centers, or maybe on servers, but they don't play below phones. We work ground-up across all of it. So, happy to talk with you offline.

Moderator

Thank you, Durga, for this insightful presentation. As we talk about inclusive AI at scale, enabling developers is critical; innovation only moves as fast as the tools behind it. Through the Qualcomm AI Hub, we are simplifying how developers access optimized models and test and deploy high-performance on-device AI from edge to cloud. To share how we are accelerating this developer ecosystem, please join me in welcoming Siddhika Nevrekar, Senior Director and Head of Qualcomm AI Hub, to moderate our panel discussion with leading startup founders, exploring the evolving AI ecosystem and what excites them about building with on-device AI. Please join me in welcoming Siddhika.

Siddhika Nevrekar

I would like to welcome the panel up. You guys know who you are, so I don't need to introduce you. Can we just take a moment for a quick picture, if that's possible? Thank you.

Shreenivas Chetlapalli

So, I'm Shreenivas Chetlapalli. I lead the innovation track at Tech Mahindra for AI and emerging technologies, which includes blockchain and metaverse. I'm also responsible for creating an innovation ecosystem across a network of labs that we've created globally. Thanks.

Madhav Bhargav

Hi, I'm Madhav. I'm the co-founder and CTO at SpotDraft. We do AI for legal. We've created a bunch of agents that help lawyers not just review contracts, but also draft and negotiate them faster and better.

Praveer Kochhar

Hi, everyone. I'm Praveer Kochhar. I'm one of the co-founders of Kogo AI. We run a full-stack private agentic operating system from the edge to the cloud, bringing agents closer to enterprise data rather than taking data to agents. We are a 100% sovereign platform, built from scratch. And we do some very exciting work with Qualcomm; I hope I get to share that with you today.

Siddhika Nevrekar

All right. Let's start with some questions. None of you know these, so they'll be a fun surprise. They're not hard; they're very easy. We'll start with you, Praveer. We'll go in reverse order, because that kind of throws a curveball. What's the most underrated pain point for enterprise users that AI will solve? You can talk specifically about your product.

Praveer Kochhar

Did you say underrated?

Siddhika Nevrekar

Yes.

Praveer Kochhar

So there's a concept called shadow AI. I don't know how many of you know about it. Shadow AI is when people who work in companies share critical enterprise data in the cloud while using unauthorized AI tools like OpenAI or Claude. Some 78% of enterprise users use shadow AI, and that's a big concern. It's underrated, but it's still driving efficiency, so not a lot of eyeballs are going there. But I think it's going to become one of the critical issues as we move forward: things get more complex, agentic systems get more complex, more data is shared in the cloud. So for me, it would be the shadow AI that people are using.

Siddhika Nevrekar

That's a good answer. It was a curveball, but you caught it. Okay, let's go to Madhav. You work in a very niche field, legal, and you still dabble with technology, right? Yes, you like it. So: your biggest and favorite AI failure building SpotDraft that set you up for success. Can you remember one?

Madhav Bhargav

That's a great question. It goes back to our founding years, when we were a little early to the game. This is around six to eight years back, when transformers were what people were talking about, not LLMs. Back then, we came in with the idea that, you know, cars are driving themselves, so why can't AI review contracts for you? We spent a bunch of time with enterprise customers trying to deploy AI and realized we would have to train a model for each customer. And we built out our entire data labeling and annotation pipeline, as well as the team, at that point. So that was in a way a failure, because we then decided not to do it: we didn't want to do services.

Otherwise, we would be building one model per customer. And the genesis of SpotDraft as it exists today came from there, because we wanted to capture the data as lawyers used the technology they already use, which is where our Word plugin comes in. So we can actually capture what they're doing. Our annotation team was also set up back then. And that's how, today, we are able to give grounded answers using the customer's own data, because of all the things we built back then.

Siddhika Nevrekar

That's a good one. So now you're on a path of never going back to making single models for each customer.

Madhav Bhargav

I mean, I hope we don’t have to go back there, and I think a lot of the models that have come out are enabling that. But that part, yes, not regretting it.

Siddhika Nevrekar

All right. Srini, you were a last-minute addition, so thank you; I know it was difficult to get here. This is probably something you'll be able to share with us. What's the special ingredient for successful AI adoption in India specifically?

Shreenivas Chetlapalli

Okay, that's a tough question. I think the most important thing is understanding the limitations of AI. It's very easy to understand the advantages of AI, but we have to set expectations right: AI will augment people's work to a certain extent. That's one. Second, the complete misnomer that it is here to take away jobs has to be removed. I think those are the two things.

Siddhika Nevrekar

How do you feel about AI being trusted in India? Is it trusted enough? Is it adopted?

Shreenivas Chetlapalli

If you look at the adoption of AI, we are almost at the global level in terms of the enterprises we are talking to. But the best part I have seen is that a large number of public sector banks have taken to AI in a big way; some of those banks have been our customers for both AI and emerging technologies. We've also seen PSUs talking about AI. And a lot of state governments, I had a chance to meet several ministerial delegations today, have set up AI centers. So we are in the game.

Siddhika Nevrekar

Yeah, good. Ritukar, this is an easy one; you probably think about this a lot. Cloud or on-device AI: which is more important, where, and when?

Ritukar Vijay

So, in continuation of the previous question: just throwing a bunch of compute at a problem statement is not how AI gets adopted in enterprise settings. It's very important to break the big problem down into smaller chunks: what you want to use AI for and what you don't. That's exactly what we do in robotics. We break down what happens on the edge and what happens in the cloud. Right now, for us, orchestration happens in the cloud, for the fleets of robots, while all the autonomous navigation happens on the edge. And because we wanted more intelligent navigation, it's been almost one and a half years since we started running VLMs on the edge to understand context. So that's how you break down the overall problem, not just running everything on the edge or everything in the cloud, because that won't solve the problem.

Yeah, that’s pretty much how we break it down into small chunks.
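The split Ritukar describes is essentially a static placement map, since each robotics workload has fixed latency and connectivity requirements; a minimal sketch, with illustrative entries of my own, might look like this:

```python
# Static workload placement rather than per-query routing: each robotics
# workload has fixed latency/connectivity needs. Entries are illustrative.

PLACEMENT = {
    "fleet_orchestration": "cloud",    # global view across robots, latency-tolerant
    "autonomous_navigation": "edge",   # hard real-time, must survive connectivity loss
    "scene_understanding_vlm": "edge", # camera context: privacy plus latency
    "model_training": "cloud",         # aggregate fleet data, heavy compute
}

def place(workload: str, connected: bool) -> str:
    target = PLACEMENT[workload]
    # Safety-critical edge workloads never fall back to the cloud;
    # cloud workloads simply queue until connectivity returns.
    if target == "cloud" and not connected:
        return "deferred"
    return target

print(place("autonomous_navigation", connected=False))  # edge
print(place("fleet_orchestration", connected=False))    # deferred
```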

Siddhika Nevrekar

So you guys are very thoughtful and very quick with these answers to longer questions. So we'll go to rapid fire, which is just picking one option. There's no judgment here. You pick A or B, maybe a couple of words about why, but not too long. We'll start with you, Ritukar. 6G or AI?

Ritukar Vijay

Sorry?

Siddhika Nevrekar

6G. Or AI.

Ritukar Vijay

So, okay, this is a long one; I can share a good anecdote. We were running robots in Rio Tinto mining areas in Australia, right? There is no internet. We still want to use AI on the edge. So what we did was put a satellite link on each robot. Connectivity is very important, and if it's 6G, it's better. So I'll go for 6G, because that opens up many more possibilities.

Siddhika Nevrekar

That's a good one. I thought you would pick AI, because that's the buzzword right now. Good answer. Srini, data center or local?

Shreenivas Chetlapalli

Local is the first option, but for India, data center makes business sense too. One of the key products we have built, called Orion, an AI platform, has been built for on-prem, and a large number of the requirements that come to us ask: how do I process things on my own premises rather than making an API call or taking it to the cloud? I know you asked about India, but I have seen this happening in the Middle East also, where one of the world's largest companies asked whether their execs could solve things on their own desktops, locally.

So local.

Siddhika Nevrekar

Local for you, okay. For you, I’m looking through because I want to ask a specific one. Madhav, artificial or human?

Madhav Bhargav

I mean, when you deal with lawyers, at the end of the day I have to go with human, even though you might expect me to pick artificial. You can't wring an AI model's neck, but you can go wring a lawyer's neck. So for us, it's important to give the lawyer the capability to do their job better and faster, with more thorough research. But at the end of the day, it has to be them taking the decision, because a lot of times it's not black and white. Those are the easy scenarios. It's in the gray areas that lawyers come in and really guide their clients on what to do and what not to do.

Siddhika Nevrekar

That's a great answer. I think we still want AI to be human, right? But there's no judgment; you could have said otherwise. Praveer: regulate or innovate?

Praveer Kochhar

In regard to AI, 100% innovate. I don't see any reason otherwise. Regulation in the age of AI is always going to play catch-up, because with the speed at which the technology is growing, it's very difficult to regulate it before it ships. We don't even know the social implications of what we are building; as we build these tools and they go public and people start using them, they get more intelligent by the week. So I think it will always be innovation on the side of caution, but I don't think this is an industry you can regulate first and then expect to grow.

Siddhika Nevrekar

Your very first answer, about, I wouldn't say illegal, but unauthorized usage, was pretty much in line with this, and it was still saving time. I think that's a good answer. For the next ones, you don't have to say why; just pick an answer. Again, no judgment. You can pick whichever one you want. Agentic or robotics?

Ritukar Vijay

Robots are the agents.

Siddhika Nevrekar

You have to pick one.

Ritukar Vijay

So agents, yeah.

Siddhika Nevrekar

Okay. LLM or SLM?

Shreenivas Chetlapalli

SLM, all the time.

Siddhika Nevrekar

Integrations or automation?

Madhav Bhargav

You can’t do automation without integrations, so I would have to go with integrations.

Siddhika Nevrekar

Build a chip or buy a chip? This is just a selfish question, but, you know.

Shreenivas Chetlapalli

I would sell a chip, but then build a chip always.

Siddhika Nevrekar

Wow, that's an interesting answer. I don't know how much time is left. Okay. All right, we'll do a few extra questions. You can take longer now to answer; just moderate the time accordingly. So: what's the one hardware constraint that keeps you up at night?

Ritukar Vijay

One of the biggest hardware constraints is when the entire system has no connectivity and you cannot access it remotely. If you cannot access robots remotely in any way, be it for scheduled maintenance, predictive maintenance, or emergency situations, like the Waymos running in San Francisco right now, which are monitored from the Philippines, right? So that part is very important: everything should be connected at all times. What keeps us awake is the thought of robots going into silos, isolated where we cannot reach them.

Because then we have to physically make sure that somebody is around to manage the fleet.

Siddhika Nevrekar

You talked about local. So I’m going to ask this question which seems apt for you. What’s more dangerous? Too much data leaving the device or too little?

Shreenivas Chetlapalli

Too much data leaving the device. I think too much.

Siddhika Nevrekar

How do you train, then? I was asking: how do you train if the data doesn't leave?

Shreenivas Chetlapalli

See, I think the focus for us has been how to train with less data and make the model much better. The moment we're talking about more and more data leaving, we're actually talking about more issues, more breaches. So if we can train with a smaller amount of data, or create synthetic data sets and work with those, that's the better way to train an LLM, rather than waiting for a large data set to arrive and then, like you said, waiting for it to leave.

Ritukar Vijay

If I may, it depends. If it's enterprise, then less data going up is always better. If it's B2C, then everybody wants to learn from that data, because that's free data. So it depends on the situation.

Siddhika Nevrekar

Yeah. Okay. This is probably going to be interesting; you get to tell another story. What was the last thing that made you go "wow" about AI? And this doesn't need to be about your company, so don't pitch it; we know it's fantastic.

Madhav Bhargav

I'll try not to. This sort of goes back to the last question, in a way. A lot of companies have so much data sitting in people's heads, in inboxes, in random SharePoints and drives. Historically, as we onboard customers, they say, "Oh, I have a playbook," a policy of what contracts we will and won't sign, but we also know it's out of date. We've been working on techniques to infer that playbook from older data, and one of the things that really blew my mind was when we ran one of the early prototypes on our own internal data; we run SpotDraft on SpotDraft. When I showed what it threw up to our internal legal team, I expected them to say, "No, this is absolutely wrong," and instead one of them said, "Actually, I want to know where this came from, because I have been trying to track down why certain contracts have certain clauses." It's that ability to do knowledge work that otherwise would not be done at all, and to have an always up-to-date, always-learning knowledge base that truly captures what your company's policies are. No one wants to spend 100k on lawyers to create that, but if you have an agentic way of doing it, then suddenly it becomes the one thing everyone cares about, because that is now your onboarding, that is now what you compare your new contracts with.

And on the coding side, we've already started seeing a lot of this, where things like Claude Code and Codex can go in, learn from your code base, and give you insights that would earlier take a new engineer maybe a month or a quarter of onboarding. Now they start shipping code within days because of this, and that is going to happen across all kinds of knowledge work. For us, the wow moment was when the lawyer who doesn't trust AI suddenly said, "No, I need to see this."

Siddhika Nevrekar

So I'm going to spin that to Madhav, not the CTO but the consumer: a consumer AI feature that just wowed you in recent times. Any you can think of?

Madhav Bhargav

I think, and I'm sure everyone has been talking about OpenClaw, the ability for me to have a personal assistant, which I of course can't afford otherwise, that really sits and starts doing a lot of these things for me. I'm sure it's going to come to everyone's devices very soon, hopefully with Qualcomm chips. That's where I was really wowed, because I deployed it on my WhatsApp and it started sending messages to people. It was a little bit scary, but it also saved me a bunch of time. So that was where I went: OK, this was not at all possible before.

Siddhika Nevrekar

Great responses. I had to switch it off on WhatsApp very quickly, because there's just too much data in there. But that is the next challenge, right? How do we control these autonomous agents, especially when they're sitting on your personal data? Given that you're a rebel, we're going to ask you: what are you most scared of?

Praveer Kochhar

No, there's a lot of fear, there's a lot of fear, because I think we don't know the societal impact of this technology yet, and that's probably the largest fear. Up till now, we were engaging with algorithms that were trained to extract attention from us. Now we are dealing with intelligent algorithms that can self-adapt and become far more personalized. With the ability to generate content at will, I think it will be very difficult to keep attention away from a device when there is a hyper-intelligent system on the other side that is changing itself based on you. It will become extremely addictive. I think that's the biggest fear.

Siddhika Nevrekar

Yes, but then we are pleasure-seeking beings; we will go after that until it gives us some guardrails, and then we'll have apps that lock themselves up for two days and we won't use them. It's possible that we'll all be on vacation and the robots will be interacting with each other.

Praveer Kochhar

Yeah, and then imagine what we'll be doing: interacting with these attention-seeking agents, right? I just want to take the last question also, because I saw a reel recently: someone got a Unitree robot in Bangalore and sent it out to beg. It was the first robotic beggar somebody had set up, and was there more empathy? Probably there was more empathy, I don't know, but I still think there are a lot of tangential use cases of AI that can come out of all this. That's something that got me, and it also told me that you can think very, very differently about this technology, not just replicate what we already do.

There are a lot of tangential things that might come out of this.

Siddhika Nevrekar

I asked whether there was more empathy because I was recently driving on a two-lane road. One lane was completely blocked, and everybody was trying to squeeze into the other lane. And when you passed by, you saw a Waymo that was not operational. And everybody just went, "oh, you know"; nobody was upset, nobody was screaming. I thought: just because it's a robot, you're more empathetic? But they were. So it changes your psychology somehow.

Praveer Kochhar

Yes, yes. And we are still not interacting with robots on a day-to-day basis. I think that will be another kind of mystery added to our societal weave.

Siddhika Nevrekar

True. Thanks for taking the second question too; that was interesting. All right, we'll get into closing so we can wrap up. You'll all get to pitch your companies, so that's exciting. We'll start with: complete the sentence in one word. You have to say just one word. "Edge AI in 2030 will be ___." You can repeat the sentence.

Ritukar Vijay

Edge AI in 2030, it will be, I mean, it will be taken for granted, not seen as sophisticated at all. Just like you take connectivity for granted, that's how edge AI will be. It will be almost everywhere by default, like the pins, you know, the Humane-style AI pins and everything, what we talked about in the keynote as well. So: taken for granted.

Siddhika Nevrekar

Will you still complete it with one word? Sorry. "Taken for granted" is one word? Okay: granted. So it will be business as usual, taken for granted, a default. That's it; nobody will mind that.

Madhav Bhargav

Ubiquitous. I think there will not be anything that does not have AI. A lot of Hollywood sci-fi has demonstrated this, but we will probably be talking to tables or screens or walls, to the degree that anything that can have a chip inside it will have AI inside that chip.

Praveer Kochhar

I think AGI in 2030 will be emergent; we will start seeing signs of it. What OpenClaw just did was a very small trick in the play, but it added a little bit of emergent behavior to an LLM, giving it the autonomy to create its own files. That's all OpenClaw did, and that's the magic behind it. I think that's going to come to the edge, and with that emergent behavior you're actually giving a model the ability to create its own learning. That's why I say emergent.

Siddhika Nevrekar

That's a good answer. One last thing you want the audience to remember. This is also the cue for a pitch, if you like.

Ritukar Vijay

As I said earlier, robots are agents, and I kind of agree with that. Part of us will be agentic as well, because we'll have some AI in us too. There's a lot of work going on with Neuralink, tracking the brain waves of how you react to a particular situation. So both robots and people will be agentic in some fashion, and I think that's how things will be. You need some orchestration where everything can talk to everything else; that's what we are looking forward to doing.

Shreenivas Chetlapalli

So I think one thing that we should all remember is that there is a lot of work that Tech Mahindra and Qualcomm are doing together in detecting fraud calls using language models. I think that research will see a lot of action as we go ahead, because the number of fraud calls we are getting is increasing every day. That's an area where we will see a lot of action, and I think both our companies are geared for it.

Madhav Bhargav

And I think it was mentioned in the keynote. One of the takeaways for me would be that how we think about interacting with technology today is going to change entirely: UIs, phones, screens, all of these going away, and everything becoming very, very generative, whether it's slides being generated for you on the fly based on the conversation you're having, or even entire apps and UIs being generated for each specific scenario and use case. Everything is going to move away from SaaS that people have to learn, toward platforms that actually care about you as an individual persona. That opens up a lot, specifically in the Indian context, where people might not have to go through so much training and learning; they can just start using it, because the platform can understand your needs, as opposed to you having to understand the platform.

Siddhika Nevrekar

Can you just repeat the question once for me, please? One thing you want the audience to remember. Whatever you want them to remember.

Praveer Kochhar

So: remember how we used to work, and plan for how we are going to work, because very soon we'll have a lot of time available to us. A lot of the systems we manage will be intelligent and autonomous, and we'll only have to take decisions. What we do with that time is going to be a critical question everyone will ask themselves. And I think all of us are also going to be builders, because we'll have very intelligent tools to build things, run them, and manage multiple systems at the same time. So I see that future, and I think we should all look at how we manage things today and how we are going to do that in the future.

Siddhika Nevrekar

Great. This was a chance to actually pitch your company, but it's okay, consider it pitched. I'll give you a more specific one: there are a lot of people in the audience, maybe some customers. If they were to find you, where should they find you, and what should they come and talk to you about? What specifically, and in what industry?

Ritukar Vijay

So, we are Ottonomy, and you can always find us at Ottonomy. That's where you can find us; we are proud of the brand. The most important thing is that, just like with AI, there's a lot of emphasis on physical AI, and it's not something that is still to come. It's here; it's just the adoption curve that is playing out now. So think of more ways of adopting the technology. If enterprise customers are looking to adopt more and more robots, not only in dull and dirty scenarios but in different walks of life, that is where you should talk to us, and we can help.

Even if they are not our robots, we can help them set up orchestration across a variety of systems, while still keeping some level of control. Yeah. Thank you.

Siddhika Nevrekar

Thank you.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Durga Malladi is Executive Vice President and General Manager of Technology Planning, Edge Solutions and Data Centre at Qualcomm Technologies”

The knowledge base lists Durga Malladi with that exact title and responsibilities at Qualcomm Technologies [S1] and [S2].

Additional Context (medium)

“The original GPT‑3 model announced in November 2022 contained 175 billion parameters, yet today models with only 7‑8 billion parameters outperform it, demonstrating that model sizes are shrinking dramatically while quality continues to rise”

The knowledge base notes the existence of much smaller models such as Meta’s LLaMA series ranging from 7 billion to 65 billion parameters, which are considerably smaller than GPT-3’s 175 billion parameters [S78]; it does not provide evidence that they outperform GPT-3, but confirms the trend toward smaller models.

Confirmed (high)

“Premium smartphones can now run 10‑billion‑parameter models, AR glasses can handle 1‑2 billion, and PCs are capable of 30‑billion‑parameter models”

Modern smartphones are reported to run 10 billion-parameter multimodal models and glasses can run sub-1 billion-parameter models, matching the claim for phones and glasses [S21]; examples of billions-parameter models running on phones and PCs are also mentioned in the knowledge base [S16], supporting the feasibility of large-scale on-device inference.

Confirmed (high)

“Running AI directly on the device makes the user experience independent of network quality and keeps personal or enterprise data local, addressing privacy concerns”

The knowledge base emphasizes that privacy-sensitive and latency-critical workloads are best kept at the edge, where user data never leaves the device, confirming the privacy and network-independence benefits of on-device AI [S87].

Confirmed (medium)

“Edge devices perform low‑latency inference while the cloud is used for training foundational models and orchestrating large‑scale tasks”

The knowledge base describes Qualcomm’s focus on distributing compute across the network, with inference running on devices and larger workloads handled in the cloud and data-centre environments [S17] and [S23].

External Sources (90)
S1
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — -Shreenivas Chetlapalli, who leads the innovation track for TechMahindra
S2
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — 559 words | 158 words per minute | Duration: 211 secondss Okay, that’s a tough question to ask. I think the most import…
S3
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S4
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S5
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S6
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — 974 words | 171 words per minute | Duration: 341 secondss Hi, everyone. I’m Praveer Kochhar . I’m one of the co -founde…
S7
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Hi, everyone. I’m Praveer Kochhar . I’m one of the co -founders of Kogo AI. We run a full stack private agentic operatin…
S8
https://dig.watch/event/india-ai-impact-summit-2026/from-human-potential-to-global-impact_-qualcomms-ai-for-all-workshop — Hi, everyone. I’m Praveer Kochhar . I’m one of the co -founders of Kogo AI. We run a full stack private agentic operatin…
S9
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — 851 words | 168 words per minute | Duration: 303 secondss Edge AI in 2030, it will be, it will be, I mean, it will be v…
S10
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — -Ritukar Vijay: Works in robotics and autonomous systems. Expertise in edge AI for robotics, fleet orchestration, and ph…
S11
S12
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Speakers:Praveer Kochhar, Siddhika Nevrekar
S13
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — -Madhav Bhargav: Co-founder and CTO at SpotDraft. Expertise in AI for legal applications, creating AI agents for contrac…
S14
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — – Praveer Kochhar- Durga Maladi – Ritukar Vijay- Durga Maladi
S15
https://app.faicon.ai/ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — So I’ll keep it brief. I think what I’m looking forward to with all the conversations here and in other parts of the wor…
S16
Lift-off for Tech Interdependence? / DAVOS 2025 — Examples of running models with billions of parameters on phones, PCs, and cars.
S17
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Malladi advocates for a distributed computing approach that leverages devices, edge cloud, and data centers as needed ra…
S18
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S19
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Additionally, cloud computing can save 70-80% energy through sharing economy. Digital twin technology is highlighted as …
S20
Phaidra’s AI solution aims to optimise data centre energy consumption — Phaidra, a technology company, hasunveileda newartificial intelligence(AI) platform designed to enhance energy managemen…
S21
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Durga argues that AI applications …
S22
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S23
AI for Good Technology That Empowers People — Thank you, Fred. And let me start by saying it’s an absolute pleasure to be sitting with fellow panelists and speakers w…
S24
AI for Good Technology That Empowers People — Thank you, Fred. And let me start by saying it’s an absolute pleasure to be sitting with fellow panelists and speakers w…
S25
Building the AI-Ready Future From Infrastructure to Skills — This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government…
S26
Building the Next Wave of AI_ Responsible Frameworks & Standards — Addressing enterprise and government requirements for complete data sovereignty, Sabharwal detailed the development of e…
S27
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — The cloud versus edge debate is misguided – both will work together as distributed intelligence across cloud, network, a…
S28
Designing Indias Digital Future AI at the Core 6G at the Edge — Distributed intelligence architecture will handle simple inferencing at edge locations while complex multi-agent workflo…
S29
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — Evidence:It will provide context for your agents. Very important. And the network will have that role. It will provide t…
S30
WSIS 2018 – High-level policy statements: concluding session — Mr Deepak Maheshwari, Symantec, facilitated the Moderated High-Level Policy Session 3 – Enabling environment, which focu…
S31
Building Inclusive Societies with AI — This discussion focused on addressing the challenges faced by India’s informal workforce, which comprises 490 million wo…
S32
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — Collaboration, education, and awareness are identified as crucial factors in driving sustainability efforts. However, ch…
S33
Making Climate Tech Count — The panel acknowledged the challenges faced by energy-intensive industries in maintaining competitiveness while pursuing…
S34
Secure Finance Risk-Based AI Policy for the Banking Sector — Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that inn…
S35
Secure Finance Risk-Based AI Policy for the Banking Sector — I send somebody 100 rupees and he gets 120 rupees or 80 rupees or on average you will get 100 rupees. That can’t be the …
S36
The challenges of introducing Generative AI into the marketplace — AI tools have revolutionized the gaming industry! My automated tasks have transformed manual labor and long production p…
S37
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Explanation:There was unexpected consensus that fear about AI is widespread across different age groups and demographics…
S38
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — There was unexpected consensus that fear about AI is widespread across different age groups and demographics, but this f…
S39
Agentic AI in Focus Opportunities Risks and Governance — The discussion maintained a professional, collaborative tone throughout, with industry representatives positioning thems…
S40
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S41
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner highlighted that connectivity challenges extend beyond infrastructure availability – many regions have technical …
S42
WS #187 Bridging Internet AI Governance From Theory to Practice — As Artificial Intelligence (AI) becomes a core part of digital ecosystems, rapidly transforming various sectors, integra…
S43
Building Indias Digital and Industrial Future with AI — Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure. and th…
S44
Artificial intelligence — Content policy Critical infrastructure Network security
S45
Keynote-Lars Reger — Reger argues that true AI democratization, as envisioned by PM Modi’s goal to bring AI to everyone, cannot be achieved t…
S46
AI for Good Technology That Empowers People — The discussion consistently emphasised edge AI’s particular relevance for regions with infrastructure limitations. Multi…
S47
Designing Indias Digital Future AI at the Core 6G at the Edge — And just to add here, you brought a very good point. I think whenever I give a reference, the importance of open ecosyst…
S48
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Evidence:Explained the scenario of varying network connectivity quality and the need for consistent user experience
S49
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S50
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — Evidence:It will provide context for your agents. Very important. And the network will have that role. It will provide t…
S51
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Dr. Singh explains that unlike 5G where AI is an add-on, 6G will have AI integrated into every component. The networks w…
S52
Artificial intelligence (AI) – UN Security Council — Another potential consequence is the creation of regulatory loopholes that could be exploited. A participant emphasized …
S53
Report outlines security threats from malicious use of AI — The Universities of Cambridge and Oxford, the Future of Humanity Institute, Open AI, the Electronic Frontier Foundation …
S54
Military AI and the void of accountability — In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping t…
S55
Copilot policy flaw allows unauthorized access to AI agents — Administrators found that Microsoft Copilot’sintended ‘NoUsersCanAccessAgent’ policy, which is designed to prevent user …
S56
Designing Indias Digital Future AI at the Core 6G at the Edge — Summary:Roy emphasizes that infrastructure challenges, particularly power consumption and site requirements, are the mai…
S57
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — All right, I’m just going to click through this. This is good. This is probably a good indication of why the edge matter…
S58
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Will you still complete itwith one word? Sorry. Taken for granted is one word. Okay. Granted. So it will be business as …
S59
Comprehensive Report: “Factories That Think” Panel Discussion — Johnson emphasizes that robots collect vast amounts of sensitive data through their multiple sensors, requiring careful …
S60
Data localisation, what is it and what are its potential implications? (JAPAN) — Data localization measures, which require data to be stored and processed within a country’s borders, have been implemen…
S61
UNITED NATIONS CONFERENCE ON TRADE AND DEVELOPMENT — From a technological perspective, location of data storage/processing does not ensure data protection or security per …
S62
Contents — Companies that are forced to keep data locally may have less oversight on their operations. 18 Anti-money laundering (A…
S63
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Okay, so we’re reaching towards the later half of the afternoon and hopefully everyone had their lunch and their coffee….
S64
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Premium smartphones can run 10 billion parameter models, PCs can handle 30 billion parameters
S65
AI for Good Technology That Empowers People — Thank you, Fred. And let me start by saying it’s an absolute pleasure to be sitting with fellow panelists and speakers w…
S66
AI for Good Technology That Empowers People — Thank you, Fred. And let me start by saying it’s an absolute pleasure to be sitting with fellow panelists and speakers w…
S67
Designing Indias Digital Future AI at the Core 6G at the Edge — Distributed intelligence architecture will handle simple inferencing at edge locations while complex multi-agent workflo…
S68
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — Evidence:It will provide context for your agents. Very important. And the network will have that role. It will provide t…
S69
Making Climate Tech Count — The discussion highlighted several key challenges: The panel acknowledged the challenges faced by energy-intensive indu…
S70
Building Inclusive Societies with AI — This discussion focused on addressing the challenges faced by India’s informal workforce, which comprises 490 million wo…
S71
WSIS 2018 – High-level policy statements: concluding session — Mr Deepak Maheshwari, Symantec, facilitated the Moderated High-Level Policy Session 3 – Enabling environment, which focu…
S72
Evolving Threat of Poor Governance / DAVOS 2025 — The panel discussion highlighted the complex, multifaceted nature of governance challenges in today’s world. While the p…
S73
The Intelligent Coworker: AI’s Evolution in the Workplace — Several significant challenges emerged from the discussion. The 800 million job gap in emerging markets represents a pre…
S74
Secure Finance Risk-Based AI Policy for the Banking Sector — I send somebody 100 rupees and he gets 120 rupees or 80 rupees or on average you will get 100 rupees. That can’t be the …
S75
Secure Finance Risk-Based AI Policy for the Banking Sector — Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that inn…
S76
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S77
Agentic AI transforms enterprise workflows in 2026 — Enterprise AIentereda new phase as organisations transitioned from simple, prompt-driven tools to autonomous agents capa…
S78
The impact of AI on human impulsivity and health — I also learned about a cool new thing calledLLaMA! It is a big language model from a company called Meta Platforms, and …
S79
The exciting world of AI and its impacts on technology — Meta Platforms is trying to create me so I can be like OpenAI’s ChatGPT. They say it is going to be so powerful that Met…
S80
Quantum Technology company unveils groundbreaking algorithm for compressing Large Language Models — Terra Quantum, a leadingquantum technologycompany,has unveiledTQCompressor, a groundbreaking algorithm specifically desi…
S81
Artificial General Intelligence and the Future of Responsible Governance — The third level examines social implications, particularly effects on human empathy, relationships, and child developmen…
S82
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S83
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Basma Ammari from Meta highlighted their open-source approach to large language models, emphasizing the importance of fa…
S84
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Alexandra Topalian: and then with this issue of trust. There is also a very negative connotation that’s come with artifi…
S85
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — No, I think broadly speaking, I think, especially in the global sub… the cost of AI inference has to drop dramatically…
S86
Inside NeurIPS 2025: How AI research is shifting focus from scale to understanding — For over three decades, the Conference on Neural Information Processing Systems (NeurIPS) has played a pivotal role in s…
S87
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Artificial intelligence | Building confidence and security in the use of ICTs He highlights that privacy‑sensitive and …
S88
Google’s AI Edge Gallery boosts privacy with on-device model use — Google has released an experimental app called AI Edge Gallery, allowing Android users to run AI models directly on their…
S89
Hume AI unveils emotionally intelligent AI voice interface — A New York-based startup, Hume AI, unveiled a groundbreaking AI voice interface, the Empathic Voice Interface (EVI), desig…
S90
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Thank you so much, Rani. Thanks, Sanjeev. Thanks, Rupa. And thanks, Iqbal. We do have more sections coming up. I’d reque…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Durga Malladi
9 arguments, 214 words per minute, 3564 words, 997 seconds
Argument 1
Model size reduction enabling on‑device inference
EXPLANATION
Model sizes are decreasing while quality improves, allowing AI inference to run on edge devices without needing massive models. This trend makes it feasible to deploy AI across a wide range of consumer hardware; the back-of-the-envelope memory arithmetic after this argument makes the point concrete.
EVIDENCE
He referenced GPT’s original 175 billion-parameter model from November 2022 and contrasted it with current 7-8 billion-parameter models that outperform the original, highlighting the dramatic shrinkage of model sizes alongside rising quality, and noted that trillion-parameter models are unnecessary for many consumer use cases [10-13][14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The dramatic shrinkage from the original 175 billion-parameter GPT model to much smaller, higher-quality models is highlighted in [S8], and device capability examples (e.g., smartphones running 10B-parameter models) are noted in [S1].
MAJOR DISCUSSION POINT
Shrinking AI models enable edge deployment
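To ground these numbers, here is a rough, illustrative calculation of the memory needed just to hold model weights at different sizes and precisions. The bytes-per-parameter arithmetic is standard; the specific model/precision pairings are assumptions chosen for illustration, not figures from the session.

```python
# Back-of-the-envelope memory math for on-device LLMs.
# Model/precision pairings below are illustrative assumptions.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the model weights."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for label, params, bits in [
    ("GPT-3-class (175B, fp16)", 175, 16),
    ("7B model, fp16", 7, 16),
    ("7B model, 4-bit quantized", 7, 4),
    ("10B model, 4-bit quantized", 10, 4),
]:
    print(f"{label}: ~{weight_memory_gb(params, bits):.1f} GB of weights")
```

At 4-bit quantization, a 10-billion-parameter model needs roughly 5 GB for weights alone, which is why a premium smartphone with 12-16 GB of RAM can plausibly host it, while a 175B fp16 model (~350 GB) cannot leave the data center.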
Argument 2
Consumer devices (smartphones, AR glasses, PCs) now run 1‑30 B‑parameter models
EXPLANATION
Modern premium smartphones can handle roughly 10 billion‑parameter models, AR glasses can run 1‑2 billion, and PCs are capable of 30 billion‑parameter models, demonstrating practical edge AI in everyday devices.
EVIDENCE
He listed examples of premium smartphones running 10 billion-parameter models, AR glasses handling up to 2 billion, and PCs capable of 30 billion-parameter models, illustrating real-world devices where edge AI is feasible [16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Real-world hardware capabilities are documented in [S1] and further illustrated with examples of phones, PCs and cars handling billions of parameters in [S16].
MAJOR DISCUSSION POINT
Real‑world edge AI hardware
Argument 3
Distributed processing: edge for low‑latency tasks, cloud for training, data‑center for large‑scale inference
EXPLANATION
AI workloads should be allocated across edge, cloud, and data‑center based on latency and scale requirements; the edge handles immediate inference, the cloud trains foundational models, and data‑centers run large‑scale inference workloads (a placement sketch follows this argument).
EVIDENCE
He described a philosophy of distributed AI processing, noting the cloud’s role in training foundational models and the use of on-prem servers with AI accelerator cards for large models, illustrating a hybrid approach that matches use-case needs [60-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A hybrid, heterogeneous compute approach that allocates inference to edge and training to cloud/data-centers is described in [S15] and reinforced by the broader hybrid AI vision in [S17].
MAJOR DISCUSSION POINT
Hybrid AI workload distribution
AGREED WITH
Ritukar Vijay
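A minimal sketch of the placement logic described above, routing each job to edge, cloud, or data-center by latency sensitivity and scale. The thresholds and tiers are illustrative assumptions, not anything the speakers specified.

```python
# Toy workload router for the hybrid edge/cloud/data-center split.
# Thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    latency_budget_ms: float   # how quickly a result is needed
    is_training: bool          # training vs. inference
    batch_size: int            # proxy for scale

def place(job: Job) -> str:
    if job.is_training:
        return "cloud"            # foundational model training
    if job.latency_budget_ms <= 50:
        return "edge"             # immediate, on-device inference
    if job.batch_size > 1000:
        return "data-center"      # large-scale batch inference
    return "edge"

for job in [
    Job("voice assistant turn", 30, False, 1),
    Job("nightly model fine-tune", 1e6, True, 0),
    Job("bulk document summarisation", 5000, False, 50000),
]:
    print(f"{job.name} -> {place(job)}")
```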
Argument 4
Energy‑efficient, high‑performance computing in data‑centers, applying edge lessons to reduce TCO
EXPLANATION
Data‑center designs now prioritize energy efficiency alongside performance, borrowing concepts from edge devices to lower total cost of ownership for AI workloads; the quick arithmetic after this argument shows the scale gap involved.
EVIDENCE
He discussed focusing on energy-efficient high-performance computing, learning from edge to improve data-center efficiency, and contrasted power consumption of smartphones (≈4 W) with data-center racks (≈150 kW) to illustrate the need for efficient designs [97-106][107-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy-efficiency challenges and solutions for high-performance computing and data-centers are discussed in [S18], [S19] and a dedicated AI-driven energy-management platform in [S20].
MAJOR DISCUSSION POINT
Sustainable AI data‑center design
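The contrast drawn between a ~4 W smartphone and a ~150 kW rack can be made concrete with one line of arithmetic; the power figures are the ones quoted in the session.

```python
# Power figures quoted in the session: ~4 W per smartphone,
# ~150 kW per data-center rack.
phone_watts = 4
rack_watts = 150_000

# One rack draws roughly as much power as ~37,500 smartphones,
# the scale gap motivating edge-inspired efficiency work.
print(f"{rack_watts / phone_watts:,.0f} phones' worth of power per rack")
```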
Argument 5
6G will unlock AI’s full potential, with trials linked to the 2028 Olympics and deployments slated for 2029
EXPLANATION
The upcoming 6G cellular generation is expected to enhance AI capabilities, with trials during the 2028 Summer Olympics and commercial deployments planned for 2029.
EVIDENCE
He noted that 6G can unlock AI potential, referenced the 2028 Summer Olympics as a showcase for 6G capabilities, and mentioned technology trials leading to deployments in 2029 [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The link between 6G trials at the 2028 Summer Olympics and first deployments in 2029 is explicitly mentioned in [S2] and [S1].
MAJOR DISCUSSION POINT
6G as AI enabler
AGREED WITH
Ritukar Vijay
Argument 6
On‑device AI guarantees a consistent user experience independent of network connectivity
EXPLANATION
Running AI inference locally means the quality of the AI experience does not degrade when connectivity is poor, avoiding the need to switch between regular and AI‑enhanced interfaces (see the sketch after this argument).
EVIDENCE
Durga explains that “the quality of the AI experience is invariant to the quality of connectivity” and that he “doesn’t want to keep going back and forth between a regular experience and an AI experience just because I don’t have internet connectivity.” [18-21]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge-processing to maintain AI quality regardless of connectivity is highlighted in [S21] and reinforced by the need for local execution in [S2].
MAJOR DISCUSSION POINT
Connectivity‑independent AI experience
AGREED WITH
Ritukar Vijay
DISAGREED WITH
Ritukar Vijay
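A minimal sketch of the connectivity-invariant pattern being argued for, assuming a local runtime already exists on the device; the function names are hypothetical placeholders, not any vendor's API.

```python
# The "connectivity-invariant" pattern: always answer from the
# on-device model, so quality does not change when the network drops.

def on_device_generate(prompt: str) -> str:
    # Stand-in for a local runtime (e.g., a quantized 7B model).
    return f"[local answer to: {prompt}]"

def answer(prompt: str, network_up: bool) -> str:
    # Deliberately no branch on network_up: the experience is
    # identical with or without connectivity, which is the point.
    return on_device_generate(prompt)

print(answer("summarise my notes", network_up=False))
print(answer("summarise my notes", network_up=True))
```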
Argument 7
Processing personal and enterprise data on the edge protects privacy by keeping data local
EXPLANATION
Much of this data is highly personal or sensitive; processing it on‑device or at the edge reduces the need to store it in the cloud, thereby enhancing data privacy and security.
EVIDENCE
He notes that “there’s a large amount of data that happens to be very personal… I might or not be interested in storing the data in the cloud,” suggesting a shift toward local processing. [22-24]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge processing as a means to keep personal and enterprise data on-device, reducing cloud exposure, is argued in [S2] and [S21].
MAJOR DISCUSSION POINT
Edge processing for data privacy
AGREED WITH
Shreenivas Chetlapalli, Madhav Bhargav
Argument 8
AI is becoming the universal user interface that unifies voice, text, video and sensor inputs
EXPLANATION
Durga describes the evolution from mouse to touch to voice, culminating in an AI agent that ingests multimodal data and maps it to applications, effectively acting as the new UI for devices; a toy routing sketch follows this argument.
EVIDENCE
He outlines the UI evolution (mouse, touch, voice) and states that “All of that gets ingested by a single interface, an AI agent” that “distills all the information… maps it to apps”. [31-34][45-46]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The evolution toward voice-centric, multimodal AI interfaces and the need for heterogeneous processors are discussed in [S17]; broader edge-AI vision is echoed in [S23].
MAJOR DISCUSSION POINT
AI as the next‑generation UI
AGREED WITH
Ritukar Vijay, Madhav Bhargav
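A deliberately tiny sketch of the agent-as-UI idea: a single entry point maps a natural-language request onto the right application, instead of the user hunting through app icons. The intent keywords and app names are invented for illustration.

```python
# Toy intent router for the "AI agent as universal UI" idea.
# Keywords and app names are illustrative assumptions.

INTENT_TO_APP = {
    "ride": "ride-hailing app",
    "pay": "payments app",
    "photo": "camera app",
}

def agent_route(utterance: str) -> str:
    for keyword, app in INTENT_TO_APP.items():
        if keyword in utterance.lower():
            return f"invoking {app} for: '{utterance}'"
    return f"no app matched; answering directly: '{utterance}'"

print(agent_route("Book me a ride to the airport"))
print(agent_route("Pay Ramesh 500 rupees"))
```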
Argument 9
Qualcomm’s AI Hub streamlines model onboarding, provides a free device farm, and enables deployment without physical hardware
EXPLANATION
Durga details how developers can pick or create models, receive free cloud‑native device‑farm access, write applications, test them remotely, and deploy to app stores, simplifying the end‑to‑end development workflow (a hypothetical sketch of such a workflow follows this argument).
EVIDENCE
He says “if you go to the Qualcomm AI Hub, it’s a place where any developer can pick a model… Once you do that, we’ll give you free cloud native access to device farm… you write your application… you have the ability to test it without once having the device actually in your hand.” [78-86]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Hub’s role in simplifying model access, offering a cloud-native device farm and enabling remote testing is described in [S2] and further promoted in [S23].
MAJOR DISCUSSION POINT
Developer‑friendly AI platform
AGREED WITH
Siddhika Nevrekar, Moderator
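A hypothetical sketch of the workflow just described: compile a model for a target device, then exercise it on a remotely hosted device farm. This is not the real Qualcomm AI Hub API; the client class and every method name below are invented for illustration only.

```python
# Hypothetical device-farm workflow (invented names, not the AI Hub API).

class DeviceFarmClient:  # hypothetical client
    def compile(self, model_path: str, target: str) -> str:
        print(f"compiling {model_path} for {target}...")
        return "compiled-artifact-id"

    def run_remote(self, artifact_id: str, target: str, sample: str) -> str:
        print(f"running {artifact_id} on a hosted {target} with '{sample}'...")
        return "latency: 12 ms, output ok"

client = DeviceFarmClient()
artifact = client.compile("my_model.onnx", target="premium-smartphone")
print(client.run_remote(artifact, "premium-smartphone", sample="test input"))
```

The design point is that the developer never touches physical hardware: compilation and testing both happen against remotely hosted devices.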
Siddhika Nevrekar
3 arguments, 130 words per minute, 1174 words, 539 seconds
Argument 1
Simplifies onboarding and accelerates on‑device AI development from edge to cloud
EXPLANATION
The Qualcomm AI Hub streamlines the process for developers to access optimized models, test them on a cloud‑native device farm, and deploy AI applications without needing physical hardware, speeding up development across the edge‑to‑cloud continuum.
EVIDENCE
The moderator highlighted that the AI Hub simplifies developer access to optimized models, testing, and deployment, emphasizing its role in accelerating on-device AI from edge to cloud [147-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Hub’s streamlined onboarding workflow and acceleration of edge-to-cloud AI development are outlined in [S2] and [S23].
MAJOR DISCUSSION POINT
Developer tooling acceleration
AGREED WITH
Durga Malladi, Moderator
Argument 2
Innovation speed is limited by the availability of developer tools and platforms
EXPLANATION
Siddhika stresses that without robust tooling, AI innovation cannot progress quickly, reinforcing the need for platforms like the Qualcomm AI Hub.
EVIDENCE
She remarks, “Innovation only moves as fast as the tools behind it,” while introducing the AI Hub as a means to accelerate development. [146-148]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s observation that “innovation only moves as fast as the tools behind it” underscores this point in [S2]; the AI Hub is presented as a tool-centric solution in the same source.
MAJOR DISCUSSION POINT
Tooling as a catalyst for AI innovation
Argument 3
The AI Hub offers a free cloud‑native device farm, letting developers test without owning physical devices
EXPLANATION
Siddhika highlights that the AI Hub provides developers with an IP address to a device farm, enabling remote testing and deployment without the need for on‑hand hardware.
EVIDENCE
She notes that the Hub gives “free cloud native access to device farm, which exists somewhere. You just have an IP address that you log into and you take it from there.” [80-86]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Details of the free cloud-native device farm and remote testing capability are provided in [S2] and reiterated in [S23].
MAJOR DISCUSSION POINT
Remote testing infrastructure for developers
Praveer Kochhar
5 arguments, 171 words per minute, 974 words, 341 seconds
Argument 1
Shadow AI (unauthorized use of cloud AI services) is an underrated risk that threatens data security
EXPLANATION
A large proportion of enterprises use unsanctioned AI tools, exposing sensitive data to cloud services without oversight, which is a hidden but significant security concern; one simple detection approach is sketched after this argument.
EVIDENCE
He defined shadow AI, stated that 78 % of enterprise users engage in it, and described it as a major concern despite being underrated [175-182].
MAJOR DISCUSSION POINT
Unauthorized AI usage risk
DISAGREED WITH
Siddhika Nevrekar
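One common way to operationalize shadow-AI detection is egress monitoring: flag outbound traffic to known AI endpoints that are not on the organisation's sanctioned list. A minimal sketch follows; both domain lists are illustrative assumptions.

```python
# Toy shadow-AI classifier over outbound request domains.
# Both lists are illustrative assumptions, not a real policy.

KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
SANCTIONED = {"api.openai.com"}  # tools the enterprise has approved

def classify_request(domain: str) -> str:
    if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
        return "shadow AI: unsanctioned AI service"
    if domain in SANCTIONED:
        return "sanctioned AI service"
    return "not a known AI endpoint"

for d in ["api.anthropic.com", "api.openai.com", "example.com"]:
    print(d, "->", classify_request(d))
```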
Argument 2
AI development outpaces regulation; innovation must proceed with caution, as societal implications are still unknown
EXPLANATION
Regulatory frameworks lag behind rapid AI advancements, so while innovation should continue, it must be balanced with caution due to uncertain societal impacts.
EVIDENCE
He argued that regulation always plays catch-up, that AI’s speed makes pre-emptive regulation difficult, and emphasized the need for innovation with caution [270-281].
MAJOR DISCUSSION POINT
Regulation lag
Argument 3
Emergent AGI behaviors will appear at the edge, signaling the start of true artificial general intelligence
EXPLANATION
Emerging behaviors in large language models, such as autonomous file creation, indicate the onset of AGI characteristics that could manifest on edge devices.
EVIDENCE
He cited OpenClaw’s trick that added emergent behavior to an LLM, suggesting such capabilities will reach the edge and enable models to create their own learning [392-395].
MAJOR DISCUSSION POINT
Emergent AGI at edge
Argument 4
AI‑driven attention capture may create addictive user behavior, posing societal risks
EXPLANATION
Praveer warns that hyper‑intelligent, self‑adapting systems could become extremely addictive, making it hard for users to disengage and raising concerns about societal impact.
EVIDENCE
He says, “it will be very difficult to keep attention away from a device when you have a hyper intelligent system… it will become extremely addictive.” [352-356]
MAJOR DISCUSSION POINT
Addictive potential of advanced AI
Argument 5
Robots acting as agents can evoke unexpected empathy, raising ethical questions about human‑robot interaction
EXPLANATION
Praveer describes a robot deployed as a beggar that elicited more empathy than a human, suggesting that AI agents can alter human emotional responses and require careful ethical consideration.
EVIDENCE
He recounts seeing a “robotic beggar” that generated empathy and notes the broader “tangential use cases” of AI that affect human perception. [355-359]
MAJOR DISCUSSION POINT
Empathy and ethics in human‑robot interaction
Shreenivas Chetlapalli
5 arguments, 158 words per minute, 559 words, 211 seconds
Argument 1
Excessive data leaving devices increases breach risk; focus should be on minimizing outbound data
EXPLANATION
Allowing large amounts of personal or enterprise data to exit devices heightens security vulnerabilities; strategies should prioritize keeping data local or using synthetic data (a redaction sketch follows this argument).
EVIDENCE
He stated that too much data leaving the device is a risk, advocated for training with less data or synthetic datasets to reduce breaches, and warned that more data leaving leads to more breaches [313-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of data outflow and the privacy benefits of edge processing are discussed in [S2] and [S21].
MAJOR DISCUSSION POINT
Data outflow risk
AGREED WITH
Durga Malladi, Madhav Bhargav
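A minimal sketch of on-device data minimization: redact obvious identifiers before anything leaves the device, so only the sanitized form ever crosses the network. The regex patterns are deliberately crude, illustrative stand-ins for production-grade detection.

```python
# Toy on-device redaction before data egress.
# Patterns are illustrative; real systems need far more robust detection.

import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

record = "Contact Asha at asha@example.com or +91 98765 43210 about the loan."
print(redact(record))  # only this redacted form would leave the device
```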
Argument 2
Set realistic expectations: AI augments work, does not replace jobs
EXPLANATION
Organizations should communicate that AI is intended to enhance human tasks rather than eliminate employment, helping manage expectations and adoption.
EVIDENCE
He emphasized setting expectations that AI will augment work and not take jobs, highlighting the need to correct the misconception that AI replaces jobs [219-222].
MAJOR DISCUSSION POINT
AI as augmentation
Argument 3
Public‑sector banks, state governments, and PSU units are actively adopting AI solutions
EXPLANATION
In India, various public‑sector entities are embracing AI, indicating growing institutional trust and deployment across finance and governance.
EVIDENCE
He referenced adoption by public-sector banks, state governments, and PSU units, noting meetings with ministers and AI centres being set up, showing broad institutional uptake [227-232].
MAJOR DISCUSSION POINT
Institutional AI adoption in India
Argument 4
On‑premise AI platforms (e.g., Orion) enable enterprises to keep data processing local, addressing data‑sovereignty concerns
EXPLANATION
Shreenivas explains that their Orion platform is built for on‑prem deployment, allowing customers to process data within their own premises rather than sending it to the cloud, which is important for privacy and regulatory compliance.
EVIDENCE
He states, “the AI platform has been built for on-prem… many requirements… process things in my own premises rather than taking it to the cloud.” [254-255]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
On-premise deployments with AI accelerator cards for local processing are highlighted in [S2]; the broader heterogeneous compute strategy supporting data sovereignty is described in [S17].
MAJOR DISCUSSION POINT
Local/on‑prem AI for data sovereignty
Argument 5
Local processing is preferred in India to reduce data outflow and meet business needs
EXPLANATION
He argues that for Indian enterprises, processing data locally (or on‑prem) is advantageous because it limits data leaving the device and aligns with business and regulatory requirements.
EVIDENCE
He says, “local is the first option but for India data center makes business local… we have seen large companies ask to solve things on their own desktops or locally.” [254-255]
MAJOR DISCUSSION POINT
Preference for local AI processing in India
Madhav Bhargav
4 arguments, 180 words per minute, 1182 words, 392 seconds
Argument 1
Early attempt to train a separate model per client failed; shifted to capturing usage via plugins to provide grounded answers from a shared model
EXPLANATION
Initially, SpotDraft tried building individual models for each customer, which proved unsustainable; they pivoted to using a plugin that captures user interactions, enabling a shared model to deliver context‑aware answers (sketched after this argument).
EVIDENCE
He recounted the early strategy of per-customer model training, the decision to abandon it, and the development of a Word plugin that captures usage to feed a shared model, forming the basis of current capabilities [195-209].
MAJOR DISCUSSION POINT
Scalable model strategy
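A minimal sketch of the shared-model-plus-captured-context pattern described above: one model serves every client, and per-client grounding comes from usage snippets a plugin has captured. The snippets and the naive keyword-free retrieval are illustrative assumptions, not SpotDraft's actual design.

```python
# Toy "one shared model, per-client context" pattern.
# Captured snippets are illustrative assumptions.

CAPTURED_USAGE = {
    "acme": ["Acme contracts cap liability at 2x fees."],
    "globex": ["Globex requires 60-day termination notice."],
}

def grounded_answer(client: str, question: str) -> str:
    # Retrieve whatever the plugin captured for this client...
    context = " ".join(CAPTURED_USAGE.get(client, []))
    # ...and hand it, with the question, to one shared model.
    return f"[shared model] context: '{context}' question: '{question}'"

print(grounded_answer("acme", "What is our liability cap?"))
```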
Argument 2
Emergent AGI behaviors will appear at the edge, signaling the start of true artificial general intelligence
EXPLANATION
He predicts that by 2030 AI will be embedded in virtually every device, with chips containing AGI capabilities, marking the emergence of general intelligence at the edge.
EVIDENCE
He stated that AI will be ubiquitous, that anything with a chip will have AGI, and that AI in 2030 will be AGI, indicating a vision of pervasive general intelligence [391-395].
MAJOR DISCUSSION POINT
AGI ubiquity
Argument 3
AI can automate knowledge work by delivering context‑aware answers from internal data, dramatically shortening onboarding and decision‑making
EXPLANATION
Madhav describes how SpotDraft’s AI, trained on internal data, provides grounded answers that help lawyers quickly understand contract policies, reducing the time needed for onboarding and improving productivity.
EVIDENCE
He recounts that the internal prototype “threw up” insights for lawyers, leading to a “knowledge work” system that “captures what contracts have certain clauses” and speeds up onboarding. [333-339]
MAJOR DISCUSSION POINT
AI‑augmented knowledge work in legal domain
Argument 4
Consumer‑grade AI assistants (e.g., WhatsApp integration) can dramatically boost personal productivity but raise data‑volume concerns
EXPLANATION
Madhav shares his experience with a personal AI assistant deployed on WhatsApp that saved him time, while also noting the challenge of managing the large amount of data such assistants generate.
EVIDENCE
He says, “I deployed it on my WhatsApp and it started sending messages… it was a little bit scary, but also saved me a bunch of time,” and later mentions the need to switch it off because of too much data. [341-345][346-348]
MAJOR DISCUSSION POINT
Productivity gains and data concerns from consumer AI assistants
Ritukar Vijay
5 arguments, 168 words per minute, 851 words, 303 seconds
Argument 1
Cloud handles fleet orchestration while edge performs autonomous navigation; a balanced split solves real‑world robot use cases
EXPLANATION
For robotics, the cloud is used for managing fleets, whereas edge devices execute time‑critical navigation, illustrating a hybrid approach that meets latency and scalability needs (see the sketch after this argument).
EVIDENCE
He explained that their architecture uses cloud for fleet orchestration and edge for autonomous navigation, describing the split as essential for robot deployments [237-238].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The hybrid split of cloud-based fleet management and edge-based real-time navigation aligns with the distributed compute model presented in [S15] and the heterogeneous AI approach in [S17].
MAJOR DISCUSSION POINT
Edge‑cloud split in robotics
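A minimal sketch of that split: coarse, latency-tolerant fleet assignment runs in the cloud, while fine, latency-critical navigation runs on each robot. All names, tasks, and behaviors are illustrative assumptions.

```python
# Toy cloud/edge split for a robot fleet.
# Names and tasks are illustrative assumptions.

def cloud_orchestrator(fleet: list[str], tasks: list[str]) -> dict[str, str]:
    # Coarse-grained and latency-tolerant: runs in the cloud.
    return dict(zip(fleet, tasks))

def edge_navigation_step(robot: str, obstacle_ahead: bool) -> str:
    # Fine-grained and latency-critical: runs on the robot itself.
    return f"{robot}: swerve" if obstacle_ahead else f"{robot}: continue"

assignments = cloud_orchestrator(["bot-1", "bot-2"], ["aisle 3", "loading dock"])
print(assignments)
print(edge_navigation_step("bot-1", obstacle_ahead=True))
```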
Argument 2
Edge AI will be ubiquitous and taken for granted, embedded in everyday objects
EXPLANATION
By 2030, edge AI will be so common that users will assume its presence, similar to connectivity today, appearing in all devices.
EVIDENCE
He described edge AI as something that will be taken for granted, everywhere, like connectivity, and a default part of devices, emphasizing its future ubiquity [376-384].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The expectation that edge AI will become pervasive and assumed, similar to connectivity, is expressed in [S17] and reinforced by the forward-looking edge AI narrative in [S23].
MAJOR DISCUSSION POINT
Ubiquitous edge AI
Argument 3
Continuous connectivity is a critical hardware constraint for autonomous robots; lack of remote access hampers maintenance and operation
EXPLANATION
Ritukar points out that without reliable connectivity, robots cannot be accessed for scheduled or emergency maintenance, making connectivity a top hardware concern.
EVIDENCE
He states, “one of the biggest hardware constraints is if your entirety of the system is without any connectivity… you cannot access robots remotely… it keeps us awake that the robots should not go in silos or isolated where we cannot reach them.” [304-308]
MAJOR DISCUSSION POINT
Connectivity as a hardware constraint for robotics
DISAGREED WITH
Durga Malladi
Argument 4
Future AI deployments will benefit from 6G connectivity, which can support high‑bandwidth, low‑latency edge AI for robotics
EXPLANATION
Ritukar argues that 6G would improve connectivity for robots operating in remote environments, unlocking new possibilities for edge AI applications.
EVIDENCE
He mentions that “if it is 6G it’s better… it opens up a lot more possibilities” when discussing robots in a mining site with limited internet. [248-252]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of 6G in unlocking AI capabilities, especially for edge robotics, is discussed in [S2] and [S1].
MAJOR DISCUSSION POINT
6G as enabler for edge AI in robotics
Argument 5
Agents, rather than hardware alone, are the core of autonomous systems, emphasizing software‑centric AI
EXPLANATION
Ritukar stresses that robots are essentially agents, highlighting the importance of the software layer over the physical hardware in autonomous solutions.
EVIDENCE
He says, “Robots are the agents… agents,” and later answers the rapid-fire round with “agents, yeah”. [288-291][290-291]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The software-centric, agent-focused view of autonomous systems is highlighted in the heterogeneous compute discussion in [S17].
MAJOR DISCUSSION POINT
Agent‑centric view of autonomous systems
Moderator
2 arguments, 135 words per minute, 156 words, 69 seconds
Argument 1
Developer tools are essential for accelerating inclusive AI at scale
EXPLANATION
The moderator emphasizes that the speed of AI innovation depends on the availability of effective developer tools, and that enabling developers is critical for inclusive AI deployment.
EVIDENCE
He states that “As we talk about inclusive AI at scale, enabling developers is critical. Innovation only moves as fast as the tools behind it.” [145-146]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s statement that “innovation only moves as fast as the tools behind it” and the emphasis on developer enablement are captured in [S2].
MAJOR DISCUSSION POINT
Developer enablement for inclusive AI
AGREED WITH
Durga Malladi, Siddhika Nevrekar
Argument 2
The Qualcomm AI Hub provides a free cloud‑native device farm for testing without physical hardware
EXPLANATION
The moderator highlights that the AI Hub gives developers an IP address to access a cloud‑native device farm, allowing them to test and validate applications without needing the actual device in hand.
EVIDENCE
He introduces the AI Hub by saying it “simplifies how developers access optimized models, test and deploy high-performance on-device AI from edge to cloud” and that it offers “free cloud native access to device farm”. [147-148][80-86]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Hub’s free cloud-native device farm and remote testing capability are described in [S2] and reiterated in [S23].
MAJOR DISCUSSION POINT
Free device‑farm for developer testing
Agreements
Agreement Points
Developer tools and the Qualcomm AI Hub accelerate on‑device AI development and lower barriers for creators
Speakers: Durga Malladi, Siddhika Nevrekar, Moderator
Qualcomm’s AI Hub streamlines model onboarding, provides a free device farm, and enables deployment without physical hardware
Simplifies onboarding and accelerates on‑device AI development from edge to cloud
Developer tools are essential for accelerating inclusive AI at scale
All three highlighted that the Qualcomm AI Hub (or similar developer platforms) simplifies access to models, offers a free cloud-native device farm for testing, and is crucial for speeding up on-device AI across the edge-to-cloud continuum, reducing the need for physical devices [78-86][145-148][147-148].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with Qualcomm’s AI democratization agenda that stresses edge devices as essential for universal AI access [S45] and is supported by workshop evidence highlighting the importance of developer tools for edge AI adoption [S57].
Edge AI delivers a consistent user experience that does not depend on network connectivity
Speakers: Durga Malladi, Ritukar Vijay
On‑device AI guarantees a consistent user experience independent of network connectivity
Continuous connectivity is a critical hardware constraint for autonomous robots; lack of it hampers operation and maintenance
Durga emphasized that AI quality remains invariant to connectivity, while Ritukar stressed that reliable connectivity is essential for robot operation, showing a shared view that connectivity is pivotal for effective edge AI deployment [18-21][304-308].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for a consistent experience despite variable connectivity is documented in discussions on heterogeneous compute for democratizing AI access [S48] and reinforced by case studies where edge AI mitigates connectivity gaps in low-infrastructure regions [S46].
AI workloads should be distributed across edge, cloud, and data‑center according to latency and scale needs
Speakers: Durga Malladi, Ritukar Vijay
Distributed processing: edge for low‑latency tasks, cloud for training, data‑center for large‑scale inference
Cloud handles fleet orchestration while edge performs autonomous navigation
Both described a hybrid architecture where the edge handles real-time inference, the cloud manages large-scale coordination or training, and data-centers provide high-performance inference, underscoring a unified view of workload distribution [60-66][237-238].
POLICY CONTEXT (KNOWLEDGE BASE)
Distributed workload placement is a core principle in the heterogeneous compute framework that balances latency, scale and resource constraints [S48] and is echoed in 6G visions that embed AI across the network continuum [S51].
The upcoming 6G generation will unlock new AI capabilities, especially for edge and robotics
Speakers: Durga Malladi, Ritukar Vijay
6G will unlock AI’s full potential, with trials linked to the 2028 Olympics and deployments slated for 2029
6G improves connectivity for robots in remote sites, opening many more possibilities
Durga linked 6G to future AI breakthroughs and Olympic trials, while Ritukar highlighted 6G’s role in enhancing robot connectivity, showing consensus on 6G as a key enabler for advanced edge AI [124-129][248-252].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses project 6G as an AI-native network enabling advanced edge and robotic applications [S49][S51], and policy discussions stress the need for open, interoperable ecosystems to realise these capabilities [S47].
Keeping data local/on‑premise improves privacy and reduces security risks
Speakers: Durga Malladi, Shreenivas Chetlapalli, Madhav Bhargav
Processing personal and enterprise data on the edge protects privacy by keeping data local
Excessive data leaving devices increases breach risk; focus should be on minimizing outbound data
Consumer‑grade AI assistants generate large volumes of data, raising concerns about data overload
Durga noted the desire to avoid storing personal data in the cloud, Shreenivas warned that too much data leaving devices heightens breach risk, and Madhav described data-volume challenges from personal AI assistants, all converging on the need for local processing to safeguard privacy [22-24][313-319][341-345].
POLICY CONTEXT (KNOWLEDGE BASE)
Data-localisation policies and privacy frameworks argue that on-premise processing limits exposure, as seen in industry guidelines for robot data handling [S59] and UNCTAD’s assessment of localisation’s role in privacy protection [S61].
Emergent AGI‑like behaviours are expected to appear at the edge, signalling a shift toward pervasive intelligence
Speakers: Praveer Kochhar, Madhav Bhargav
Emergent AGI behaviours will appear at the edge, signalling the start of true artificial general intelligence
AI will be ubiquitous; any chip will contain AGI capabilities
Praveer cited emergent behaviours in LLMs as a sign of AGI reaching the edge, while Madhav projected that AGI will become ubiquitous in chips, reflecting a shared expectation of near-future general-intelligence deployment [392-395][391-395].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance reports on agentic AI highlight concerns about emergent autonomous behaviours and call for cautious oversight as such capabilities migrate to edge devices [S39][S53].
Edge AI will become ubiquitous and taken for granted in everyday devices
Speakers: Durga Malladi, Ritukar Vijay, Madhav Bhargav
AI is becoming the universal user interface that unifies voice, text, video and sensor inputs
Edge AI will be taken for granted, everywhere, like connectivity today
AI will be ubiquitous; anything with a chip will have AGI
Durga described AI as the next-generation UI, Ritukar predicted edge AI will be a default, and Madhav envisioned AI embedded in all chips, collectively indicating consensus that edge AI will be pervasive and assumed in the near future [31-34][45-46][376-384][391-395].
POLICY CONTEXT (KNOWLEDGE BASE)
Workshop participants forecast edge AI becoming a default feature by 2030, reflecting a shift to ubiquitous deployment [S58] and broader industry consensus on its inevitability [S57].
Similar Viewpoints
Both stress that limiting data outflow from devices enhances privacy and security, advocating for edge or on‑premise processing to mitigate breach risks [22-24][313-319].
Speakers: Durga Malladi, Shreenivas Chetlapalli
Processing personal and enterprise data on the edge protects privacy by keeping data local
Excessive data leaving devices increases breach risk; focus should be on minimizing outbound data
Both highlight hidden data‑security threats arising from uncontrolled data movement—Praveer via shadow AI, Shreenivas via excessive data outflow—underscoring the need for tighter governance of enterprise data flows [175-182][313-319].
Speakers: Praveer Kochhar, Shreenivas Chetlapalli
Shadow AI (unauthorized use of cloud AI services) is an underrated risk that threatens data security
Excessive data leaving devices increases breach risk; focus should be on minimizing outbound data
Both acknowledge powerful societal impacts of AI—Madhav on productivity gains and Praveer on potential addiction—indicating a shared awareness of AI’s double‑edged influence on human behaviour [333-339][352-356].
Speakers: Madhav Bhargav, Praveer Kochhar
AI can automate knowledge work by delivering context‑aware answers from internal data, dramatically shortening onboarding and decision‑making
AI‑driven attention capture may create addictive user behavior, posing societal risks
Unexpected Consensus
Hidden data‑security risks from uncontrolled AI usage
Speakers: Praveer Kochhar, Shreenivas Chetlapalli
Shadow AI (unauthorized use of cloud AI services) is an underrated risk that threatens data security
Excessive data leaving devices increases breach risk; focus should be on minimizing outbound data
While Praveer frames the risk as unauthorized cloud AI tools (shadow AI) and Shreenivas frames it as too much data exiting devices, both converge on the unexpected consensus that unseen data flows, whether via shadow services or bulk export, pose significant security challenges for enterprises [175-182][313-319].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent security analyses expose risks such as unauthorized agent access and malicious AI exploitation, underscoring the hidden threat landscape [S55][S53][S44].
Overall Assessment

The discussion shows strong alignment among speakers on the necessity of developer‑friendly platforms (AI Hub), hybrid edge‑cloud‑data‑center architectures, the strategic role of 6G, and the imperative to keep data local for privacy. There is also consensus that edge AI will become ubiquitous and that emergent AGI behaviours are on the horizon, while concerns about hidden data‑security risks and societal impacts of AI are shared across participants.

High consensus on technical and infrastructural directions (developer tools, hybrid processing, 6G, edge ubiquity) combined with moderate consensus on societal and governance challenges, indicating a cohesive vision that can inform policy and industry roadmaps.

Differences
Different Viewpoints
Role of network connectivity for AI systems
Speakers: Durga Malladi, Ritukar Vijay
On‑device AI guarantees a consistent user experience independent of network connectivity
Continuous connectivity is a critical hardware constraint for autonomous robots; lack of remote access hampers maintenance and operation
Durga argues that running AI inference on-device makes the user experience invariant to network quality, eliminating the need for constant connectivity [18-21]. Ritukar counters that for autonomous robots, uninterrupted connectivity is essential for remote monitoring, scheduled and emergency maintenance, and fleet orchestration, making it a top hardware concern [304-308]. This reflects a disagreement on whether AI deployments should rely on connectivity or be fully offline.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates note that connectivity challenges extend beyond infrastructure to content relevance and incentives, influencing AI system design [S41] and prompting edge solutions to mitigate gaps [S46].
What constitutes the most pressing challenge for enterprise AI adoption
Speakers: Praveer Kochhar, Siddhika Nevrekar
Shadow AI (unauthorized use of cloud AI services) is an underrated risk that threatens data security
Innovation only moves as fast as the tools behind it
Praveer highlights the security and governance risk posed by widespread, unsanctioned use of cloud AI tools (shadow AI) as a critical but underrated issue for enterprises [175-182]. Siddhika, representing the moderator’s view, stresses that the primary bottleneck is the availability of developer tools and platforms, not security risk, suggesting a different priority for accelerating AI adoption [146-148].
POLICY CONTEXT (KNOWLEDGE BASE)
Summaries of AI adoption forums reveal divergent views on risk prioritisation, implementation hurdles and capability gaps as the main sources of disagreement [S40].
Unexpected Differences
Optimistic UI vision vs societal risk of AI agents
Speakers: Durga Malladi, Praveer Kochhar
AI is becoming the universal user interface that unifies voice, text, video and sensor inputs
AI‑driven attention capture may create addictive user behavior, posing societal risks
Durga paints AI agents as the next seamless, multimodal user interface that will simplify interaction with devices [31-34][45-46]. Praveer, however, warns that such powerful agents could become attention-capturing, addictive systems that reshape user behavior and raise ethical concerns [352-356]. The clash between a purely beneficial UI narrative and a cautionary societal impact perspective was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on agentic AI balance optimistic UI potentials with societal risk considerations, as reflected in industry-government dialogues emphasizing cautious governance [S39][S53].
Connectivity priority versus data‑outflow minimization in edge deployments
Speakers: Ritukar Vijay, Shreenivas Chetlapalli
Continuous connectivity is a critical hardware constraint for autonomous robots; lack of remote access hampers maintenance and operation
Too much data leaving the device is a risk; focus should be on minimizing outbound data
Ritukar emphasizes that reliable, always-on connectivity is essential for robot operation and remote management [304-308]. Shreenivas, focusing on privacy, argues that limiting data transmission from devices is paramount to avoid breaches, implicitly favoring isolated or low-communication edge solutions [313-314]. The divergent emphasis, connectivity versus data minimization, was not an obvious point of contention given that both discuss edge strategies.
POLICY CONTEXT (KNOWLEDGE BASE)
Edge strategies often trade connectivity reliance against minimizing data transfer, a tension highlighted in analyses of edge compute for privacy and cost efficiency [S46][S48].
Overall Assessment

The discussion revealed moderate disagreement centered on the role of connectivity versus offline edge AI, the prioritization of security risks (shadow AI) versus tooling, and differing views on the societal impact of AI agents. While participants shared common goals—enhancing privacy, enabling hybrid AI workloads, and accelerating developer adoption—their preferred pathways diverged, highlighting the need for balanced strategies that address both technical and ethical dimensions.

Medium: The disagreements are substantive but not irreconcilable, indicating that consensus can be reached through integrated approaches that combine robust connectivity where needed, strong privacy safeguards, and responsible AI design.

Partial Agreements
Both aim to safeguard personal and enterprise data. Durga proposes keeping the data on‑device/edge to avoid dependence on connectivity and cloud storage [22-24]. Shreenivas emphasizes reducing data outflow and using synthetic data to lower breach risk, advocating for local processing as a means to that end [313-319]. Their goals align, but the methods differ.
Speakers: Durga Malladi, Shreenivas Chetlapalli
Processing personal and enterprise data on the edge protects privacy by keeping data local
Excessive data leaving devices increases breach risk; focus should be on minimizing outbound data
Both endorse a hybrid AI architecture that leverages edge and cloud resources. Durga outlines a general philosophy of allocating inference to edge, training to cloud, and large‑scale inference to data centers [60-66]. Ritukar provides a concrete robotics example where the cloud manages fleet orchestration and the edge handles time‑critical navigation, illustrating a specific implementation of the same hybrid principle [237-238].
Speakers: Durga Malladi, Ritukar Vijay
Distributed processing: edge for low‑latency tasks, cloud for training, data‑center for large‑scale inference
Cloud handles fleet orchestration while edge performs autonomous navigation; a balanced split solves real‑world robot use cases
Both agree that tooling is key to scaling AI. The moderator stresses the overarching need for developer tools to drive inclusive AI [145-146]. Durga details Qualcomm’s AI Hub as a concrete solution that offers model access, a cloud‑native device farm, and remote testing, directly addressing the moderator’s call for better tools [78-86].
Speakers: Moderator, Durga Malladi
Developer tools are essential for accelerating inclusive AI at scale
Qualcomm’s AI Hub streamlines model onboarding, provides a free device farm, and enables deployment without physical hardware
Takeaways
Key takeaways
AI model sizes are shrinking while quality improves, making on‑device inference feasible on smartphones, AR glasses, PCs, and other edge devices.
A hybrid AI architecture that distributes workloads across edge, cloud, and data‑center optimizes latency, energy efficiency, and total cost of ownership.
Qualcomm’s AI Hub streamlines developer onboarding, model ingestion, cloud‑native device‑farm testing, and deployment without needing physical hardware.
Enterprise “shadow AI” (unauthorized use of cloud AI services) is a significant, under‑recognized security risk; minimizing outbound data from devices is crucial.
Early attempts to train separate models per client in legal AI failed; capturing usage via plugins enables a shared model to provide grounded, customer‑specific answers.
AI adoption in India is progressing, especially in public‑sector banks and government initiatives, but expectations must be set that AI augments rather than replaces jobs.
Robotics workloads benefit from a split: edge handles real‑time navigation, cloud manages fleet orchestration; connectivity (potentially 6G) remains a key enabler.
6G is expected to unlock new AI capabilities, with trials tied to the 2028 Summer Olympics and broader deployments planned for 2029.
Regulation is lagging behind rapid AI innovation; caution is needed but over‑regulation could stifle progress.
By 2030 edge AI is expected to be ubiquitous and taken for granted, with emergent AGI‑like behaviors appearing at the edge.
Resolutions and action items
Qualcomm will continue to expand the AI Hub, offering free cloud‑native device farms and model‑ingestion pipelines for developers.
Qualcomm plans 6G trials linked to the 2028 Olympics and aims for initial commercial deployments in 2029.
Panel participants offered contact points for follow‑up (e.g., Ottonomy for robotics, SpotDraft for legal AI, TechMindra for AI platforms).
Unresolved issues
How to effectively govern and mitigate shadow AI usage within enterprises.
Developing regulatory frameworks that keep pace with fast‑moving AI capabilities.
Optimal strategies for balancing data privacy with the need for large training datasets (synthetic data vs. real data).
Hardware constraints such as ensuring continuous connectivity for remote robots and managing power/thermal limits across devices and data‑centers.
The timeline and practical pathways for achieving true AGI or emergent behaviors at the edge.
Suggested compromises
Adopt a split processing model: run latency‑sensitive tasks on‑device/edge, and offload heavy training or complex inference to cloud or data‑center.
Minimize outbound data from devices to reduce breach risk while using synthetic data or on‑device learning to maintain model performance.
Leverage emerging 6G connectivity to enhance edge‑cloud integration without abandoning existing 5G infrastructure.
Thought Provoking Comments
Model sizes are coming down while the model quality continues to increase… this is the equivalent of an AI law that seems to be emerging as far as models themselves are concerned.
Highlights a fundamental shift in AI economics – smaller, cheaper models can now deliver higher performance, which underpins the feasibility of edge AI.
Set the technical foundation for the whole talk, leading Durga to explain why edge devices can now run sophisticated models and prompting the audience to consider new use‑cases beyond cloud‑only inference.
Speaker: Durga Malladi
Running AI on‑device makes the quality of the AI experience invariant to the quality of connectivity that those devices have to the back end of the network.
Emphasizes a core advantage of edge AI: reliability and privacy regardless of network conditions, a point often overlooked in hype‑driven discussions.
Shifted the conversation from pure performance to user experience and data‑privacy concerns, paving the way for later remarks about personal data and the need for on‑device processing.
Speaker: Durga Malladi
Imagine a voice‑first AI agent that replaces the clutter of apps – the agent maps your request to the right services and uses a personal knowledge graph.
Proposes a radical UI paradigm where AI becomes the universal interface, moving beyond touch and keyboards to a conversational, multimodal experience.
Introduced the notion of AI agents as the new UI, which later resonated with panelists discussing agents, autonomy, and the future of human‑machine interaction.
Speaker: Durga Malladi
Byte’s new phone in China is designed AI‑first – there are no visible apps, only an agent that runs everything behind the scenes.
Provides a concrete, disruptive product example that embodies the AI‑first, agent‑centric vision, turning abstract ideas into a tangible reality.
Validated the earlier AI‑agent UI concept with a real‑world case, sparking interest and reinforcing the feasibility of moving beyond traditional app ecosystems.
Speaker: Durga Malladi
Shadow AI – about 78 % of enterprise users are using unauthorized AI tools on critical data. It’s an underrated pain point that drives efficiency but creates huge risk.
Brings attention to a hidden governance issue in enterprises, linking AI adoption to security, compliance, and data‑sovereignty concerns.
Redirected the panel from pure technology talk to risk management, prompting follow‑up comments about data leakage, regulation, and the need for on‑premise solutions.
Speaker: Praveer Kochhar
Our early failure was trying to train a separate model for each customer; we pivoted to capture usage via a Word plugin, letting a single model learn from aggregated data.
Shows a real‑world lesson about product‑market fit and data strategy, illustrating how a seemingly failed approach can become a strength.
Provided a narrative of learning from failure that resonated with other founders, influencing the discussion on data constraints and the importance of flexible model deployment.
Speaker: Madhav Bhargav
The most important thing for AI adoption in India is setting realistic expectations and dispelling the myth that AI will take away jobs.
Addresses cultural and societal barriers to AI uptake, highlighting that perception management is as crucial as technology.
Broadened the conversation beyond technical details to include adoption psychology, leading to mentions of public‑sector pilots and government involvement.
Speaker: Shreenivas Chetlapalli
Regulation will always play catch‑up; the biggest fear is the societal impact and addiction from hyper‑intelligent, self‑adapting agents.
Raises ethical and long‑term governance concerns, reminding the audience that rapid AI progress carries profound societal risks.
Deepened the dialogue on responsible AI, influencing later remarks about control mechanisms for autonomous agents and the need for guardrails.
Speaker: Praveer Kochhar
The biggest hardware constraint that keeps me up at night is lack of continuous connectivity for robots – without remote access they become isolated silos.
Highlights a practical deployment challenge that bridges the edge‑cloud discussion with real‑world operational requirements.
Re‑focused the panel on infrastructure reliability, linking back to earlier points about edge vs cloud processing and the necessity of seamless connectivity.
Speaker: Ritukar Vijay
Our wow moment was when an AI‑driven review of our own internal contracts surfaced hidden policy clauses, convincing even skeptical lawyers of the technology’s value.
Provides a compelling, concrete success story that demonstrates AI’s ability to uncover hidden knowledge, turning abstract benefits into measurable outcomes.
Reinforced the argument for AI in knowledge work, inspiring other panelists to share similar breakthrough moments and underscoring the business impact of AI agents.
Speaker: Madhav Bhargav
Overall Assessment

The discussion was driven forward by a handful of pivotal insights that moved the conversation from abstract hype to concrete reality. Durga’s technical framing of shrinking model sizes and on‑device reliability established the feasibility of edge AI, while her vision of AI‑first agents reshaped the UI narrative. Real‑world examples—Byte’s AI‑first phone and Madhav’s contract‑analysis breakthrough—grounded those ideas. The panel’s turn toward governance and risk (shadow AI, regulatory lag, connectivity constraints) introduced a critical counterbalance, ensuring the dialogue addressed both opportunity and responsibility. Collectively, these comments created a dynamic flow: technical possibilities opened, practical challenges were raised, and ethical considerations were woven in, resulting in a nuanced, forward‑looking conversation about the future of AI across edge, cloud, and enterprise.

Follow-up Questions
How should the industry respond to the rapid improvement in model quality while model sizes shrink, and what new applications can be built on these smaller, more capable models?
Durga highlighted the trend of decreasing model sizes with increasing quality and asked what should be done next, indicating a need for concrete strategies and use‑case development.
Speaker: Durga Malladi
What are effective architectures and orchestration strategies for distributing AI processing across devices, edge, cloud, and data‑center environments?
He described a hybrid AI vision but left open how to optimally split workloads, a key technical challenge for developers.
Speaker: Durga Malladi
What are the risks, governance challenges, and mitigation strategies for ‘shadow AI’—unauthorized AI tool usage that exposes enterprise data?
Praveer identified shadow AI as an underrated pain point, suggesting the need for research into security, compliance, and policy frameworks.
Speaker: Praveer Kochhar
How can enterprises avoid building separate models for each customer and instead leverage shared, scalable models while still delivering personalized results?
Madhav recounted an early failure of per‑customer model training, pointing to a research area in model generalization and personalization techniques.
Speaker: Madhav Bhargav
What is the ‘special ingredient’ that drives successful AI adoption in India, especially regarding expectations and job‑displacement concerns?
He was asked to identify unique factors for Indian AI uptake, indicating a need for deeper market‑specific studies.
Speaker: Shreenivas Chetlapalli
How much trust do Indian enterprises and public sector entities place in AI, and what factors influence that trust?
His discussion about adoption in banks and government suggests further investigation into trust metrics and barriers.
Speaker: Shreenivas Chetlapalli
What hardware constraints, particularly regarding reliable connectivity for remote robot management, keep teams up at night?
He emphasized the challenge of maintaining constant connectivity for fleets of robots, highlighting a research need in robust communication solutions.
Speaker: Ritukar Vijay
Is excessive data egress from devices more dangerous than insufficient data flow, and how should privacy‑risk trade‑offs be managed?
He raised the data‑leakage risk, pointing to a need for studies on optimal data‑flow policies and privacy‑preserving techniques.
Speaker: Shreenivas Chetlapalli
How can high‑quality models be trained with limited data, possibly using synthetic data generation, to reduce data‑leakage risks?
He suggested exploring synthetic data and low‑data training methods, an area ripe for research.
Speaker: Shreenivas Chetlapalli
What are the societal impacts and potential addiction risks of hyper‑intelligent, self‑adapting AI agents?
He expressed fear about AI’s influence on attention and behavior, indicating a need for interdisciplinary research on ethics and user well‑being.
Speaker: Praveer Kochhar
How will 6G networks enable new AI capabilities, and what are the realistic timelines for trials (e.g., 2028 Olympics) and deployments (e.g., 2029)?
Durga linked AI progress to upcoming 6G developments, suggesting a research agenda on network‑AI co‑design and rollout planning.
Speaker: Durga Malladi
What are the energy‑efficiency challenges specific to AI inference versus training in data‑center environments, and how can they be addressed?
He noted that inference chips differ from training GPUs and highlighted the need for energy‑focused hardware research.
Speaker: Durga Malladi
How does innovative memory architecture improve the decode stage of inference, and what further architectural advances are needed?
He described memory‑bandwidth limits in the decode stage, indicating a technical research direction in memory system design.
Speaker: Durga Malladi
How can fraud‑call detection using Agile LMs be scaled and refined for broader deployment?
He mentioned ongoing work on fraud‑call detection, pointing to a need for further study on model effectiveness and deployment strategies.
Speaker: Shreenivas Chetlapalli
What mechanisms and guardrails are needed to control autonomous agents that operate on personal data, preventing misuse while preserving utility?
The concern about autonomous agents on personal data highlights a research gap in privacy‑preserving autonomy and user control.
Speaker: Siddhika Nevrekar (prompted by panel discussion)
How will AI‑driven UI transformations (voice, multimodal agents) be designed to provide seamless, context‑aware experiences across devices?
Durga described a future where AI becomes the primary UI, suggesting research into interaction design, context modeling, and cross‑device integration.
Speaker: Durga Malladi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

India’s AI Leap Policy to Practice with AIP2

India’s AI Leap Policy to Practice with AIP2

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel explored how AI can be diffused across the Global South, launching the Global South AI Diffusion Playbook as an implementation guide covering infrastructure, data-trust, institutions, skills and market shaping [42-44]. Doreen Bogdan-Martin stressed that AI must be tailored rather than one-size-fits-all, while keeping a human-centred, inclusive approach, citing India’s Bhashini platform and AI-powered public services as examples [2-9][12-14]. She outlined three pillars as essential to move from policy moonshots to practice: solutions (connectivity infrastructure), skills (education and upskilling), and standards (interoperability and trust) [13-20][26-28][31-33]. Specific initiatives she mentioned include the GIGA school-connectivity program, the Digital Coalition’s $100 billion target, and a skilling coalition offering 180 resources in 13 languages [15-19][24-26].


Dr. Panneerselvam argued that startups are the “AI natives” that can bridge technology and business, providing mentorship, market access and capital through the MeitY Startup Hub [54-63][64-70]. He warned of a “technology overshoot” for SMEs and proposed an “AI bridge” to align AI capabilities with real business needs [77-83]. Brando Benifei highlighted the EU AI Act as a reference for building trust, noting that clear boundaries on high-risk uses and voluntary standards can foster confidence while avoiding over-regulation [116-124][201-208]. Rachel Adams presented a South African survey showing two-thirds of citizens lack meaningful AI understanding, creating a democratic gap that must be addressed through governance, not just infrastructure [130-138][143-144].


Fred Werner illustrated high-impact AI-for-good use cases, such as voice-based blood-sugar detection, and stressed that standards-especially for deep-fake detection-are crucial to embed safety, ethics and interoperability [150-158][170-178]. He also emphasized the global AI skills gap, noting that recent international standards summits were organized within weeks to respond to the Global Digital Compact call [179-181][174-176]. When asked how a billion-dollar fund should be spent, Fred prioritized education and skills development, Brando stressed digital literacy and civil-society capacity, and Rachel added strengthening state institutions to protect citizens from tech monopolies [217][219][221-223]. The participants concluded that effective AI diffusion in the Global South requires coordinated infrastructure, widespread skill building, robust standards and inclusive governance, with the ITU positioned as a trusted partner to facilitate this bridge [35-38][31-33][36].


Keypoints

Major discussion points


Building the foundational “S” pillars – Solutions, Skills, and Standards – to make AI accessible for everyone.


Doreen highlighted that AI diffusion starts with infrastructure (connectivity) [13-19], followed by the need to develop a skilled workforce through programs like India’s Future Skills Program and a global skilling coalition [20-26], and finally to create interoperable, trustworthy standards (including the AI Standards Exchange Database and deep-fake detection standards) [27-32].


Addressing the skills and education gap as the primary lever for AI adoption.


Both Doreen and Fred stressed that without widespread digital literacy, AI cannot be responsibly deployed; Fred noted the massive global skills gap and the importance of democratizing AI education from primary school onward [20-26][180-181]. Rachel added that two-thirds of South Africans lack a meaningful grasp of AI, creating a democratic deficit that must be closed [127-144].


Start-ups as the engine that bridges technology and real-world impact.


Dr. Panneerselvam described startups as “AI natives” whose agility can transform SMEs and larger enterprises, supported by the MeitY Startup Hub’s mentorship, market access, and substantial funding (up to ₹1,000 crore) [54-70].


Trust, ethics, and governance are essential to prevent misuse and build public confidence.


Doreen warned that deep-fakes can destabilise societies and called for standards that embed traceability [29-31]. Brando explained how the EU AI Act creates clear boundaries to foster trust, while Rachel emphasized the need for public awareness and participatory governance to avoid a “democratic gap” [116-119][127-144]. Fred reinforced that standards are the practical tool to bake ethics, safety, and accountability into AI products [166-179].


The Global South AI Diffusion Playbook as a collaborative roadmap for inclusive AI.


The moderator introduced the Playbook’s five dimensions and its purpose as an implementation guide [42-44]; Doreen positioned it as a “bridge to opportunity” [36-38]; Fred described concrete actions such as the International AI Standards Summit and the Standards Exchange Database [174-176]; Brando and Rachel discussed the balance between global consensus and regional adaptation, stressing the need for inclusive standard-setting processes [201-210][186-196].


Overall purpose / goal of the discussion


The session was convened to launch and explain the Global South AI Diffusion Playbook, a practical framework that moves AI policy from “moonshots” to real-world, inclusive deployment. Participants shared experiences, identified concrete levers (infrastructure, skills, standards, trust, and startup ecosystems), and explored how multilateral cooperation can ensure AI benefits reach all communities, especially in the Global South.


Overall tone and its evolution


Opening (0-5 min): Optimistic, solution-oriented, and collaborative, with Doreen’s confident articulation of the “solutions-skills-standards” framework.


Mid-session (5-20 min): More urgent and candid as speakers highlighted challenges-skill gaps, deep-fake threats, and the need for robust governance-while still maintaining a constructive tone.


Later segment (20-40 min): Slightly more critical and reflective, especially in Brando’s remarks about ethical frameworks and potential authoritarian misuse, balanced by hopeful examples from India and startup successes.


Closing (40-43 min): Concluding on a hopeful, rally-call note, emphasizing partnership, shared learning, and a commitment to continue the work beyond the summit.


Overall, the conversation remained collegial and forward-looking, shifting from an introductory optimism to a nuanced discussion of risks and finally to a unifying call for collective action.


Speakers

Doreen Bogdan-Martin


Area of expertise: Telecommunications, digital infrastructure, AI policy and standards


Role/Title: Secretary-General, International Telecommunication Union (ITU)[S7][S8][S9]


Moderator


Area of expertise: Session moderation / conference facilitation


Role/Title: Moderator of the panel discussion (no specific title provided)


Fred Werner


Area of expertise: AI for Good, AI governance, standards development, international AI policy


Role/Title: Chief of Strategy and Operations for AI for Good and Chief of Strategic Engagement at ITU; co-creator of the AI for Good Global Summit[S16][S17][S18]


Brando Benifei


Area of expertise: European AI regulation, AI ethics, AI standards


Role/Title: Member of the European Parliament (MEP); co-rapporteur of the EU AI Act[S10][S11]


Rachel Adams


Area of expertise: AI governance, human rights, ethics, public policy, AI impact assessment


Role/Title: Founder and CEO, Global Center on AI Governance; advisor to governments; contributor to the African Union Continental AI Strategy[S4][S5][S6]


Dr. Panneerselvam Madanagopal


Area of expertise: Deep-tech startups, AI entrepreneurship, innovation ecosystems


Role/Title: CEO, MeitY Startup Hub, Ministry of Electronics & IT (MeitY), Government of India[S1][S2][S3]




Full session report: Comprehensive analysis and detailed insights

The session opened with the launch of the Global South AI Diffusion Playbook, presented by the moderator as an implementation guide rather than a high-level strategy and built around five interacting dimensions – infrastructure, data and trust, institutions for procurement, skills, and market shaping [42-44].


Doreen Bogdan-Martin framed the discussion around three practical pillars – Solutions, Skills and Standards – that together constitute the “bridge to opportunity” needed to move AI from moonshots to everyday impact [11-14][36-38]. She stressed that AI diffusion must be flexible and inclusive, rejecting a one-size-fits-all approach and highlighting India’s leadership in turning AI ambitions into tangible results, such as the Bhashini platform that delivers government services in 22 languages and AI-powered public-infrastructure projects that reach rural communities regardless of economic status [2-9][6-8].


Solutions – connectivity as the foundation


Bogdan-Martin argued that without reliable connectivity AI cannot reach the one-third of humanity still offline, making Solutions, the infrastructure pillar, the first “S” [13-15]. She cited the GIGA school-connectivity programme with UNICEF and the Partner2Connect Digital Coalition, which together target the hardest-to-connect populations; the coalition has a $100 billion target for this year, of which $80 billion has already been secured in commitments and pledges [13-19]. These efforts are portrayed as the essential infrastructure layer upon which all other AI services must be built.


Skills – the engine of digital agency


The second pillar emphasizes that skills turn connectivity into agency [20-22]. Bogdan-Martin pointed to India’s Future Skills Programme, which up-skills thousands of students, and to the ITU-led Skilling Coalition that now gathers around 70 partners, offering more than 180 learning resources in 13 languages [23-26].


Market shaping – a complementary Playbook dimension


Beyond the three pillars, the Playbook’s market-shaping dimension is about creating mechanisms that enable AI-driven services to reach underserved populations, ensuring that solutions are locally adapted and economically viable [11-14].


Standards – building trust and interoperability


The third pillar, standards, ensures AI systems work together, embed trust and combat deep-fakes [27-32]. Bogdan-Martin noted that the ITU, together with ISO and IEC, has created an AI Standards Exchange Database containing over 850 standards and technical publications, including multimedia-authenticity standards that prioritise traceability [31-32]. She reminded the audience that ITU standards are voluntary and developed through an inclusive, multi-stakeholder process [33-34]. Fred Werner added that standards work is being coordinated with industry bodies such as the C2PA on deep-fake mitigation [173-176].
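The traceability idea behind such multimedia-authenticity standards can be sketched in a few lines: a signed manifest binds a content hash to its origin, so any later edit breaks verification. The snippet below is a deliberately simplified illustration using an HMAC over a SHA-256 digest; real standards such as C2PA use certificate-based signatures and much richer manifests.

```python
# Simplified illustration of content-provenance verification.
# Real standards (e.g. C2PA) use certificate-based signatures and richer
# manifests; this sketch uses a shared-secret HMAC purely to show the flow.

import hashlib, hmac, json

SIGNING_KEY = b"hypothetical-publisher-key"  # stand-in for a real signing identity

def make_manifest(media: bytes, creator: str) -> dict:
    claim = json.dumps({"creator": creator,
                        "sha256": hashlib.sha256(media).hexdigest()},
                       sort_keys=True)
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(media: bytes, manifest: dict) -> bool:
    # 1. The claim itself must be untampered.
    expected = hmac.new(SIGNING_KEY, manifest["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    # 2. The media must still match the hash recorded at creation time.
    recorded = json.loads(manifest["claim"])["sha256"]
    return recorded == hashlib.sha256(media).hexdigest()

image = b"...raw image bytes..."
manifest = make_manifest(image, creator="News Agency X")
print(verify(image, manifest))            # True: provenance intact
print(verify(image + b"edit", manifest))  # False: content was altered
```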


The moderator then framed start-ups as the primary vehicle for turning AI capability into economic impact and transitioned the panel to how AI diffusion can move from pilots to scaled deployment [45-46].


Dr Panneerselvam Madanagopal described start-ups as “AI natives” that bring deep-tech talent, agility and the ability to transform both small- and large-scale enterprises [54-56]. He outlined the MeitY Startup Hub’s three M’s – Mentorship, Market access, and Money (funding) – providing end-to-end support from ideation to commercialisation, and noted that the hub can mobilise up to ₹1,000 crore in funding, complemented by an additional ₹8,000 crore from the India AI Mission, emphasizing that “there is absolutely no dearth of capital in the Indian market” [64-70][66-70].


Madanagopal characterised the summit as “an AI earthquake happening in Bharat Mandapam” and reminded participants that “this whole event has been put together by MeitY and the Ministry of External Affairs” [84-88][90-92].


Brando Benifei warned that standards are often delayed by private-sector resistance and called for mechanisms that impose deadlines to prevent governance gaps [201-202]. He used the EU AI Act to illustrate how clear, risk-based rules help build public trust, citing prohibited uses such as predictive policing, emotion recognition in workplaces, social scoring, and manipulative subliminal techniques [115-119][116-124].


Rachel Adams highlighted a “democratic gap” – the mismatch between rapid AI rollout and public understanding – citing a South African survey in which two-thirds of respondents lack a meaningful grasp of AI [130-138]. She added that the African Union’s Continental AI Strategy explicitly includes “regulation” and emphasises human rights, gender and children’s rights [150-154]. Adams advocated for a principle-based global consensus on accountability, transparency, safety and human oversight, allowing regional adaptation rather than a single dominant regulatory regime [188-191].


Fred Werner illustrated the promise and perils of AI-for-good with a voice-based blood-sugar detection prototype that could revolutionise diabetes monitoring but also reveal intimate personal data such as sleep patterns or alcohol consumption [150-158]. He stressed that any AI solution must be vetted for safety, security, ethics, human-rights compliance and sustainability before scaling [166-170]. Werner also announced the upcoming AI for Good Global Summit (7-10 July in Geneva) and the year-round “AI startup innovation factory” that will nurture scalable solutions [165-167][150-152]. He reiterated the rapid mobilisation of standards, noting that the International AI Standards Summit and the Exchange Database were launched in less than three weeks after the Global Digital Compact call [173-176].


When asked how a hypothetical $1 billion fund should be allocated, Fred Werner prioritised education and skills development across all levels [217]; Brando Benifei emphasised digital literacy and capacity-building for civil-society actors [219-220]; and Rachel Adams called for strengthening state institutions – competition commissions, gender-equality bodies, human-rights and information regulators – to protect citizens from monopolistic practices and labour displacement [221-223]. All participants agreed that skills and digital literacy are prerequisites for AI diffusion, standards are vital for trust and interoperability, and AI solutions must be locally adapted [217-223].


Areas of consensus and disagreement


Consensus centred on the three “S” pillars, inclusive start-up ecosystems and the need for robust institutional capacity. Three substantive disagreements emerged:


1. Timing and necessity of standards – Bogdan-Martin positioned standards as a core pillar, Benifei warned of deliberate delays and demanded deadline mechanisms, while Werner highlighted rapid standards mobilisation as evidence that swift coordination is possible [27-32][201-202][173-176].


2. Regulatory approach – Benifei argued that the EU AI Act’s clear, risk-based rules are essential for trust, whereas Adams advocated for a flexible, principle-based global framework to avoid dominance by any single region [116-124][188-191].


3. Perceived speed of standards development – Benifei’s view of chronic delays contrasts with Werner’s example of launching an international standards summit within three weeks, revealing differing perceptions of how quickly standards can be produced [201-202][173-176].


The panel concluded that effective AI diffusion in the Global South requires coordinated effort across the three “S” pillars, a robust start-up ecosystem, inclusive governance and a clear funding focus on education and institutional capacity. The ITU positioned itself as a trusted partner to facilitate the bridge between connectivity, skills and standards [37-38], while the Global South AI Diffusion Playbook was presented as the practical roadmap to guide implementation and monitor progress across its five dimensions [42-44]. The summit underscored that sustained multilateral cooperation, inclusive standard-setting, and massive investment in human capital are essential to ensure AI benefits reach every community without widening the digital divide.


Session transcript: Complete transcript of the session
Doreen Bogdan-Martin

…as to how AI can actually benefit people in their lives, their homes, their communities, and their businesses. The second point that keeps coming up is that it’s not a one-size-fits-all model. I think we do need to be flexible. We need to be inclusive when we look at different AI approaches. I would say for all parts of the world, no matter where countries are in terms of their development journeys. India, as we see, is a leader, really showing how to get from AI ambitions to real results. And, of course, in doing so, keeping people, keeping… …that human-centered approach in focus, as we heard from the Prime Minister yesterday. The Bhashini platform… that we’ve also heard about, delivers government services in 22 languages.

I would say as well, similar AI-powered digital public infrastructure solutions in areas from health care to financial inclusion are really working to better serve all Indians, regardless of their economic status, their skill level, especially in rural communities. I would say inspired by these efforts, I wanted to quickly offer three observations, and you’ve actually already referred to them. Three observations about how we can move beyond moonshots from policy to actual practice here in the Asia-Pacific region and beyond. And they all begin with S, and you said them already: solutions, skills, and, of course, standards. So Solutions is about building the infrastructure and the platforms that make artificial intelligence accessible, because we cannot achieve AI for many.

We can’t achieve AI for all if we still have a third of humanity that is offline. Without connectivity, there is no AI, and that’s why efforts like our school connectivity work with UNICEF, called the GIGA initiative, to connect every school are so important. Our Partner2Connect Digital Coalition is about connecting the hardest to connect. We have a target of achieving 100 billion this year. So far, we’re at 80 billion in commitments and pledges to connect the hardest to connect. So we need to tackle that basic infrastructure component. The second element that we need to make sure that we diffuse AI globally in practice is skills: the fundamental importance of skills.

Yesterday I was speaking to a young leader who actually likened connectivity to people feeling that they have digital agency. Skills are that engine of agency. Countries can learn directly from India’s experience of investing in people, namely through its Future Skills Program that’s providing upskilling to support thousands of students at all levels. ITU is also taking a similar approach, and my colleague Fred will be staying on for the panel today. We have a skilling coalition that’s very exciting, with some, I think, 70 partners so far, bringing more than 180 different learning resources in 13 languages. And coming to my last S is that standards piece: ensuring that AI systems work effectively together. Standards complement solutions and skills not only for interoperability but also for embedding trust.

As Prime Minister Modi mentioned yesterday, deep fakes and misinformation can destabilize entire societies. And people must be able to distinguish between real and AI-generated material. And that’s why the ITU, together with our partners from ISO and IEC, created the AI Standards Exchange Database that has over 850 standards and technical publications, including multimedia authenticity standards that prioritize traceability to combat deep fakes. ITU standards are voluntary. They are developed through an inclusive, multi-stakeholder process. So ladies and gentlemen, AI diffusion isn’t about everyone using the same technology. It’s about giving everyone the same bridge to opportunity and refusing to let the digital divide become an AI divide. So today’s playbook is going to help us really build that bridge, as will our continued cooperation and collaboration on AI solutions, skilling, and standards.

In all of these areas, you can count on ITU as your trusted partner. Thank you.

Moderator

Thanks, Doreen. As you can see, Doreen has spent her career in ensuring every country, every community has access to, or is part of, the digital economy. Could I just invite Doreen, Fred, Rachel, Brando, and Dr. Panneerselvam on the stage as we launch the Global South AI Diffusion Playbook. It’s a framework built around five interacting dimensions: infrastructure, data and trust, institutions for procurement, skills, and market shaping. It’s not designed as a strategy document, but more as an implementation guide, because the next phase of AI is not about moonshots, it’s about how do we ensure AI works reliably, inclusively, and productively for many. This is, I think, the photo op you guys were waiting for, so all yours.

Thank you. Doreen, I know you have to leave. Thank you very much, thanks for a great keynote as well. Thanks, Doreen. If diffusion is about moving from capability to real economic impact, then startups are obviously the transmission mechanism, and few people understand India’s startup ecosystem as deeply as Dr. Panneerselvam Madanagopal, CEO of the MeitY Startup Hub. Under his leadership the MeitY Startup Hub has become a key platform connecting government policy with entrepreneurial energy, enabling innovations to move from lab to market and from pilot to scale, with over 6,000-plus startups. He brings over two decades of experience, and at a moment when India is positioning itself not just as an AI adopter but as an AI innovation diffusion hub, his perspective on enabling startups to scale responsibly and globally is particularly valuable.

Doctor, could we just have a few minutes from you?

Dr. Panneerselvam Madanagopal

Thank you, Access Partnership, for having me this afternoon for this conversation. I think it’s an important element. You know, there’s so much happening in the last four to five days in Delhi, in Bharat Mandapam, so it’s important to get a grasp of what’s going on, and what each of us has to take away from this, and how each stakeholder in this ecosystem can help us. And startups become a very, very important player in this game, essentially for two or three key reasons. One, they come in as AI natives. They come in with a significant understanding of the technology, and the talent is kind of already there. And then second, the agility that they bring and the capability they bring to transform businesses is becoming a very, very important need for small and medium enterprises and even for large enterprises.

Just prior to this I was having a conversation with a large corporate about how they can actually use startups as a catalyst of change and transformation in their large corporate, because the corporates are designed for systems and processes at scale, and the need of the hour is actually agility, adaptability and, more importantly, the ability to change and bring innovation into the mainstream of any enterprise. So startups play a very, very critical role, so we at the MeitY Startup Hub are primarily driving the push to ensure that startups have the wherewithal and the capability to drive and back this change that is required by the corporate ecosystem or the large enterprise ecosystem. So, briefly, what do we do at the MeitY Startup Hub?

We are the custodians of the deep tech startups in the country. This whole event has been put together by MeitY, and of course the Ministry of External Affairs has been a phenomenal partner in this. So, our role in the MeitY Startup Hub is essentially three M’s: mentorship, market access and money. This is essentially what we provide for startups. We provide mentorship support through the entire journey, from almost the ideation stage up to CDC level. And we provide them with market access. I’m a firm believer that your customer is your best investor if you’re a startup. And finding customers for startups is more important than finding investors, right? So, it’s important for me to give them the right market access support.

So, we work with large corporates across the board, across the country, and internationally we drive market access support. And last but not the least, money: we provide a significant amount. There is absolutely no dearth of capital in the Indian market. You know, through my agency, through my organization, the MeitY Startup Hub, we fund almost up to a thousand crores for startups, and the India AI Mission has another almost about 8,000 crores of funding for startups. So there is absolutely no dearth of money in the market; government funding is available, private capital is available. So that’s what we support. And our endeavor is to ensure that startups are at the heart of this renaissance, of this change that is happening in the ecosystem, and how startups’ technology can help these small and medium enterprises to grow.

So that’s what we are trying to do. Thank you. So that’s what we have been driving at, and conversations like this help a lot in enabling them to drive this change. There are obviously a lot of challenges; it’s not easier said than done. In some cases, with medium enterprises, we see what I call a technology overshoot. The technology has actually overshot the need, and now the question is the ability of the medium enterprises to cope with this technology and say: how do I understand what my need is? How do I integrate this into my business need, and how do I ensure that my business is realigned with a new workflow, a new way of doing business, with this current technology, with AI or AI-supported technology?

So while there are huge challenges, every challenge is an opportunity. And startups are very well placed to bridge that opportunity, because they understand technology and they understand business. So we are hoping to create what I call the AI bridge now, which kind of bridges the technology and the business need. And it’s going to be a huge opportunity by itself, and startups are who we are hoping will build that bridge and drive the change. So at the MeitY Startup Hub, our endeavor is to nurture, build, and enable tech and deep tech startups in the country. And we collaborate with all stakeholders, domestic and international, to ensure our startups get the right opportunities, and we solve

problems and we enable capability through building capacity. So that’s essentially, in a nutshell, what we do, and once again I thank Access Partnership for providing me this opportunity to briefly share my thoughts with you. We are on the cusp of, as somebody called it, an AI earthquake happening in Bharat Mandapam. This is a tectonic shift, and this is laying the foundation for something big and better coming our way. Of course with a lot of responsibility also, because everything has two sides of its own, so we need to be extremely responsible in what we are doing with the technology. Thank you once again. Thank you for the opportunity. Thank you.

Moderator

As the Doctor said, this is really the earthquake of AI and we are at the epicenter. And as you can see, after five days, we are all very, very tired. We started late. We’ll end on time. That’s my promise to you guys. Where is the next chair? So let me introduce our panelists very quickly. Dr. Rachel Adams: she’s the founder and CEO of the Global Center on AI Governance, a leading research and policy institution focused on ensuring that AI development and deployment advance equity and human rights globally. She also advises governments, and she was a key contributor to the African Union Commission’s Continental AI Strategy. I have Fred, Fred Werner. He is the Chief of Strategy and Operations for AI for Good and Chief of Strategic Engagement at ITU.

He’s based in Geneva and is a co-creator of the AI for Good Global Summit, which is happening from 7 to 10 July in Geneva. He brings together a global hub for collaboration, standards and actionable AI-driven impact. And I’m also pleased to welcome Brando Benifei, who is a Member of the European Parliament, and he was a co-rapporteur of the EU AI Act, which we all love so much, the world’s first comprehensive AI regulation. He is an Italian MEP since 2014, and he has played a key role in shaping European digital and AI policy. Welcome, all of you. Thank you. Quick one, yeah? I’ll really start with you, Brando, in this case. We talked about concrete gains that AI diffusion can unlock in the Global South over the next three to five years.

How do we move from pilots to scaled deployment? I want to understand a bit more from you. It’s been a while since we have had the EU AI Act, and there has been some implementation, obviously, right? So how do you see the AI diffusion being unlocked, and how do you see European partnerships with the Global South there?

Brando Benifei

Well, first of all, I apologize for my voice, but it’s the, I don’t know, work of these days. Maybe we are producing a lot, but this is also the impact, so I apologize for that. But to answer your question, I think that the EU AI Act can be an interesting reference point to reflect on what we can do to implement the idea of a global diffusion, especially looking at the Global South. Because, in fact, even the so-called global north, or global minority, we can use different terms, is still struggling with the diffusion of AI among different actors. If you look at the data, for example, on the diffusion among small and medium-sized enterprises, most north-of-the-world countries still have very low numbers, because of lack of trust, because of lack of AI literacy, because of lack of systems that facilitate understanding of how the usage of AI can ameliorate the activity of a business, a public organization, a civil society reality, etc.

So, the AI Act is a legislation that doesn’t… doesn’t create a comprehensive framework that is vague (comprehensive but confusing, maybe). Instead it chooses to identify a series of high-risk areas of usage of AI, and lets all the non-included use cases not be regulated further than the existing legislation. Why am I saying this? Because, obviously, when we look at the issue of diffusion there are many elements (infrastructure, as I said, literacy), but on the issue of trust and of risk management I think the EU AI Act is an interesting reference point for having clear boundaries: where we do not think we need more regulation, where we let the systems be used freely, where we want checks and balances to be in place, where we even choose to prohibit certain use cases, and where we need transparency, which is still a lacking element in many of our experiences with AI. So I think that, in the difference of contexts, these elements are quite relevant, even for a context that is clearly different from the average European country. But I think that to build trust we need to clarify where we want governance and limits to be in place, and send a clear message to the population that even when we concentrate on use cases, on action (this is the topic of the summit), we can also build, in a smart way, in a clear way (not light, but clear), elements of protection, of guarantee, that can create more trust in the adoption.

Moderator

Brando, I know why your voice is like that: because people want to hear more from you. That’s why you will have a busy day today as well. I’m sure people want to talk a lot to you. Rachel, coming to you: I think Brando talked about an important point about trust and clarity, and you have worked extensively with Global South countries, right? So how crucial do you think trust and ethics are for diffusion? How do you see that actually getting implemented in practice?

Rachel Adams

Yeah, I think it is going to take far more work than perhaps we feel it might. So, you know, Brando, I think you mentioned some very important points around public awareness and understanding. In South Africa, the center I lead, the Global Center on AI Governance, conducted a very comprehensive public perception survey in the country. We interviewed over 3,000 South Africans from all walks of life, all demographic groups. We interviewed them in their own language. We have 11 official languages in South Africa. And two-thirds of South Africans do not have a meaningful grasp of AI. So one-third of South Africans have never heard of AI, and another third of South Africans have heard of it but could not begin to tell you what it meant at all.

So I think if we’re thinking about the relationship between the large-scale private investments we’re seeing in AI diffusion, the large-scale public plans we have around AI adoption, in relation to where the public sits, what their levels of understanding, awareness and literacy are, this is creating a very significant democratic gap, particularly where a lot of these adoption pathways are around the use of AI in the public service. People don’t know about these technologies. They don’t know about the risks. They don’t know about the opportunities. They’re not able to contest it. They’re not able to participate in decision-making. We have a real problem. So diffusion cannot be something that is only about putting in place the infrastructure that sees forward technical delivery and access.

It must be scaled with governance efforts.

Moderator

I think, Brando, we had that whole discussion separately, where you talked about how getting technology into the hands of people doesn’t matter if it’s being used for autocratic rule, like, for example, social scoring, right? So maybe going to you, Fred, on this point, looking at the positive side of the story: you talked about AI for good, or AI for bad. So what are some of the use cases and standards that you think are really setting the stage for helping drive the diffusion?

Fred Werner

Yes, I think there’s no shortage of high-potential AI for good use cases, especially now in 2026. That maybe wasn’t the case in 2017, when we created AI for Good, but we’ve really seen things go from the hype, the fear, the promise, mainly existing in fancy marketing slides, to the advent of Gen AI, the rise of AI agents, and now the physical manifestation of AI in the form of robotics, embodied AI, brain-computer interface technologies, and even space AI computing, right? And just to give you an example, we have an AI startup innovation factory that runs all year, and there was an Estonian startup that had a very interesting application that can basically tell how much sugar is in your blood based on the sound of your voice, using a mobile phone and detecting voice patterns, right?

Now, this could be a game changer for diabetes. I mean, it’s a nasty, you know, global disease. Taking your blood sugar is expensive, inconvenient, sometimes painful. It’s a real pain. Right now it’s still a pilot, but you see the potential for scale. But on the other hand, if it can tell how much sugar is in your blood, what else can it tell about you? How late did you stay up last night? What did you have for dinner? Are you on medication? Did you have too much wine? Are you paying attention? Actually, are you paying attention? So you can see where it goes, right? So you can’t take it for granted that these applications will develop in the right way and will be mindful of a lot of things we were talking about here all week.

Are these solutions safe? Are they secure? Do they have ethics baked in? Do they respect human rights? Are they designed with participation from the Global South at the table? Are they sustainable when it comes to energy and all types of things? And one way to, I guess, bake that in could be with standards. It’s not the only solution. But when you look at these fast-emerging governance frameworks popping up all around the world (of course, you have the EU AI Act, you have different frameworks from around the world), I think one of the tricks is you don’t have a one-size-fits-all, and AI is moving very, very fast. But there are many practical things that can start to be implemented. So how do you take these ambitious words and texts and turn them from principles to implementation? Because the devil is in the details, and standards have details. So I think we’re at the point where these products, services, companies, applications, even hardware, all these things need to start to interface and interact interoperably, and sometimes internationally as well. You’re going to need standards to basically make these things work, and that could be one way of baking all of the common-sense things into standards. Now, I know the words “lightning speed” and “standards development” are not often used in the same sentence, and that’s probably a fair statement. But in the case of AI, for example, when the Global Digital Compact launched its call, I believe two years ago in the fall, it took ITU and its partners less than three weeks to respond to that call for international AI standards coordination by launching the International AI Standards Summit Series.

And actually, the very first one was held in this venue in 2024 as part of WTSA, our treaty-setting conference on standards. And we also launched the International AI Standards Exchange Database, which Doreen mentioned a few minutes ago. But more importantly, when you’re looking at the standards gaps and what people should be working on, we’re working with our partners, ISO and IEC, on multimedia content authenticity standards development. That’s a fancy way of saying deepfake detection standards. I’m not saying we’ve solved the puzzle, but there’s a lot of energy and work, working with industry, C2PA, different bodies there. I think another major gap, which is not only standards related, is, of course, the skills gap. So when we had our governance day in Geneva last year, with ministers from over 100 countries, there were a lot of things they couldn’t agree on.

But one thing they all agreed on is how to address the AI skills gap and democratize access to skills. globally and that didn’t matter if you were a developing or devolved country and then of course the other was how do you handle the epidemic of deepfakes so I think I’ll pause there thank you but hopefully that gives a kind of picture of how you can go from AI use cases to high potential looking at the dual nature of AI and how standards can be one of the tools to help address those issues. Thanks.

Moderator

Thanks, Fred. I mean, if that app looks at me right now, I think it’s going to tell me that I’m very caffeinated and sleep-deprived, right? But on that point, standards are obviously the physical manifestation of governance. I think we did talk about that; that’s very important. And Rachel, maybe I come back to you. We do talk about policy tools being important, financing mechanisms being important, governance approaches, because there are many different approaches to AI governance throughout the world. How do you see it: is governance actually participatory today? Some of the frameworks from the Global North, do you think they’re getting imposed on the South, or is the Global South coming up with its own frameworks? How do you see the situation on the ground?

Rachel Adams

How do we use it to help advance developmental outcomes or public value? So I think we can see, from those kind of three regulatory or governance approaches from the EU, China and the US, there’s this very pragmatic adoption of different elements of that within different Global South regimes. I know with the African Union’s continental strategy on AI, they were very, very deliberate to include the word regulation. And there was a huge emphasis on human rights and on gender issues and on children’s rights. So I think that what we want is to have maybe less of a focus on global consensus than I think we’re often talking about, partly because interoperability can often mean the dominance of one particular region or worldview’s regulatory regime everywhere else. And we’ve seen, with the GDPR framework for example, that that has had a limiting effect on the African continent. So I think we rather want to be seeing a global consensus around a set of principles (accountability, transparency, safety and human oversight) and of course a set of standards, but noting that different regions are going to need to adapt those standards in different ways. Sometimes those standards might be a kind of gold standard, and sometimes they might need to be a minimum standard, and we want to be thinking more about the capacity-building approaches to try and meet that standard. One of the things we are worried about, from a Global South and an African perspective, is that standard-setting processes in the past have always been dominated by those with the time and the resources to really participate in them.

As you said, they’re slow, and they’re deliberately slow, because there’s a lot of expertise we need to bring to the table, and once they’re concretized and finalized they become binding in their own way, particularly on the technical side. We really want to ensure that as we’re building out these standards, particularly for generative AI and agentic AI, which is still in formation and is a socio-technical technology that evolves as it is used in context, we have representation from Africa, from Latin America, from Asia that is meaningfully included in these standards processes, through deliberate funding, through leadership on committees, through co-authorship of these standards. So I think that’s very important to stress.

Moderator

I think that’s an interesting point of view, because I’m based in Singapore, so we have 11 countries in the Southeast Asia region and everybody runs at their own pace. And everything we talk about is how do we go from the starting point; a lot of it is about where do you start, and then where do you end, and what is the process along the way. I think that’s what you’re saying. But Brando, I’ll maybe let you respond to some of the points she raised about… the regulatory experience that you have had. You have talked to people here; obviously you would have talked to other people. There is always tension between local adaptation versus harmonization. Should we have a single set of rules throughout the world? What are some of the aspects or highlights that you want to maybe highlight in that sense?

Brando Benefi

Well, first of all, on the standards, I think it’s a fact that we need to accelerate on that, and that we have seen some voluntary delaying, I have to be very frank, because I look at the implementation of the AI Act, where we didn’t need standards. When we decided that some use cases (you mentioned social scoring, but I can tell you predictive policing, emotional recognition in workplaces and study places, and, if I may also mention, manipulative subliminal techniques) are prohibited, they didn’t need standards; guidelines on the application of these prohibitions were sufficient, and we are already implementing that. Why? Other parts of the law, for example adequacy of data for training, or levels of cybersecurity that are deemed sufficient, these are elements, parameters, for the high-risk use case applications where you need standards, otherwise you can’t apply these rules. And the standards are, in my view, based on the elements I got from those in the standardization process, sometimes deliberately delayed, because there are some private sector actors that don’t want these standards to be there. So we need to build mechanisms, and I will not delve into that for time reasons, but mechanisms that we are building also in the European context, to make sure that there is a time limit for the standards to be in place, because otherwise certain aspects of the governance will not be possible to implement.

I want to pick up briefly also on what you said on the risk of AI being used for, in fact, non-democratic developments, in fact to restrict participation spaces and freedoms. I think this is especially important when we look at institutionally fragile contexts, which are often countries of the global majority, the Global South, however you want to call it. We need to be aware that AI can be used for mass surveillance, easily, for repression of freedoms, and to put people under pervasive control even without them fully understanding it. I think that we should know that, and at the same time I fully share the spirit of the summit: concentrate on what we can do for good. A lot of things, like the example that was just made. Yesterday I was meeting with a company from my own country, from Italy, that is here, that deals with systems to anticipate the physical status of drivers and to prevent accidents due to physical fatigue, to make it easier to identify earlier the kind of situations that would lead to a car accident.

So even in very specific areas, we can find myriad ways in which we can use AI for good. But my point is that enthusiasm for diffusion should not be a substitute for building frameworks that, I insist on my previous point, are precise and not generic ethical appeals, which, to be frank, are not very useful if they are not… pointing to clear deliverables. I want to conclude on this point, to be clear, that I think an ethical approach is needed. Without ethical approaches, any rule will not be able to function. But if you substitute regulation, governance of all kinds (which can be more binding, or more, I would say, co-legislation, co-decision processes), if you substitute these completely with mere voluntary ethical frameworks, I’m not sure we are getting anywhere.

Especially, I insist, in contexts that might…

Moderator

I think AI for good always starts with AI not for bad. That’s always the starting point, and that’s an important consideration. I did promise you guys I’ll leave you on time, so I’ll just have to do two very quick questions. I just need 30- to 60-second responses. Fred, I’ll start with you. If you had a billion dollars to accelerate AI diffusion across developing economies, where would you start?

Fred Werner

I think education, skills. I think that’s really the starting point. Actually, I was in Johannesburg, South Africa, for AI for Good Impact Africa, and there were a lot of conversations about, you know, the whole mobile payment revolution of East Africa leapfrogging decades of infrastructure: could the same thing be done with AI in Africa? I haven’t made up my mind on it yet; depending on who you talk to, you might be convinced or not. I think the opportunity is there, but also you can’t take it for granted that, even if that did happen, it would go in the right direction. And I think that sort of basic understanding, whether it’s for children or for diplomats, from grade school to grad school, that skills gap is massive, and I think that would probably be the best spend of money, to start there.

Moderator

Brando, what will you do with a billion dollars?

Brando Benefi

I would say I subscribe to that priority, because I think that literacy, understanding, building consciousness, building capacity, also among civil society actors, is extremely important when we see a big acceleration of the development of AI, as is happening around us. Thank you.

Rachel Adams

I completely agree on the digital literacy, because I think one of the biggest risks we face, which we haven’t spoken much about, is labour displacement, which I think is going to become significantly more serious. The other thing I would do is invest in building the capacity of our state institutions, of our independent institutions of democracy: our competition commissions, our gender equality commissions, our human rights commissions, our information regulators. Those are the bodies that will be able to champion the rights of citizens in the face of big tech monopolies.

Moderator

I would have personally bought the shares of all the company CEOs who were here yesterday. But thank you for that. Quick question, Rachel, while I have you: you have spent this week in India, you have seen the entire thing, you have seen the energy around this. What is one lesson you learned from India which you think we should deploy globally?

Rachel Adams

I think India has made it very, very clear that AI is for everyone. Compared to any of the other summits I’ve been to, I think it’s wonderful that there are children from schools here, that we have so many people that are local that have come to the summit and feel included. I think feeling like I am in India at the Indian summit has been the biggest kind of heartening and exciting thing for me.

Moderator

Yeah, thanks. I have been super inspired to hear the story of how India was able, for a billion-plus people, to create digital ID, financial inclusion, digital payments. So there’s a track record of, let’s say, technology diffusion at scale, but in a way that’s beneficial for everyone. So that could be a good model for AI diffusion. I know there’s still a long road to go, but if you can do it in India for a billion-plus people, I think it should work in smaller places as well. Brando, with whatever is left in your voice now.

Brando Benefi

Well, I think we can learn a lot from what we are seeing here in these days, and I’m convinced that we need to be determined in building more global cooperation. I don’t think we can get the best out of AI diffusion if we abandon the path of building more common understanding and learning from each other. I think this summit can be a moment in this process, but this is something that must happen throughout the year.

Moderator

I think, thanks to all of you. My lesson was obviously: shake hands with your enemies, even if you are… That’s the only way to do diffusion across the world. I would like to thank all the panelists, thank you very much, and Brando especially, with your voice giving way. I hope you have a good stay, and thanks for joining the panel. Thank you very much, thank you.

Related Resources: Knowledge base sources related to the discussion topics (40)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“One‑third of humanity remains offline, limiting AI reach.”

The knowledge base states that about one-third of the global population is still offline, confirming the claim [S119].

Confirmed (medium)

“AI diffusion must be flexible and inclusive, rejecting a one‑size‑fits‑all approach.”

Doreen Bogdan-Martin explicitly says a one-size-fits-all model does not work and stresses flexibility and inclusivity [S18] and [S109].

Confirmed (medium)

“India’s Future Skills Programme up‑skills thousands of students and involves both government and private sector.”

The Future Skills programme is highlighted as a joint government-private initiative that up-skills large numbers of learners in India [S128].

Additional Context (medium)

“The GIGA school‑connectivity programme with UNICEF targets the hardest‑to‑connect populations.”

GIGA is a UNICEF-ITU joint initiative aimed at connecting every school, confirming its role in connectivity efforts [S121].

Correction (high)

“$100 billion pledge for connectivity, with $80 billion already secured.”

The knowledge base does not mention any $100 billion pledge or $80 billion already secured for GIGA or the Digital Coalition, so these monetary figures cannot be verified [S121].

Confirmed (medium)

“Start‑ups are the primary vehicle for turning AI capability into economic impact, with 2025 VC investment exceeding $100 billion.”

A report notes that 2025 saw the largest VC year with over $100 billion invested globally, supporting the claim about start-ups driving impact [S124].

Additional Context (low)

“ITU standards are voluntary and developed through an inclusive, multi‑stakeholder process.”

Doreen Bogdan-Martin’s participation in AI governance dialogues reflects ITU’s multi-stakeholder, voluntary standards development approach [S21].

External Sources (128)
S1
Indias AI Leap Policy to Practice with AIP2 — – Role/Title: Event moderator -Dr. Panneerselvam Madanagopal
S2
https://app.faicon.ai/ai-impact-summit-2026/building-the-ai-ready-future-from-infrastructure-to-skills — I’d like to invite our next speaker, Paneerselvam M, CEO of the METI Startup Hub at Ministry of Electronics and IT, Gove…
S3
Building the AI-Ready Future From Infrastructure to Skills — The moderator introduces Dr. Paneerselvam M by highlighting his qualifications and contributions to India’s startup ecos…
S4
The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the use of AI Systems in the Judiciary — Dr. Rachel Adams:So helpful. you for your question and some of the other points that have been made. I think it’s import…
S5
Indias AI Leap Policy to Practice with AIP2 — – Brando Benefi- Rachel Adams – Fred Werner- Rachel Adams
S6
S7
High-Level Dialogue: The role of parliaments in shaping our digital future — – **Doreen Bogdan-Martin** – Role/Title: Secretary-General of ITU (International Telecommunication Union) Doreen Bogdan…
S8
IGF 2024 Opening Ceremony — – Doreen Bogdan-Martin: Secretary General at International Telecommunication Union Tawfik Jelassi: Excellencies, ladie…
S9
Welcome address — Trager presented Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU), underscorin…
S11
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Brado Benefai- (Appears to be the same person as Brando Benifei, mentioned in introduction) -Brando Benifei- Member of…
S12
Open Forum #72 European Parliament Delegation to the IGF & the Youth IGF — – Brando Benifei: Member of European Parliament (mentioned but not in speakers list)
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S16
AI for Good Technology That Empowers People — Hello Let me start with a question What if the last thing that humans ever invent is invention itself Now what do I mean…
S17
AI for Good Technology That Empowers People — Hello Let me start with a question What if the last thing that humans ever invent is invention itself? Now what do I mea…
S18
Indias AI Leap Policy to Practice with AIP2 — As Dr. said, this is really the earthquake of AI and we are at the epicenter. And as you can see, after five days, we ar…
S19
Upskilling for the AI era: Education’s next revolution — Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morning on this very stage I spoke about skills. I…
S20
A Digital Future for All (morning sessions) — – Doreen Bogdan-Martin – Secretary General , ITU Sade Baderinwa: If everyone could please take their seats, in the bac…
S21
AI Governance Dialogue: Steering the future of AI — – Doreen Bogdan Martin – Secretary General of the ITU (International Telecommunication Union) Doreen Bogdan Martin: Tha…
S22
Opening keynote — Doreen Bogdan-Martin:Good morning, and welcome to the AI for Good Global Summit. Let me start by thanking our more than …
S23
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Aurelie Jacquet :Thank you. So, following on Wansi’s point, I think what’s important to know is it’s actually good to se…
S24
Networking Session #37 Mapping the DPI stakeholders? — Implementation must be contextualized to local conditions rather than using one-size-fits-all solutions, starting from w…
S25
WS #300 Information Integrity through Journalism & Alternative Platforms — Speakers agree that effective solutions must be tailored to local contexts and built on community trust rather than appl…
S26
Ministerial Roundtable — – **Doreen Bogdan-Martin** – ITU Secretary General Doreen Bogdan-Martin: infrastructure resilience and protecting cult…
S27
WS #119 AI for Multilingual Inclusion — Public services should provide materials and support in multiple languages to promote language equity. This ensures that…
S28
Skilling and Education in AI — While significant investments are flowing into computational infrastructure, there’s a need for economic models and inve…
S29
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S30
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S31
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Yeah, thanks for that great question, and thanks for having me here. So I think that safe to save is no. There’s a huge …
S32
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue — AI offers potential to make civic information more accessible across different literacy levels, languages, and format pr…
S33
AI for Good Technology That Empowers People — The AI for Good initiative, launched in 2017, has evolved from a concept-focused summit addressing the “fear, promise, a…
S34
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Thank you. Thank you so much. Excellency, ladies and gentlemen, I guess I should say good evening. We all recognize arti…
S35
Skilling and Education in AI — The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impa…
S36
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Development | Sociocultural Noted the declining enrollment in STEM subjects across Africa as a concrete example of the …
S37
Powering AI Global Leaders Session AI Impact Summit India — Education as a lever to close the gap
S38
Disinformation and Misinformation in Online Content and its Impact on Digital Trust — Christine Strutt: Good afternoon everyone and thank you for joining our session that I’ve loosely renamed More Truth Les…
S39
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — Kenneth Pentimonti: Well thanks Nisha, it’s a pleasure to be here and yeah I think there are multiple benefits to invest …
S40
[Tentative Translation] — –  In order to promote the creation of needs-pull innovation by the government, the government will promote the new Jap…
S41
Strategy — The infusion of AI, Machine Learning and data analytics into various aspects of entrepreneurship have transformed the en…
S42
Building Trust through Transparency — In conclusion, the analysis emphasizes the urgent need to address the detrimental effects of opacity and corruption with…
S43
Session — Improving overall governance systems is necessary to build confidence in elections
S44
WS #82 A Global South perspective on AI governance — Collaboration and Inclusivity Lufuno T Tshikalange: Thank you, Dr. Melody, and thank you for having us here today. I…
S45
Developing capacities for bottom-up AI in the Global South: What role for the international community? — ## Practical Applications and Examples Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. Good. Great. We heard from …
S46
Why science matters in global AI governance — This discussion focused on the critical role of science in international AI governance, centered around the United Natio…
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S48
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Significant challenges remain, including the economic sustainability of public interest media, data sovereignty concerns…
S49
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — Clear institutional mandates, accountability, and independent oversight are also required to ensure the effective implem…
S50
What is it about AI that we need to regulate? — Beyond physical infrastructure, the discussions highlighted the need for comprehensive ecosystem development. In WS #231,…
S51
UNSC meeting: Peace and common development — Algeria: Thank you, Mr. President. I would like to congratulate China for assuming the presidency of the Council of the M…
S52
WS #133 Platform Governance and Duty of Care — High level of consensus on fundamental principles despite representing different jurisdictions and regulatory contexts. …
S53
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation: Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S54
How Trust and Safety Drive Innovation and Sustainable Growth — Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was unexpected cons…
S55
WS #438 Digital Dilemma: AI Ethical Foresight vs. Regulatory Roulette — Alexandra supports the UK’s approach of not regulating with specific legislation but letting sectoral authorities handle…
S56
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Aurelie Jacquet: Thank you. So, following on Wansi’s point, I think what’s important to know is it’s actually good to se…
S57
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, speci…
S58
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S59
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S60
Facilitating an integrated approach to digital issues — Speed: In a world where communications have become instant, implementation of solutions must be made in phases, so that …
S61
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — In its collaboration with regulators, ISO defines itself as a facilitator of public policy implementation, rather than a…
S62
Resilient infrastructure for a sustainable world — An unexpected consensus emerged around the tension between the time needed for proper standards development and the rapi…
S63
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — High level of consensus with significant implications for submarine cable governance. The agreement suggests a mature un…
S64
WS #162 Overregulation: Balance Policy and Innovation in Technology — These key comments shaped the discussion by moving it from a binary debate about regulation vs. non-regulation to a more…
S65
WS #179 Navigating Online Safety for Children and Youth — 2. Principle-Based Policies: There was a call for developing flexible, principle-based policies rather than prescriptive…
S66
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Consensus level: High level of consensus with significant implications for AI governance policy. The agreement among indu…
S67
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Qian Xiao:OK, well, I’m doing a lot of research on the international governance of AI. And from our perspective, we thin…
S68
India's AI Leap Policy to Practice with AIP2 — This established the conceptual framework for the entire discussion, moving away from standardized solutions toward cont…
S69
Networking Session #37 Mapping the DPI stakeholders? — Implementation must be contextualized to local conditions rather than using one-size-fits-all solutions, starting from w…
S70
Building Inclusive Societies with AI — Summary: Both speakers strongly agree that uniform solutions fail to address the diverse challenges faced by different ca…
S71
India's AI Leap Policy to Practice with AIP2 — With substantial funding available—MeitY Startup Hub manages almost 1,000 crores while the India AI mission has allocated…
S72
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel argues that the fundamental measure of success for countries and companies in the AI era will be their capacity to…
S73
Harmonizing High-Tech: The role of AI standards as an implementation tool — Bilel Jamoussi: Thank you very much, Onohe-san. I promised our Deputy Sec-Gen and the UNESCO ADG that we will finish on t…
S74
WS #283 AI Agents: Ensuring Responsible Deployment — Anne McCormick: Thank you, Anne McCormick, EY, Global Head of Public Policy. I’m interested in this context of policy no…
S75
AI for Good Technology That Empowers People — The AI for Good initiative, launched in 2017, has evolved from a concept-focused summit addressing the “fear, promise, a…
S76
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Thank you. Thank you so much. Excellency, ladies and gentlemen, I guess I should say good evening. We all recognize arti…
S77
India's AI Leap Policy to Practice with AIP2 — She advocates for a comprehensive approach to AI diffusion that prioritizes building the foundational infrastructure and…
S78
AI for Good Technology That Empowers People — Hello. Let me start with a question: What if the last thing that humans ever invent is invention itself? Now what do I mean…
S79
Powering AI Global Leaders Session AI Impact Summit India — Education as a lever to close the gap
S80
Skilling and Education in AI — And then the data that I’m submitting into the system, simply by interacting with AI, I’m submitting data and providing …
S81
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — – Abhishek Agarwal – Tomas Lamanauskas Development | Sociocultural Noted the declining enrollment in STEM subjects acro…
S82
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S83
Powering AI Global Leaders Session AI Impact Summit India — This discussion features Chris Lehane, OpenAI’s Chief Global Affairs Officer, speaking at an AI Impact Summit in Delhi a…
S84
https://app.faicon.ai/ai-impact-summit-2026/indias-ai-leap-policy-to-practice-with-aip2 — Just prior to this I was having a conversation with a large corporate and how they can actually use startups as a cataly…
S85
https://dig.watch/event/india-ai-impact-summit-2026/keynote-rishad-premji — Government initiatives to train 10 million young people in AI, along with industry partnerships with universities, are e…
S86
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — Kenneth Pentimonti: Well thanks Nisha, it’s a pleasure to be here and yeah I think there are multiple benefits to invest …
S87
[Tentative Translation] — –  In order to promote the creation of needs-pull innovation by the government, the government will promote the new Jap…
S88
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Impact: This insight fundamentally redirected the conversation from technical AI development to practical implementation …
S89
Building Trust through Transparency — In conclusion, the analysis emphasizes the urgent need to address the detrimental effects of opacity and corruption with…
S90
Session — Improving overall governance systems is necessary to build confidence in elections
S91
The Declaration for the Future of the Internet: Principles to Action — Social media, as a tool for communication, acts as a barometer for public sentiment, thus requiring freedom for the expr…
S92
WS #82 A Global South perspective on AI governance — Collaboration and Inclusivity Lufuno T Tshikalange: Thank you, Dr. Melody, and thank you for having us here today. I…
S93
Building Scalable AI Through Global South Partnerships — yeah good good to start okay thank you so much thank you anchor. Thanks for your time uh and here we are and thanks uh S…
S94
Building Scalable AI Through Global South Partnerships — And this particular event gave us that opportunity. I think we were very clear that what we wanted to do was to let peop…
S95
WS #462 Bridging the Compute Divide a Global Alliance for AI — Alisson emphasizes that effective inclusion of Global South voices requires multi-stakeholder approaches that include la…
S96
WS #100 Integrating the Global South in Global AI Governance — – Creation of data commons and public data sharing (Martin Roeske) 1. Data Generation and Sharing 1. Technology Gap an…
S97
Lightning Talk #15 Climate Smart Digital Ag for African Smallholders — The tone was collaborative, optimistic, and solution-oriented throughout the conversation. The speakers consistently bui…
S98
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S99
Leaders TalkX: Accelerating global access to information and knowledge in the digital era — The discussion maintained a consistently collaborative, optimistic, and solution-oriented tone throughout. Speakers were…
S100
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S101
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S102
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue — The discussion maintained a serious, analytical tone throughout, reflecting the gravity of the subject matter. While spe…
S103
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgr…
S104
WS #69 Beyond Tokenism Disability Inclusive Leadership in Ig — The discussion maintained a constructive and collaborative tone throughout, characterized by professional expertise and …
S105
Ministerial Roundtable — The discussion maintained a collaborative and constructive tone throughout, with ministers sharing both achievements and…
S106
DC-3 & DC-DDHT: Cybersecurity in Community Networks and digital health technologies: Securing the Commons — The tone of the discussion was informative and collaborative, with speakers sharing experiences and recommendations from…
S107
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S108
Ethical AI_ Keeping Humanity in the Loop While Innovating — We have chosen globally, including in Europe… to actually not regulate [social media] we have let the social media dif…
S109
Responsible AI in India Leadership Ethics & Global Impact — These key comments fundamentally shaped the discussion by establishing a progression from theoretical principles to prac…
S110
Keynote-Nikesh Arora — Overall Tone: The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S111
Information Society in Times of Risk — The discussion maintained a consistently academic and collaborative tone throughout. It was professional and research-fo…
S112
[Opening] IGF Parliamentary Track: Welcome and Introduction — The tone is consistently formal, welcoming, and optimistic throughout. It maintains a diplomatic and collaborative atmos…
S113
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S114
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S115
Towards a Safer South Launching the Global South AI Safety Research Network — Thank you, Urvashi. And first and foremost, I’d like to congratulate all the team, the network which has brought this to…
S116
UN Human Rights Council: High level discussion on AI and human rights — But like previous technology waves, today’s Gen AI and digital technologies bring both risks and opportunities. And I’m …
S117
Announcement of New Delhi Frontier AI Commitments — This argument positions India as a leader in developing AI governance frameworks that specifically represent the interes…
S118
AI for Good Innovation Factory Grand Finale 2025 — Offline Functionality: Multiple solutions emphasized the ability to function without reliable internet connectivity,…
S119
Beyond universality: the meaningful connectivity imperative | IGF 2023 — Over the past 30 years, the number of Internet users surged from a few million to 5.3 billion. Yet the potential of the …
S120
Open Forum #50 Digital Innovation and Transformation in the UN System — Fui Meng Liew: Thank you, Dino. Dino, because we are hearing online from the room a bit choppy, the voice, so please …
S121
Revitalizing Universal Service Funds to Promote Inclusion | IGF 2023 — Additionally, she co-chairs the Africa Community Network Summit and serves as a member of the MAG, the IGF Multi-Stakeho…
S122
AI as critical infrastructure for continuity in public services — A patient walking in at 2 a .m. on a Sunday morning, you know, it, the system needs to be out. It needs to be resilient …
S123
The Innovation Beneath AI: The US-India Partnership powering the AI Era — And if you remember, Amazon for a long time was known for one-click ordering. Well, none of us really want to do that b…
S124
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — It’s chips, chips and computing infrastructure. The next layer above it is the cloud infrastructure, the cloud services….
S125
Scaling AI for Billions_ Building Digital Public Infrastructure — Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is being then used to …
S126
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Cosmas Luckyson Zavazava: Thank you. Thank you very much. Thank you, Chair. I know that we have colleagues who are parti…
S127
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Gitanjali Sah: through a newly created forum, the Internet Governance Forum. Two decades have passed since then. We all l…
S128
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — Furthermore, it highlights the significance of collaboration between the public and private sectors in future skills tra…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Doreen Bogdan-Martin
5 arguments · 119 words per minute · 660 words · 332 seconds
Argument 1
Connectivity as the foundational solution for AI access
EXPLANATION
Doreen stresses that AI cannot be realised for everyone while a large share of the world remains offline. She argues that universal connectivity is the prerequisite for AI diffusion and that infrastructure projects are essential to bridge this gap.
EVIDENCE
She notes that a third of humanity is still offline, making AI for all impossible without connectivity, and cites initiatives such as the GIGA school-connectivity programme with UNICEF, the Digital Coalition aiming to connect the hardest-to-connect populations, and a target of US$100 billion in connectivity pledges this year, with US$80 billion already secured [13-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ITU Ministerial Roundtable notes emphasize infrastructure resilience and inclusive digital access as prerequisites for AI diffusion [S26].
MAJOR DISCUSSION POINT
Connectivity as the foundational solution for AI access
Argument 2
Skills are the engine of digital agency and essential for AI diffusion
EXPLANATION
Doreen describes skills as the driver that turns connectivity into meaningful digital agency. She highlights national programmes and multistakeholder coalitions that are building AI‑related competencies at scale.
EVIDENCE
She recounts a conversation with a young leader who likened connectivity to digital agency and says “Skills are that engine of agency” [21-22]; she points to India’s Future Skills Program that upskills thousands of students and to the ITU Skilling Coalition with about 70 partners offering more than 180 learning resources in 13 languages [23-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Doreen’s emphasis on skills is echoed in the upskilling discussion and India’s skilling coalition with 70 partners and 180 resources [S19][S1].
MAJOR DISCUSSION POINT
Skills are the engine of digital agency and essential for AI diffusion
AGREED WITH
Fred Werner, Rachel Adams, Brando Benifei
Argument 3
Standards ensure interoperability, embed trust, and combat deepfakes
EXPLANATION
Doreen argues that technical standards are needed so AI systems can work together, foster trust, and protect societies from malicious uses such as deep‑fakes. She cites concrete standard‑setting work undertaken by ITU and partners.
EVIDENCE
She explains that standards complement solutions and skills for interoperability and trust, referencing the AI Standards Exchange Database that holds over 850 standards and technical publications, including multimedia authenticity standards aimed at tracing and combating deep-fakes [27-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder cooperation highlights standards for interoperability and trust, aligning with Doreen’s point on standards combating deepfakes [S23].
MAJOR DISCUSSION POINT
Standards ensure interoperability, embed trust, and combat deepfakes
AGREED WITH
Rachel Adams, Brando Benifei, Fred Werner
DISAGREED WITH
Brando Benifei, Fred Werner
Argument 4
AI diffusion must be tailored to local contexts through flexible, inclusive approaches rather than a one-size-fits-all model
EXPLANATION
Doreen stresses that AI solutions need to be adaptable to the diverse development stages and cultural contexts of different countries. She calls for flexibility and inclusivity so that AI benefits all societies, not just a uniform set of technologies.
EVIDENCE
She states that “it’s not a one-size-fits-all model” and emphasizes the need for flexibility and inclusivity across different AI approaches for all parts of the world, regardless of development stage, citing India’s leadership in turning AI ambitions into results while keeping a human-centered focus [2-5][6-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Remarks about avoiding one-size-fits-all models and promoting flexible, inclusive AI approaches are documented in the India AI Leap policy discussion [S18].
MAJOR DISCUSSION POINT
Flexibility and inclusivity over one-size-fits-all AI models
Argument 5
Multilingual digital public infrastructure is critical for inclusive AI services
EXPLANATION
Doreen highlights that delivering government services in many languages expands AI’s reach to linguistically diverse populations, especially in rural areas. Multilingual platforms ensure that AI benefits are accessible to all citizens regardless of language barriers.
EVIDENCE
She mentions the Bhashini platform delivering government services in 22 languages and AI-powered digital public infrastructure in health care and financial inclusion that serves all Indians irrespective of economic status or skill level, especially in rural communities [8-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI for Multilingual Inclusion workshop stresses providing public services in multiple languages to ensure equity [S27].
MAJOR DISCUSSION POINT
Multilingual digital public infrastructure for inclusive AI
Fred Werner
4 arguments · 179 words per minute · 976 words · 326 seconds
Argument 1
Education and skills training should be the primary use of a large funding pool for AI diffusion
EXPLANATION
Fred states that if a billion‑dollar fund were available, the most effective allocation would be toward education and skills development, because the talent gap is the biggest barrier to AI adoption in developing economies.
EVIDENCE
He says, “I think education skills … that would probably be the best spend of money to start there,” emphasizing the massive skills gap from grade school to graduate level after his visit to South Africa’s AI for Good Impact Africa event [217].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The skilling and education focus in AI diffusion is discussed as a needed economic model in the AI Skilling literature [S28].
MAJOR DISCUSSION POINT
Education and skills training should be the primary use of a large funding pool for AI diffusion
AGREED WITH
Doreen Bogdan-Martin, Rachel Adams, Brando Benifei
Argument 2
Rapid coordination of international AI standards (e.g., AI Standards Summit, Exchange Database) is critical for safe diffusion
EXPLANATION
Fred highlights how quickly the ITU and partners responded to the Global Digital Compact call by launching an International AI Standards Summit Series and an AI Standards Exchange Database, underscoring the need for fast, coordinated standard‑setting.
EVIDENCE
He notes that ITU and partners launched the International AI Standards Summit Series in less than three weeks after the call, held the first summit in 2024, and created the International AI Standards Exchange Database mentioned earlier by Doreen [173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rapid launch of the International AI Standards Summit series is described in the multistakeholder standards cooperation overview [S23].
MAJOR DISCUSSION POINT
Rapid coordination of international AI standards (e.g., AI Standards Summit, Exchange Database) is critical for safe diffusion
AGREED WITH
Doreen Bogdan-Martin, Rachel Adams, Brando Benifei
DISAGREED WITH
Brando Benifei
Argument 3
Direct a billion‑dollar investment toward education and skills development to close the AI talent gap
EXPLANATION
Fred reiterates that the most impactful way to spend a large AI‑diffusion fund is to invest in education and skills across all levels, thereby closing the talent gap that hampers AI uptake.
EVIDENCE
He repeats that “education skills … would probably be the best spend of money” after observing the skills shortage during his South Africa trip [217].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of channeling large funds into education and skills for AI adoption is highlighted in the skilling and education discussion [S28].
MAJOR DISCUSSION POINT
Direct a billion‑dollar investment toward education and skills development to close the AI talent gap
Argument 4
AI for good solutions must be vetted for safety, security, ethics, sustainability and human rights before scaling
EXPLANATION
Fred points out that promising AI‑for‑good applications raise concerns about privacy, security, ethical use, environmental impact and respect for human rights. He argues that standards and governance are needed to embed these safeguards before large‑scale deployment.
EVIDENCE
He describes a voice-based blood-sugar detection prototype and then asks whether such solutions are safe, secure, ethical, respect human rights, involve Global South participation, and are energy-sustainable, suggesting standards as a way to embed these safeguards [150-155][164-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ensuring safe AI deployment and monitoring agents is highlighted as essential for AI-for-good solutions [S31].
MAJOR DISCUSSION POINT
Ensuring responsible AI for good through safety, ethics and standards
AGREED WITH
Doreen Bogdan-Martin, Rachel Adams, Brando Benifei
Rachel Adams
5 arguments · 146 words per minute · 850 words · 347 seconds
Argument 1
Widespread AI illiteracy creates a democratic gap that hinders meaningful participation
EXPLANATION
Rachel reports that a large proportion of South Africans lack basic understanding of AI, which creates a gap between citizens and AI‑driven public services, undermining democratic oversight and participation.
EVIDENCE
She cites a survey of over 3,000 South Africans in all 11 official languages, finding that two-thirds have no meaningful grasp of AI, with one-third never having heard of it and another third unable to explain it, leading to a “significant democratic gap” in public-service AI adoption [134-142].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Surveys indicating a democratic gap and the need for AI-enabled civic information are noted in the democratic gap analysis and AI democratization discussions [S1][S32].
MAJOR DISCUSSION POINT
Widespread AI illiteracy creates a democratic gap that hinders meaningful participation
AGREED WITH
Doreen Bogdan-Martin, Fred Werner, Brando Benifei
Argument 2
Inclusive standard‑setting processes must deliberately involve Global South stakeholders to avoid dominance by well‑resourced actors
EXPLANATION
Rachel stresses that standard‑setting must be inclusive, with meaningful participation from Africa, Latin America and Asia, supported by dedicated funding and leadership, to prevent the process from being captured by well‑resourced northern actors.
EVIDENCE
She points out the need for representation from the Global South in standards work, calling for deliberate funding, committee leadership and co-authorship to ensure inclusion in the development of generative-AI and agentic-AI standards [193-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder standards processes call for Global South participation, matching Rachel’s concern [S23].
MAJOR DISCUSSION POINT
Inclusive standard‑setting processes must deliberately involve Global South stakeholders to avoid dominance by well‑resourced actors
AGREED WITH
Doreen Bogdan-Martin, Fred Werner, Brando Benifei
Argument 3
Strengthen state institutions (e.g., competition commissions, human‑rights bodies) to protect citizens and manage labor displacement risks
EXPLANATION
Rachel proposes investing in the capacity of independent state bodies—such as competition, gender‑equality, human‑rights commissions and information regulators—to safeguard citizens against monopolistic practices and the looming risk of AI‑driven job loss.
EVIDENCE
She recommends allocating resources to build the capacity of these institutions so they can champion citizens’ rights in the face of big-tech monopolies and labour displacement [221-223].
MAJOR DISCUSSION POINT
Strengthen state institutions (e.g., competition commissions, human‑rights bodies) to protect citizens and manage labor displacement risks
Argument 4
AI diffusion requires integrated governance frameworks that go beyond infrastructure to include policy, regulation and oversight
EXPLANATION
Rachel argues that providing technical infrastructure alone is insufficient; effective AI diffusion must be paired with governance mechanisms that ensure democratic participation, accountability and protection against misuse.
EVIDENCE
She states that diffusion cannot be only about infrastructure and technical delivery; it must be scaled with governance efforts, highlighting the need for policy and oversight [143-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Governance Dialogue stresses that diffusion must be paired with policy and oversight mechanisms [S21].
MAJOR DISCUSSION POINT
Governance as a necessary complement to AI infrastructure
Argument 5
A principle‑based global consensus on AI (accountability, transparency, safety, human oversight) is preferable to strict regulatory convergence, allowing regional adaptation
EXPLANATION
Rachel suggests that instead of imposing uniform regulations, the international community should agree on core principles that can be adapted locally, enabling flexibility while maintaining shared values.
EVIDENCE
She notes that a global consensus around principles such as accountability, transparency, safety and human oversight is more useful than a single regulatory regime, and that regions may adopt gold or minimum standards accordingly [188-191].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for flexible, principle-based approaches rather than uniform regulation appear in the India AI Leap policy flexible approach notes [S18].
MAJOR DISCUSSION POINT
Principle‑based global AI consensus over uniform regulation
AGREED WITH
Doreen Bogdan-Martin, Brando Benifei
DISAGREED WITH
Brando Benifei
Brando Benifei
4 arguments · 109 words per minute · 1,145 words · 625 seconds
Argument 1
The EU AI Act illustrates how clear risk boundaries and transparency build trust in AI deployment
EXPLANATION
Brando explains that the EU AI Act defines high‑risk AI uses, sets clear limits, and requires transparency, which together foster trust and enable responsible diffusion of AI technologies.
EVIDENCE
He notes that the Act identifies high-risk areas, establishes clear boundaries for regulated versus unregulated uses, and mandates transparency, thereby building trust in AI deployment [116-124].
MAJOR DISCUSSION POINT
The EU AI Act illustrates how clear risk boundaries and transparency build trust in AI deployment
DISAGREED WITH
Rachel Adams
Argument 2
Prioritize digital literacy and capacity building for civil‑society actors to ensure responsible AI adoption
EXPLANATION
Brando argues that improving AI literacy and building capacity among civil‑society organisations is essential for responsible diffusion, especially as AI development accelerates.
EVIDENCE
He states that “literacy, understanding, build consciousness, building capacity also among civil society actors is extremely important when we see a big acceleration of development of AI” [219-220].
MAJOR DISCUSSION POINT
Prioritize digital literacy and capacity building for civil‑society actors to ensure responsible AI adoption
AGREED WITH
Doreen Bogdan-Martin, Rachel Adams
Argument 3
Timely development and enforcement of AI standards is essential; mechanisms must be introduced to prevent private‑sector delays
EXPLANATION
Brando observes that standards are often postponed due to industry resistance and calls for mechanisms that set deadlines for standards adoption, ensuring that governance can be implemented effectively.
EVIDENCE
He explains that standards are sometimes deliberately delayed by private actors and that the European context is building mechanisms to impose time limits on standards development [201-202].
MAJOR DISCUSSION POINT
Implementing deadline mechanisms to avoid standards delays
AGREED WITH
Doreen Bogdan-Martin, Fred Werner, Rachel Adams
DISAGREED WITH
Fred Werner
Argument 4
AI can be weaponized for mass surveillance and repression in fragile contexts; safeguards are needed to protect civil liberties
EXPLANATION
Brando warns that AI technologies can enable pervasive control and surveillance, especially in institutionally fragile countries, and stresses the importance of safeguards to prevent human‑rights violations.
EVIDENCE
He highlights that AI can be used for mass surveillance, repression of freedoms, and pervasive control without public understanding, particularly in Global South contexts [202-204].
MAJOR DISCUSSION POINT
Preventing AI‑enabled mass surveillance in fragile states
Dr. Panneerselvam Madanagopal
2 arguments · 138 words per minute · 959 words · 414 seconds
Argument 1
Startups are AI‑native, agile players that can transform SMEs and large enterprises
EXPLANATION
Dr. Panneerselvam describes startups as having inherent AI expertise, talent, and agility, making them uniquely positioned to help small‑ and medium‑sized enterprises as well as large corporations adopt AI‑driven transformation.
EVIDENCE
He says startups come in as “AI natives” with significant technology understanding and agility, and that they can provide the needed transformation for SMEs and large enterprises [54-56].
MAJOR DISCUSSION POINT
Startups are AI‑native, agile players that can transform SMEs and large enterprises
AGREED WITH
Moderator
Argument 2
The MeitY Startup Hub provides mentorship, market access, and substantial funding to empower deep‑tech startups
EXPLANATION
He outlines the Hub’s three‑M model—mentorship, market access, and money—detailing how it supports startups from ideation through scaling, and highlights the large funding pool available from government and private sources.
EVIDENCE
He describes the three M’s (mentorship, market access, money) and notes that the Hub can fund up to a thousand crores for startups, with an additional 8,000 crores from the India AI Mission, emphasizing that “there is absolutely no dearth of money in the Indian market” [61-70].
MAJOR DISCUSSION POINT
The MeitY Startup Hub provides mentorship, market access, and substantial funding to empower deep‑tech startups
Dr. Panneerselvam Madanagopal (continued)
2 arguments
Argument 1
AI technology can overshoot the capacity of SMEs, creating a mismatch that startups must bridge by aligning solutions with business needs
EXPLANATION
Dr. Panneerselvam notes that many small and medium enterprises face a “technology overshoot” where AI capabilities exceed their ability to integrate them, and startups can act as intermediaries to tailor AI to practical business workflows.
EVIDENCE
He describes cases where technology overshoots SME needs, making integration difficult, and emphasizes the opportunity for startups to build an “AI bridge” between technology and business requirements [77-83].
MAJOR DISCUSSION POINT
Bridging the technology‑business gap for SMEs
Argument 2
Responsible AI deployment demands ethical awareness and proactive risk management
EXPLANATION
Dr. Panneerselvam stresses that while AI represents a tectonic shift, it carries significant responsibility; developers must be mindful of ethical implications and potential negative impacts to avoid misuse.
EVIDENCE
He remarks that the AI earthquake comes with a lot of responsibility and that we need to be extremely responsible in what we do with the technology [88-89].
MAJOR DISCUSSION POINT
Ethical responsibility in AI deployment
Moderator
2 arguments · 166 words per minute · 1,387 words · 498 seconds
Argument 1
AI diffusion should be guided by a practical implementation playbook rather than a high‑level strategy, focusing on five interacting dimensions
EXPLANATION
The Moderator introduces the Global South AI Diffusion Playbook as an implementation guide covering infrastructure, data and trust, procurement institutions, skills, and market shaping, emphasizing actionable steps over abstract strategy.
EVIDENCE
He describes the playbook as a framework built around five interacting dimensions and notes that it is intended as an implementation guide rather than a strategy document [42-44].
MAJOR DISCUSSION POINT
Implementation‑focused AI diffusion playbook
Argument 2
Startups serve as the primary transmission mechanism to convert AI capability into real economic impact
EXPLANATION
The Moderator highlights that moving from AI capability to tangible economic outcomes relies on startups, which connect government policy with entrepreneurial energy and enable innovations to scale from labs to markets.
EVIDENCE
He states that diffusion is about moving from capability to real economic impact and that startups are the obvious transmission mechanism, citing Dr. Panneerselvam’s experience with the MeitY Startup Hub [45-46].
MAJOR DISCUSSION POINT
Startups as engines of AI economic impact
AGREED WITH
Dr. Panneerselvam Madanagopal
Agreements
Agreement Points
Skills and digital literacy are essential for AI diffusion
Speakers: Doreen Bogdan-Martin, Fred Werner, Rachel Adams, Brando Benifei
Skills are the engine of digital agency and essential for AI diffusion
Education and skills training should be the primary use of a large funding pool for AI diffusion
Widespread AI illiteracy creates a democratic gap that hinders meaningful participation
Prioritize digital literacy and capacity building for civil‑society actors to ensure responsible AI adoption
All four speakers stress that building skills, digital literacy and education is a prerequisite for turning connectivity into meaningful agency, avoiding a democratic gap and enabling responsible AI use. Doreen links skills to digital agency [21-25]; Fred says a billion-dollar fund should first go to education and skills [217]; Rachel points to two-thirds of South Africans lacking AI understanding and the resulting democratic gap [134-142]; Brando calls for literacy and capacity building among civil-society actors [219-220].
Standards are crucial for interoperability, trust and safe AI diffusion
Speakers: Doreen Bogdan-Martin, Fred Werner, Rachel Adams, Brando Benifei
Standards ensure interoperability, embed trust, and combat deepfakes
Rapid coordination of international AI standards (e.g., AI Standards Summit, Exchange Database) is critical for safe diffusion
Inclusive standard‑setting processes must deliberately involve Global South stakeholders to avoid dominance by well‑resourced actors
Timely development and enforcement of AI standards is essential; mechanisms must be introduced to prevent private‑sector delays
The speakers converge on the necessity of standards to make AI systems work together, build trust, and address misuse. Doreen highlights standards for interoperability and deep-fake mitigation [27-32]; Fred describes rapid international standards coordination [173-176]; Rachel stresses inclusive, globally representative standard-setting [193-196]; Brando warns about private-sector delays and calls for deadline mechanisms [201-202].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with multistakeholder calls for AI standards to ensure interoperability and trust, as highlighted in IGF 2023 discussions on the role of standards for AI diffusion [S56] and the push for common global standards to enable cooperation [S58]. It also reflects the broader consensus that standards underpin regulatory frameworks and provide guardrails for responsible AI [S59].
AI diffusion must be locally adapted, flexible and inclusive rather than one‑size‑fits‑all
Speakers: Doreen Bogdan-Martin, Rachel Adams, Brando Benifei
AI diffusion must be tailored to local contexts through flexible, inclusive approaches rather than a one‑size‑fits‑all model
A principle‑based global consensus on AI (accountability, transparency, safety, human oversight) is preferable to strict regulatory convergence, allowing regional adaptation
Prioritize digital literacy and capacity building for civil‑society actors to ensure responsible AI adoption
All three emphasize that AI solutions should respect diverse local realities, using flexible, principle-based frameworks and capacity-building rather than uniform mandates. Doreen warns against a one-size-fits-all model and calls for flexibility [2-5][6-7]; Rachel advocates a principle-based consensus that can be regionally adapted [188-191]; Brando underlines the need for literacy and capacity building as part of a locally-responsive approach [219-220].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent policy dialogues emphasize contextualized implementation over uniform solutions, such as India’s AI Leap framework advocating flexible, locally-tailored approaches [S68] and DPI stakeholder mapping stressing adaptation to local conditions [S69]. Inclusive AI discussions also note that one-size-fits-all models fail to address diverse challenges [S70].
Startups are the primary transmission mechanism to turn AI capability into economic impact
Speakers: Moderator, Dr. Panneerselvam Madanagopal
Startups serve as the primary transmission mechanism to convert AI capability into real economic impact
Startups are AI‑native, agile players that can transform SMEs and large enterprises
Both the moderator and Dr. Panneerselvam agree that startups bridge the gap between AI technology and market deployment, driving economic benefits. The moderator frames startups as the transmission mechanism from capability to impact [45-46]; Dr. Panneerselvam describes startups as AI-native, agile agents that can transform businesses [54-56].
POLICY CONTEXT (KNOWLEDGE BASE)
National AI strategies highlight startups as engines of economic transformation, with India’s AI mission focusing on capacity building for startups to drive broader impact [S71] and industry leaders noting that token-generation capabilities of firms like Cisco will shape prosperity and security [S72].
Robust governance (standards, ethics, institutions) is needed beyond mere infrastructure
Speakers: Doreen Bogdan-Martin, Rachel Adams, Brando Benifei, Fred Werner
Standards ensure interoperability, embed trust, and combat deepfakes
AI diffusion cannot be something that is only about putting in place the infrastructure … it must be scaled with governance efforts
I insist, in contexts that might… an ethical approach is needed. Without ethical approaches, any rule will not be able to function.
AI for good solutions must be vetted for safety, security, ethics, sustainability and human rights before scaling
All four stress that infrastructure alone is insufficient; governance mechanisms-standards, ethical frameworks, and strong institutions-are essential for trustworthy AI diffusion. Doreen links standards to trust and deep-fake combat [27-32]; Rachel notes diffusion must be paired with governance [143-144]; Brando argues ethical approaches are prerequisite for effective rules [206-209]; Fred calls for safety, security, ethics, and human-rights vetting of AI-for-good solutions [164-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance is identified as a critical layer beyond technical infrastructure in DPI strategies, requiring clear mandates and oversight [S49], while platform governance consensus underscores the need for ethical and institutional frameworks [S52]. Cross-sector AI governance agreements further stress standards and ethics as foundational [S66].
Similar Viewpoints
Both emphasize a pragmatic, implementation‑oriented approach (solutions, skills, standards) over abstract strategy documents. Doreen outlines the three S’s as practical pillars [11-13]; the Moderator describes the Global South AI Diffusion Playbook as an implementation guide, not a strategy [42-44].
Speakers: Doreen Bogdan-Martin, Moderator
Solutions, skills, and standards as practical pillars for AI diffusion
Implementation‑focused AI Diffusion Playbook rather than a high‑level strategy
Both argue that clear, principled boundaries (whether via risk‑based regulation or overarching principles) are essential to foster trust in AI. Rachel calls for a global principle‑based consensus allowing regional adaptation [188-191]; Brando explains how the EU AI Act defines high‑risk areas and transparency to build trust [116-124].
Speakers: Rachel Adams, Brando Benifei
A principle‑based or risk‑boundary approach builds trust and enables responsible AI deployment
The EU AI Act illustrates how clear risk boundaries and transparency build trust in AI deployment
Both highlight the urgency of coordinated standards work and the need for inclusive participation. Fred recounts the rapid launch of the International AI Standards Summit and Exchange Database [173-176]; Rachel stresses the necessity of meaningful Global South representation in standards development [193-196].
Speakers: Fred Werner, Rachel Adams
Rapid coordination of standards is essential for safe AI diffusion
Inclusive standard‑setting processes must deliberately involve Global South stakeholders to avoid dominance by well‑resourced actors
Unexpected Consensus
Cross‑regional convergence on using principle‑based, risk‑aware frameworks rather than uniform regulation
Speakers: Brando Benifei, Rachel Adams
The EU AI Act illustrates how clear risk boundaries and transparency build trust in AI deployment
A principle‑based global consensus on AI (accountability, transparency, safety, human oversight) is preferable to strict regulatory convergence, allowing regional adaptation
Despite coming from different regulatory cultures (EU legislative perspective vs African policy-making), both speakers converge on the idea that AI governance should rely on clear, principle-oriented boundaries rather than imposing a single, detailed regulatory regime. This alignment across continents was not anticipated given their distinct institutional backgrounds. Brando describes the EU AI Act’s risk-based approach and its trust-building effect [116-124]; Rachel advocates for a flexible, principle-based global consensus that can be regionally adapted [188-191].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums have advocated principle-based policies that can adapt across jurisdictions, including the UN-backed principle-based approach for online safety [S65] and calls for flexible governance to keep pace with rapid AI evolution [S67]. Evidence from AI regulatory debates shows preference for targeted interventions over sweeping legislation [S53][S54][S55][S64].
Overall Assessment

The discussion reveals strong convergence among speakers on five core pillars: (1) building skills and digital literacy; (2) developing and deploying inclusive, timely standards; (3) ensuring AI solutions are locally adaptable and flexible; (4) leveraging startups as engines of economic impact; and (5) embedding robust governance, ethics and institutional capacity alongside technical infrastructure.

High consensus – the majority of participants, representing multilateral agencies, regional bodies, academia and the private sector, articulate overlapping arguments across these pillars. This alignment suggests a solid foundation for coordinated policy action, joint funding mechanisms and multistakeholder initiatives to advance AI diffusion in the Global South.

Differences
Different Viewpoints
Role and timing of AI standards for trustworthy diffusion
Speakers: Doreen Bogdan-Martin, Brando Benifei, Fred Werner
Standards ensure interoperability, embed trust, and combat deepfakes
Timely development and enforcement of AI standards is essential; mechanisms must be introduced to prevent private‑sector delays
Rapid coordination of international AI standards (e.g., AI Standards Summit, Exchange Database) is critical for safe diffusion
Doreen argues that standards are essential to guarantee interoperability, trust and to fight deepfakes, positioning them as a core pillar of AI diffusion [27-32]. Brando counters that standards are often postponed by private-sector resistance, that some high-risk uses (e.g., under the EU AI Act) can be regulated without standards, and calls for deadline mechanisms to avoid delays [201-202]. Fred, by contrast, highlights how quickly the ITU mobilised a standards summit and database in less than three weeks, suggesting that rapid standards work is feasible [173-176]. The three speakers therefore disagree on how necessary, urgent and achievable standards development is.
POLICY CONTEXT (KNOWLEDGE BASE)
Standards bodies such as ISO position themselves as facilitators of policy implementation rather than policymakers, highlighting the timing of standard initiatives by national members [S61]. IGF sessions note the importance of coordinating standards development to match AI deployment timelines [S56][S73].
Regulatory approach – precise regulation versus principle‑based global consensus
Speakers: Brando Benifei, Rachel Adams
The EU AI Act illustrates how clear risk boundaries and transparency build trust in AI deployment
A principle‑based global consensus on AI (accountability, transparency, safety, human oversight) is preferable to strict regulatory convergence, allowing regional adaptation
Brando stresses that the EU AI Act’s explicit high-risk categories, clear limits and transparency requirements provide the trust needed for diffusion and argues that precise regulation is indispensable [116-124][205-208]. Rachel, on the other hand, proposes a softer, principle-based global framework that can be adapted locally, warning that a single regulatory regime could dominate and limit regional autonomy [188-191]. The two positions diverge on whether concrete regulation or flexible principles should drive trustworthy AI diffusion.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at various regulatory panels reveal a consensus favoring principle-based, sector-specific interventions over comprehensive new AI statutes, as seen in UK and Singapore regulator perspectives [S53][S54][S55] and broader debates on balancing policy and innovation [S64].
Perceived speed of standards development – delays versus rapid coordination
Speakers: Brando Benifei, Fred Werner
Timely development and enforcement of AI standards is essential; mechanisms must be introduced to prevent private‑sector delays
Rapid coordination of international AI standards (e.g., AI Standards Summit, Exchange Database) is critical for safe diffusion
Brando warns that standards are frequently postponed by industry actors and calls for mechanisms that impose deadlines to avoid governance gaps [201-202]. Fred points to the ITU’s ability to launch an International AI Standards Summit and an Exchange Database within three weeks of a call, presenting a contrasting view that rapid standards coordination is possible and already happening [173-176]. This unexpected clash shows differing perceptions of how quickly standards can be produced.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders have highlighted tension between the need for swift implementation and traditionally longer standards-development cycles, with calls for phased, rapid roll-outs to keep pace with technology [S60] and recognition of this tension in recent consensus statements [S62].
Unexpected Differences
Speed of standards development – perceived delays versus demonstrated rapid rollout
Speakers: Brando Benifei, Fred Werner
Timely development and enforcement of AI standards is essential; mechanisms must be introduced to prevent private‑sector delays
Rapid coordination of international AI standards (e.g., AI Standards Summit, Exchange Database) is critical for safe diffusion
Both speakers operate within the same multistakeholder ecosystem, yet Brando warns that standards are often postponed by private-sector actors and need deadline mechanisms [201-202], whereas Fred cites a concrete example where the ITU launched an international standards summit and database in under three weeks, portraying standards work as already fast and effective [173-176]. This contrast was not anticipated given their shared institutional background.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence of accelerated standards initiatives, such as timely AI standard releases reported at governance days, contrasts with concerns about lagging processes, echoing earlier observations on the speed-quality trade-off in standards work [S73][S62][S60].
Overall Assessment

The discussion revealed three main axes of disagreement: (1) the necessity, timing and implementation speed of AI standards; (2) whether AI governance should rely on precise regulatory frameworks (EU AI Act) or on flexible, principle‑based global consensus; and (3) differing perceptions of how quickly standards can be produced. While participants share common goals—universal AI access, trust, inclusion and capacity building—their divergent views on the mechanisms risk fragmenting efforts and could slow coordinated diffusion, especially across the Global South.

Moderate to high. The disagreements are substantive (standards vs regulation, speed of standards work) and could affect policy alignment and funding priorities, but there is still considerable overlap in overarching objectives, suggesting that consensus‑building mechanisms are feasible if the differing approaches are reconciled.

Partial Agreements
Both speakers agree that building digital skills is a prerequisite for AI diffusion. Doreen frames skills as the engine that turns connectivity into digital agency and cites national upskilling programmes [21-22]. Fred focuses on allocating large financial resources specifically to education and skills development, arguing that this would be the most effective use of a billion‑dollar fund [217]. Their shared goal is skill development, but they differ on the primary lever (policy‑driven upskilling programmes versus direct large‑scale funding).
Speakers: Doreen Bogdan-Martin, Fred Werner
Skills are the engine of digital agency
Education and skills training should be the primary use of a large funding pool for AI diffusion
Direct a billion‑dollar investment toward education and skills development to close the AI talent gap
Both aim to create trustworthy AI ecosystems. Doreen emphasizes technical standards as the vehicle for interoperability and deep‑fake mitigation [27-32]. Brando stresses that clear regulatory risk boundaries and transparency, as embodied in the EU AI Act, also generate trust [116-124]. They share the trust objective but propose different mechanisms—standards versus regulation.
Speakers: Doreen Bogdan-Martin, Brando Benifei
Standards ensure interoperability, embed trust, and combat deepfakes
The EU AI Act illustrates how clear risk boundaries and transparency build trust in AI deployment
Both stress the importance of inclusive participation in AI governance. Rachel calls for deliberate inclusion of Global South actors in standards‑setting through funding and leadership roles [193-196]. Brando highlights the need to raise AI literacy and build capacity among civil‑society organisations so they can engage responsibly with AI [219-220]. While Rachel focuses on the standards process, Brando focuses on civil‑society capacity; both converge on the broader goal of inclusive, participatory AI diffusion.
Speakers: Rachel Adams, Brando Benifei
Inclusive standard‑setting processes must deliberately involve Global South stakeholders to avoid dominance by well‑resourced actors
Prioritize digital literacy and capacity building for civil‑society actors to ensure responsible AI adoption
Takeaways
Key takeaways
Connectivity is the foundational infrastructure needed for AI access; without it, AI cannot reach a third of humanity that remains offline.
Skills development is the engine of digital agency; widespread AI illiteracy creates a democratic gap that hinders participation and responsible use.
Standards are essential for interoperability, trust, and combating deepfakes; inclusive, multi‑stakeholder standard‑setting must involve Global South actors.
Startups act as AI‑native, agile catalysts that can bridge technology and business needs, especially for SMEs and large enterprises.
A coordinated playbook (Global South AI Diffusion Playbook) and existing initiatives (GIGA school connectivity, Digital Coalition, Skilling Coalition, AI Standards Exchange Database, MeitY Startup Hub) provide concrete pathways for diffusion.
Funding priorities should focus first on education, digital literacy, and capacity building for civil‑society and state institutions to manage risks such as labor displacement and misuse of AI.
Resolutions and action items
* Launch of the Global South AI Diffusion Playbook as an implementation guide.
* Commitment to connect the hardest‑to‑connect communities: a target of $100 billion in connectivity pledges, with $80 billion already secured.
* Establishment of the Skilling Coalition (70 partners, 180 learning resources in 13 languages) to provide AI skills training.
* Creation of the AI Standards Exchange Database (850+ standards, including multimedia authenticity standards).
* MeitY Startup Hub to continue providing mentorship, market access, and up to INR 1,000 crore in funding for deep‑tech startups.
* Fred Werner's suggestion to allocate a hypothetical $1 billion toward education and skills development in developing economies.
* Brando Benifei's proposal to invest a similar amount in digital literacy and capacity building for civil‑society actors.
Unresolved issues
* Concrete mechanisms for moving AI pilots to large‑scale deployment, especially in the Global South.
* How to operationalise trust and ethics frameworks in everyday AI applications beyond high‑level principles.
* Effective strategies to close the AI skills gap across primary, secondary, and higher education levels.
* Ways to ensure rapid yet inclusive development of standards without being delayed by private‑sector resistance.
* Approaches to mitigate labor displacement and protect vulnerable workers as AI diffuses.
* Balancing the EU AI Act's risk‑based regulatory model with the needs and capacities of Global South jurisdictions.
Suggested compromises
* Adopt a flexible, non‑one‑size‑fits‑all approach: use the EU AI Act's clear risk boundaries for high‑risk uses while allowing lower‑risk applications to proceed with lighter oversight.
* Combine mandatory regulations for critical high‑risk AI with voluntary ethical frameworks for lower‑risk areas, ensuring both enforceability and innovation.
* Treat standards as either "gold" or "minimum" levels, allowing regions to adopt higher standards where possible while still meeting a baseline of safety and interoperability.
* Blend top‑down policy (e.g., national AI strategies) with bottom‑up support for startups and civil society to ensure inclusive diffusion.
Thought Provoking Comments
Solutions, skills, and standards – the three S’s – are the pillars we need to move from AI moonshots to real, inclusive impact. Connectivity is the bridge; without it a third of humanity stays offline.
She reframes AI diffusion as a concrete, three‑part framework rather than abstract ambition, linking infrastructure, human capacity, and trust mechanisms.
Set the agenda for the entire panel, prompting each speaker to address one of the S’s. It shifted the conversation from high‑level policy talk to actionable domains (connectivity, skilling, standards) that guided subsequent questions and responses.
Speaker: Doreen Bogdan‑Martin
Startups are the AI natives – they bring talent, agility and the ability to translate deep‑tech into real‑world solutions for SMEs and large enterprises alike. Our role at MeitY Startup Hub is mentorship, market access and money.
Highlights the unique position of startups as both innovators and diffusion agents, introducing a practical ‘AI bridge’ concept that connects technology to business needs.
Introduced the startup ecosystem as a critical diffusion mechanism, leading other panelists (e.g., Brando and Fred) to discuss how policy and standards can support these agile players. It broadened the discussion beyond government initiatives to private sector dynamics.
Speaker: Dr. Panneerselvam Madanagopal
The EU AI Act deliberately focuses on high‑risk use cases, leaving other applications unregulated. This clarity about where governance applies builds trust, but we also need clear boundaries and transparency for the rest.
Provides a nuanced critique of regulation, arguing that selective oversight can foster trust while avoiding over‑regulation, and stresses the need for clear, enforceable limits.
Prompted a deeper debate on the balance between regulation and flexibility, influencing Rachel to stress the importance of public awareness and Fred to discuss standards as the ‘details’ that operationalise such regulatory choices.
Speaker: Brando Benifei
Two‑thirds of South Africans have no meaningful grasp of AI; one‑third have never even heard of it. Without public understanding, massive private investment and public AI plans create a democratic gap that cannot be closed by infrastructure alone.
Grounds the diffusion challenge in concrete data on public awareness, shifting the focus from technology to societal capacity and democratic participation.
Redirected the conversation toward the human dimension of AI diffusion, leading Fred to emphasize skills gaps and standards, and reinforcing Doreen’s earlier point about the ‘skills’ pillar.
Speaker: Rachel Adams
We have an AI startup that can estimate blood‑sugar from a voice recording on a mobile phone – a potential game‑changer for diabetes, but also raises questions about privacy, ethics, and what else the technology might infer about a person.
Illustrates a concrete, high‑impact use case while simultaneously exposing ethical and privacy dilemmas, embodying the dual‑nature of AI for good and for harm.
Served as a vivid example that anchored abstract discussions about standards and ethics in a real‑world scenario, prompting Brando to stress the need for precise regulatory language and Rachel to call for participatory governance.
Speaker: Fred Werner
If I had a billion dollars to accelerate AI diffusion in developing economies, I would invest first in education and skills – from grade‑school to graduate‑school – because the skills gap is massive and determines whether AI benefits or harms societies.
Prioritises human capital over infrastructure or funding, reinforcing the earlier ‘skills’ theme and linking it to long‑term sustainable impact.
Reinforced the consensus that capacity‑building is the most effective lever, influencing Brando and Rachel to echo the importance of literacy and institutional capacity in their own brief answers.
Speaker: Fred Werner
India has made it clear that AI is for everyone – the summit included children from schools and local participants, making people feel included. That sense of inclusion is the biggest lesson for global diffusion.
Distills a complex, large‑scale national effort into a single, human‑centric insight about inclusion, highlighting the social fabric needed for successful diffusion.
Provided a concrete model that the moderator and other panelists referenced as a benchmark, shifting the tone toward optimism about replicable inclusive practices.
Speaker: Rachel Adams
Overall Assessment

The discussion was steered by a handful of pivotal remarks that moved the dialogue from abstract policy aspirations to concrete, human‑centered actions. Doreen’s three‑S framework set the structural lens; Dr. Madanagopal’s startup ‘bridge’ introduced the private‑sector engine; Brando’s critique of the EU AI Act clarified the regulatory balance; Rachel’s South African survey grounded the debate in public awareness; Fred’s voice‑based health example illustrated the promise‑risk duality; his emphasis on education cemented skills as the primary lever; and Rachel’s observation of India’s inclusive approach offered a replicable model. Each of these comments sparked follow‑up reflections, reshaped participants’ priorities, and collectively shaped a nuanced conversation that blended infrastructure, capacity‑building, governance, and inclusive practice.

Follow-up Questions
How can we move from AI pilots to large‑scale deployment, and what role should European partnerships play with the Global South?
Understanding the pathway from proof‑of‑concepts to widespread impact is essential for translating AI investments into real economic and social benefits across diverse regions.
Speaker: Moderator (to Brando Benifei)
How crucial are trust and ethics for AI diffusion, and how can they be implemented in practice?
Trust and ethical safeguards are needed to prevent misuse of AI, build public confidence, and ensure that diffusion does not exacerbate societal harms.
Speaker: Moderator (to Rachel Adams)
Which specific AI use cases and standards are most effective in driving diffusion?
Identifying high‑impact applications and the standards that support them helps prioritize resources and accelerate responsible AI adoption.
Speaker: Moderator (to Fred Werner)
Is current AI governance truly participatory, or are Global North frameworks being imposed on the Global South?
Assessing the inclusiveness of governance processes is vital to avoid a top‑down approach that marginalises Global South perspectives.
Speaker: Moderator (to Rachel Adams)
How should we balance local adaptation of AI regulations with global harmonisation?
Finding the right mix between uniform rules and context‑specific adaptations affects compliance, innovation, and the ability to address region‑specific risks.
Speaker: Moderator (to Brando Benifei)
If you had a $1 billion fund to accelerate AI diffusion in developing economies, where would you allocate it?
Clarifying funding priorities (e.g., skills, infrastructure, standards) informs strategic investment decisions for maximum impact.
Speaker: Moderator (to Fred Werner)
If you had a $1 billion fund to accelerate AI diffusion, what would you invest in?
Understanding differing strategic emphases (literacy, capacity building, civil‑society engagement) helps shape complementary funding approaches.
Speaker: Moderator (to Brando Benifei)
What single lesson from India’s AI journey should be applied globally?
India’s large‑scale, inclusive diffusion experience may offer replicable models for other regions seeking equitable AI rollout.
Speaker: Moderator (to Rachel Adams)
How can the ‘technology overshoot’ problem for SMEs be addressed to align AI solutions with actual business needs?
SMEs risk adopting overly complex AI; research is needed on fit‑for‑purpose frameworks that match technology capabilities with real‑world business requirements.
Speaker: Dr. Panneerselvam Madanagopal
What is the impact of large‑scale startup funding (e.g., ₹1,000 crore, ₹8,000 crore) on AI diffusion and ecosystem development in India?
Evaluating the effectiveness of substantial public and private capital informs future investment strategies and policy design for startup‑driven diffusion.
Speaker: Dr. Panneerselvam Madanagopal
What is the current state and effectiveness of multimedia authenticity (deep‑fake detection) standards, and how can their development be accelerated?
Deep‑fakes undermine trust; understanding gaps in standards and pathways to faster adoption is critical for safeguarding information ecosystems.
Speaker: Fred Werner
What are the most effective approaches to close the global AI skills gap, from primary education to advanced training?
Skills are the engine of agency; identifying scalable curricula, delivery models, and partnerships is essential for equitable AI empowerment.
Speaker: Fred Werner; Doreen Bogdan‑Martin
How does low AI literacy affect democratic participation, and what interventions improve public understanding?
A large portion of the population lacks basic AI knowledge, creating a democratic gap; research is needed on literacy programs that enhance informed civic engagement.
Speaker: Rachel Adams
What mechanisms can ensure timely development of AI standards and prevent deliberate delays by private‑sector actors?
Deliberate postponement of standards hampers governance; procedural safeguards and time‑bound mandates are needed to keep standardisation on track.
Speaker: Brando Benifei
What safeguards are required to prevent AI‑enabled mass surveillance and repression in institutionally fragile contexts?
AI can be weaponised for surveillance; developing protective policy frameworks is crucial for protecting human rights in vulnerable societies.
Speaker: Brando Benifei
What policy tools can mitigate AI‑driven labour displacement, and how can state institutions be strengthened to protect workers?
Anticipating job loss is essential; research on regulatory, social‑security, and reskilling measures will help manage workforce transitions.
Speaker: Rachel Adams
How can standards be designed to embed ethics, trust, and interoperability while allowing regional adaptation (gold vs. minimum standards)?
Standards must be robust yet flexible to accommodate diverse regulatory environments and ensure ethical AI deployment worldwide.
Speaker: Doreen Bogdan‑Martin; Fred Werner; Rachel Adams
How can the impact of connectivity initiatives like GIGA and the Digital Coalition be measured and optimized?
Connectivity underpins AI access; developing metrics to assess reach, quality, and downstream AI adoption will guide future infrastructure investments.
Speaker: Doreen Bogdan‑Martin
What best practices in mentorship, market access, and capital provision most effectively scale deep‑tech startups?
Understanding which support mechanisms most successfully move startups from ideation to market informs ecosystem design for rapid diffusion.
Speaker: Dr. Panneerselvam Madanagopal
How can representation from the Global South be ensured in international AI standards bodies (ISO, IEC, ITU)?
Inclusive standard‑setting prevents dominance by a single worldview and ensures standards reflect diverse socio‑technical contexts.
Speaker: Rachel Adams
How can the Global South AI Diffusion Playbook be operationalised and its outcomes evaluated across its five dimensions?
Moving from a framework to measurable impact requires indicators, monitoring mechanisms, and feedback loops for infrastructure, data/trust, institutions, skills, and market shaping.
Speaker: Moderator (general)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel discussed how a European “code of practice” could serve as a flexible, co-legislative tool for governing the frontier risks of artificial intelligence while preserving innovation [2]. Brando Benifei explained that the code would be drafted together with civil society, developers, enterprises of all sizes and academia to produce rules that address existential and systemic threats to democracy and citizen freedoms [2-3]. He stressed that the framework must be clear yet adaptable, avoiding vague language while setting concrete objectives for risk mitigation [4]. Benifei also called on the European AI Office to be given sufficient powers to enforce the code, noting that many companies already comply with its provisions but that effective oversight is needed to build public trust [5-7].


In response, Sean argued that leaders and scholars must create the conditions that enable companies to prioritize safety, conditions currently undermined by intense geopolitical competition [12-15]. He warned that CEOs feel constrained from taking extra safety steps and that this should be treated as a red alarm bell, urging coordinated action and even temporary slow-downs at critical junctures [16-18]. Sean emphasized that such cooperation must include not only European and US firms but also Chinese actors, seeking an equal-footed global dialogue on AI risks [18-19].


Paola added that narrowing the gap between the perceived power of AI and its actual deployment requires focusing on specific use cases, because trust mechanisms differ between sectors such as medicine and customer service [21-24]. She suggested that defining context-appropriate trust controls will unlock both productivity and confidence in AI systems [24].


Returning to the broader agenda, Benifei warned against pitting safety against diffusion, insisting that both must proceed in parallel, especially in high-risk domains like military applications and loss-of-control scenarios [26-31]. He argued that responsibility for setting these limits lies with public institutions rather than businesses, and urged leaders to act without further delay [32-37].


The moderator concluded that innovation and trust can coexist, encouraging participants to review the code’s safety chapters as potential standards for other regions [38-41]. He reaffirmed the commitment to continue the dialogue and develop shared governance mechanisms for AI worldwide [42-43].


Keypoints


Major discussion points


A co-legislative “code of practice” is essential for AI risk mitigation and trust-building.


The EU proposes a flexible yet clear set of rules, developed together with civil society, industry, and academia, to address existential and systemic risks to democracy and fundamental rights [2-4][7].


Effective enforcement requires strong institutional support.


The European AI Office must be equipped to implement the code, ensuring that private actors who are already complying are matched by public-sector authority and resources [5].


Global cooperation and the creation of “safe-development conditions” are critical.


Leaders must enable AI firms, whether European, American, or Chinese, to take additional safety steps, share expertise, coordinate actions, and even pause development at critical junctures [12-19].


Context-specific use-case focus is needed to translate trust into practice.


Trust mechanisms must be tailored to the domain (e.g., medicine versus customer service), allowing organizations to deploy AI responsibly while unlocking productivity [21-24].


Urgent public-institutional action on high-risk AI applications, especially military and loss-of-control scenarios.


Such risks cannot be left to the private sector; governments must act swiftly and use forums like this summit to drive progress [28-34][35-37].


Overall purpose / goal


The panel’s purpose was to outline how AI can be governed through a collaboratively crafted code of practice that balances innovation with safety, builds public trust, and establishes concrete, enforceable standards, both within Europe and internationally, to protect democratic values and human rights.


Overall tone


The discussion began with a measured, collaborative tone emphasizing partnership and flexibility. As the conversation progressed, speakers, particularly Brando Benifei, adopted a more urgent and admonishing tone, stressing the need for immediate governmental action and global coordination to address high-risk AI domains. The shift reflects a move from outlining principles to calling for decisive, time-sensitive implementation.


Speakers

Paola


– Areas of expertise / roles: Gender advocate; Secretary General of the European Digital Media Observatory (EDMO)


– Title/Affiliation: Secretary General, EDMO [S1][S2]


Speaker 2


– Areas of expertise / roles: Moderator / host for AI Impact Summit events; Moderator for IGF policy network on AI


– Title/Affiliation: Moderator/Chair (AI Impact Summit, IGF) [S4][S6]


Brando Benifei


– Areas of expertise / roles: European Parliamentarian involved in AI policy and legislation


– Title/Affiliation: Member of the European Parliament (MEP) [S7][S8]


Sean


– Areas of expertise / roles: Scholar and governance expert on AI safety and beneficial development (as indicated in the transcript)


– Title/Affiliation: Not specified in external sources, though [S15] lists him as Sean O’Heigeartaigh (Seán Ó hÉigeartaigh)


Additional speakers:


– Professor Bengio – referenced as providing explanation of the code-of-practice process (no external details available).


– Professor Banjo – mentioned regarding research on loss-of-control risks and AI safety; likely a transcription variant of Professor Bengio (no external details available).


– Professor Eiger – referenced (with noted pronunciation difficulty) in the discussion of safety versus diffusion; likely Seán Ó hÉigeartaigh, the panelist appearing as “Sean” (see [S15]).


Full session report: Comprehensive analysis and detailed insights

Opening – Brando Benifei


Benifei opened the summit by framing the EU AI Act as a vehicle to manage the “frontier aspects” of AI while safeguarding democratic values and fundamental rights. He explained that, instead of codifying every technical detail, the Act will be complemented by a co-legislative Code of Practice drafted jointly with civil society, developers, SMEs, large enterprises and academia. This multi-stakeholder process is meant to produce rules that are clear yet adaptable to the rapidly evolving AI landscape and that target existential risks, described in the discussion as systemic risks to democracy [2-4][7][S51]. Benifei also cited insights from “Professor Eiger” (likely Seán Ó hÉigeartaigh) on pursuing safety and diffusion in parallel [30].


Call for robust enforcement


He stressed that the European AI Office must be equipped with sufficient legal authority, funding and technical tools to enforce the Code effectively [5-7][S55][S56]. While many private actors already comply with the Code’s risk-mitigation provisions, strong public-sector oversight is essential to translate compliance into public trust [5-7][S55][S56].


Sean’s recommendation


Sean shifted the discussion to the conditions required for safe AI development. He warned that CEOs of leading AI firms feel constrained by intense geopolitical competition, which limits their ability to adopt extra safety steps despite a willingness to do so [13-15]. He called for regulatory, financial or collaborative mechanisms that enable companies to prioritise safety, share expertise, coordinate actions and, where necessary, temporarily slow down development at critical junctures [16-18]. Crucially, Sean argued that this coordination must be global, involving European, US and Chinese actors as equals [19][13-18][S53].


Paola’s use-case focus


Paola introduced a use-case-centred perspective, arguing that the main bottleneck is not the technology itself but organisations’ ability to trust AI in the appropriate context. She highlighted the stark contrast between domains such as medicine and customer service, where trust controls differ markedly, and suggested that defining domain-specific trust mechanisms will simultaneously unlock productivity and confidence in AI systems [21-24][S30][S65].


Benifei’s second contribution


Returning to the broader agenda, Benifei warned against treating safety and diffusion as opposing goals [32-33], reiterating that both must progress in parallel, especially in high-risk areas such as military AI and loss-of-control scenarios, which require international cooperation to avoid catastrophic outcomes [26-31]. He emphasized that responsibility for setting limits lies with public institutions, not private businesses, and issued an urgent appeal to political leaders to act without further delay [32-37][S33][S34].


Moderator’s closing


The moderator concluded by reaffirming that innovation and trust are compatible. He invited participants to examine the Code’s safety chapters, which could serve as reference standards for other jurisdictions, and thanked the panelists for their contributions [38-43][S22].


Key take-aways


* A multi-stakeholder, co-legislative Code of Practice is proposed to deliver clear yet flexible AI risk-mitigation rules and to foster public trust.


* International cooperation is essential; leaders must create conditions that allow AI developers worldwide, including in Europe, the US and China, to prioritise safety despite competitive pressures.


* Trust mechanisms should be tailored to specific use cases, recognising that domains such as healthcare demand different safeguards from those in customer-service contexts.


* Public institutions, not private firms, should steer governance of high-risk AI applications such as military use and loss-of-control scenarios.


* Innovation and trust can coexist; the Code’s safety chapters can act as benchmarks for other regions, supporting a synergistic relationship between progress and security.


Proposed actions


1. Empower the European AI Office with the necessary legal authority, funding and technical tools to enforce the Code effectively [5-7][S55].


2. Develop policy frameworks, such as incentives, liability regimes or joint-research funds, that create the conditions for companies to adopt additional safety measures despite geopolitical competition [13-18][S53].


3. Organise international forums that bring together European, US and Chinese stakeholders on an equal footing to negotiate coordinated AI-safety standards [12-19][S33][S34].


4. Advance sector-specific trust guidelines, beginning with high-impact domains like healthcare, to bridge the gap between perceived AI power and responsible deployment [21-24][S30].


5. Draft and adopt clear international norms for military AI and loss-of-control risks, anchored in public-institutional leadership [26-31][S33][S68].


6. Promote the Code’s safety chapters as reference standards for other jurisdictions, encouraging global harmonisation of AI safety practices [40-41][S22].


Unresolved issues


* Defining precise mechanisms for creating industry-friendly safety conditions.


* Achieving concrete global consensus on equal participation of all major AI actors.


* Detailing regulatory approaches for military AI and loss-of-control scenarios.


* Mapping implementation pathways for the Code across diverse legal systems.


* Operationalising domain-specific trust controls without stifling innovation.


In sum, the panel displayed strong agreement on the need for a flexible, co-legislative Code of Practice, urgent international cooperation, and context-aware trust mechanisms, while acknowledging moderate divergence over who should lead high-risk AI governance and the best technical route to building trust. The discussion sets a clear agenda for future policy work: equip public institutions, align global stakeholders, and tailor safeguards to specific domains, thereby ensuring that AI development proceeds safely, responsibly and with public confidence.


Session transcript: Complete transcript of the session
Brando Benifei

So from different places in the world as a possible way of how to deal with the frontier aspects of AI development. Because instead of detailing in the legislative act every aspect of the risk mitigation that we ask now to the big developers, we decided to put in the AI act the provision of having this code of practice that would come, as Professor Bengio explained very clearly, from a co-legislative process involving civil society, developers, small, medium, big enterprises and academia in an exercise that would allow to build a more adherent to the… present situation and evolution of the AI landscape of a set of rules to actually prevent existential risks, but also we call them systemic risks that deal with our democracies, with our freedom as citizens.

I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent risks of damaging our democratic processes by spreading misinformation or contrasting the cyberbullying or the criminal actions through the use of AI. And we… I think we built a very clear framework, because I think it’s very important to be clear, not to have vague proposals that are very loosely interpretable, but maintaining a certain degree of flexibility, we are clear on what we want to pursue. However, I think it will be very important, and so I need to subscribe to what Professor Banjo said at the end of his speech, that this is our effort from the Parliament side that we provide the European AI Office all the means to actually implement this code of practice, because it’s true that, as it was said, many companies are already complying with many of the risk mitigation aspects that are in the code of practice, but we need to be sure that we can, again, be at the same level of these very powerful private actors to do our part in making the rules that we decided applicable, effective, and so build trust. In the end, to conclude, this is our objective. We want the code of practice to contribute in building trust among our citizens on the fact that we can innovate without sacrificing human rights and protection of our fundamental values. Thank you.

Speaker 2

Thank you very much, Brando. Now, we still have very few minutes left, so I would like to exploit the opportunity of your presence to ask you, maybe if you can say in one minute, Sean, you have already said this, but maybe you can reformulate or come up with one recommendation for the leaders at this summit on the way that we can govern AI in the future? What would you say to them?

Sean

In one minute, I would say the role of our leaders, the role of us as scholars, the role of us as governance experts is to create the conditions for the safe and beneficial development of AI. Right now, I do not believe those conditions entirely exist because exactly of the things that the CEOs of the leading companies say. They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they’re in, they do not feel that they are able to. We should be hearing that. That should be a red alarm bell for us. And so what we need to do is figure out how do we create the conditions where it is possible for them to take these additional steps, to put additional focus on safety, to share expertise if needed, to coordinate and potentially even to slow down before critical points. And that doesn’t just mean European companies, it doesn’t just mean US companies, it also means our colleagues in China who are making such impressive progress. We need to figure out what is a way in which we can bring everyone to the table as equals and figure out how to cooperate on this challenge of our time.

Speaker 2

Paola?

Paola

I would say focus on the use cases. So right now there’s a gigantic gap between sort of our perception of this incredible power of the technology and how quickly organizations can deploy it. And the bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service. And so I think when we start to focus on context, the right use cases, what trust controls look like in those domains, in local context, that’s where we unlock not only productivity, but trust in the same breath.

Brando Benifei

Well, in my opinion, we need to, again, as it was said earlier by Professor Eiger, it’s very difficult for me to figure the pronunciation. But anyway, we need to not contrast, not put in contrast safety at the highest terms and the focus on diffusion, on action. On impact, the title of this summit. I think this can go in parallel and it must go in parallel because there are areas of deployment of AI where without international cooperation we are facing huge risks. We hope that the code of practice will be a way to enlarge this discussion and build a reference point as I said but we need to go even further. We have issues regarding military use of AI.

We have issues regarding the loss of control risks that also Professor Banjo has been looking a lot at with his research that are in need of further cooperation. I don’t think this will come from the business but not because they are bad. It’s not their role. It must come from the public institutions and so we need to send this message to our leaders. Don’t lose any more time. You need to sit down and use these occasions to do progress. We need that. and we do not need to lose any more time on this.

Speaker 2

Thank you very much. So I would like to close this very interesting panel simply to say that what we have tried to discuss and conclude in this session is the fact that innovation and trust can go together and we can find different ways to make sure that trust is ensured or enabled in a particular country and in a particular continent, but we will need to continue working together and we are also happy to have presented to you some elements of the Code of Practice. Please take a look at that Code of Practice, in particular look at the safety chapters, and you will see that these are probably standards to which other countries can sign up to.

And thanks a lot for your participation. We look forward to continuing this discussion with you and with all the colleagues in this summit. Thank you very much and thanks to our panelists. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (20)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“The EU AI Act will be complemented by a co‑legislative Code of Practice drafted jointly with civil society, developers, SMEs, large enterprises and academia, creating clear yet adaptable rules that target existential (systemic) risks to democracy.”

The knowledge base states that the AI Act includes a provision for a Code of Practice produced through a co-legislative, multi-stakeholder process involving academia and industry (see [S14] and [S15]).

Confirmed (high confidence)

“The European AI Office must be equipped with sufficient legal authority, funding and technical tools to enforce the Code effectively.”

EU Commission plans to establish a European Artificial Intelligence Office with a separate budget and enforcement mandate, confirming the need for authority, funding and tools ([S83]; enforcement importance echoed in [S84]).

Confirmed (medium confidence)

“CEOs of leading AI firms feel constrained by intense geopolitical competition, which limits their ability to adopt extra safety steps despite a willingness to do so.”

A discussion on international AI safety notes that geopolitical competition hampers coordination on minimum safety standards, aligning with the reported constraint on CEOs [S88].

Confirmed (high confidence)

“Safety and diffusion of AI should progress in parallel, especially in high‑risk areas such as military AI and loss‑of‑control scenarios, requiring international cooperation to avoid catastrophic outcomes.”

UN Security Council deliberations highlighted existential risks and the need for international cooperation to prevent catastrophic outcomes, supporting the claim about parallel safety and diffusion in high-risk domains ([S87]; further emphasis on cooperation in [S88]).

Confirmed (medium confidence)

“Responsibility for setting limits on AI lies with public institutions, not private businesses.”

Policy analyses stress that enforcement agencies and public institutions, rather than private firms, are essential for implementing standards and limits on AI [S84].

Additional Context (low confidence)

“The Code of Practice aims to address existential/systemic risks to democracy.”

The EU Council’s ‘general approach’ to the AI Act emphasizes safeguarding fundamental rights and democratic values, providing additional context for the focus on systemic risks [S73].

External Sources (88)
S1
How prevent external interferences to EU Election 2024 – v.2 | IGF 2023 Town Hall #162 — Paula Gori:Thank you very much. Spoiler, I’m not the Minister of Truth, and I’ll tell you why. Hello, everybody. I’m Pao…
S3
Day 0 Event #236 EU Rules on Disinformation Who Are Friends or Foes — Audience: I’m a retired American librarian over here for my younger Norwegian-American children. On LinkedIn, I have a w…
S4
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S6
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S8
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Brado Benefai- (Appears to be the same person as Brando Benifei, mentioned in introduction) -Brando Benifei- Member of…
S9
Open Forum #72 European Parliament Delegation to the IGF & the Youth IGF — – Brando Benifei: Member of European Parliament (mentioned but not in speakers list)
S10
The reality of science fiction: Behind the scenes of race and technology — ‘Every desire is an end and every end is a desire then the end of the world is a desire of the world what type of end do you de…
S11
Building Trustworthy AI Foundations and Practical Pathways — So both Devayan and Alok did a good job summarizing the overall work. So these are a bit of technical details. I’ll prob…
S12
Taking Stock — Audience: Yes, thank you Chengetai. My name is Wouter Natus, I represent the Dynamic Coalition on Internet Standards, Se…
S13
Indias AI Leap Policy to Practice with AIP2 — – Brando Benefi- Rachel Adams – Fred Werner- Brando Benefi Brando advocates for precise regulatory frameworks with cle…
S14
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — So from different places in the world as a possible way of how to deal with the frontier aspects of AI development. Beca…
S15
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — – Brando Benifei- Paula Goldman – Brando Benifei- Sean O’Heigeartaigh Benifei advocates for a comprehensive legislativ…
S16
Opening of the session — Greater international cooperation is necessary in the context of threats.
S17
Omnipresent Smart Wireless: Deploying Future Networks at Scale — It is strongly suggested that regulations play a crucial role in managing and protecting personal data. The importance o…
S18
Geneva Manual exercise group 2 — Additionally, Orhan critically questions the adequacy of current information-sharing partnerships, suggesting that vital…
S19
IGF Parliamentary track – Session 2 — Audience: What I want you to ask in my mother language, please, in Espanol. Aprovechando que practicamente la mitad d…
S20
Conversation: 02 — So trust has to be a big part of it.
S21
Is Geopolitical ‘Coopetition’ Possible? — Furthermore, Sefcovic advocates for strategic autonomy in Europe, highlighting the lessons learned from the COVID-19 pan…
S22
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion revealed significant consensus across diverse stakeholders on fundamental questions about AI standards. A…
S23
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mit…
S24
High Level Leaders Session 2 | IGF 2023 — A legally binding act, it offers a robust structure for enforcement, complete with penalties for contraventions. Alongsi…
S25
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — However, concerns are raised about weak enforcement cultures in developing countries if they were to adopt ex-ante regul…
S26
Harnessing Digitalisation for Greener Supply Chains in LDCs — The weak implementation of laws and regulations is identified as a significant problem in developing countries. This wea…
S27
Opening of the session — El Salvador: Thank you, Chair. El Salvador, thank you for convening this session. For my country, it is essential to …
S28
Plenary: Sustainability at Risk: Drawing Insights from Climate Talks to Elevate Cybersecurity — The analysis also underscores the significance of maintaining trust in societal systems and the role of education and aw…
S29
Procuring modern security standards by governments&industry | IGF 2023 Open Forum #57 — In conclusion, the adoption of modern internet standards is crucial for ensuring safety, security, and efficient connect…
S30
World Economic Forum Town Hall on AI Ethics and Trust — Trust requires context and cannot be evaluated without specific use cases. Botsman argues that asking whether people tru…
S31
From principles to practice: Governing advanced AI in action — **Systemic Societal Risks**: Broader societal impacts, particularly profound labor market disruption that could create s…
S32
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S33
Military AI: Operational dangers and the regulatory void — Meanwhile, less technologically advanced states fear the use of military AI against them when they cannot develop this t…
S34
AI governance needs urgent international coordination — AGIS Reports analysisemphasises that as AI systems become pervasive, they create significant global challenges, includin…
S35
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — How to ensure the code of practice remains flexible yet clear enough to be effectively enforceable
S36
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Stakeholders should examine the Code of Practice, particularly the safety chapters, as potential standards for internati…
S37
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The analysis emphasises the significance of multi-stakeholder engagement in policy processes, specifically in the contex…
S38
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S39
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S40
Laying the foundations for AI governance — – The need for collaboration between industry and regulators Artemis Seaford: That is a great question. So there is a m…
S41
WS #82 A Global South perspective on AI governance — Gian Claudio: Thank you very much. I hope you hear me well. Yeah, good. So yeah, indeed, the AI Act was a bit everyw…
S42
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S43
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S44
Agentic AI in Focus Opportunities Risks and Governance — Evidence:Reference to ISO 42001 standard as a good foundation but noting the three-year timeline for ISO controls develo…
S45
From Technical Safety to Societal Impact Rethinking AI Governanc — Combine technical safety measures with broader institutional and social safeguards rather than viewing them as competing…
S46
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Diverse areas of enforcement discussed based on different conducts pursued by different countries. In terms of problem-…
S47
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Another important aspect emphasized in the provided information is the need for collaboration between different authorit…
S48
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S49
360° on AI Regulations — Despite these concerns, Ian Bremmer remains hopeful that the United States and other nations can engage with China in a …
S50
AI and international peace and security: Key issues and relevance for Geneva — AI’s role in the military domain has become an increasingly important area of focus, as technology advances and military…
S51
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Arguments:Code of practice aims to build citizen trust that innovation can proceed without sacrificing human rights and …
S52
Artificial General Intelligence and the Future of Responsible Governance — So we need to be in close collaboration in order to mitigate these risks.
S53
Smart Regulation Rightsizing Governance for the AI Revolution — Shirastava emphasizes the importance of the entire AI ecosystem, including both large and small players, sharing documen…
S54
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Code of practice should emerge from co-legislative process involving all stakeholders to address systemic and existentia…
S55
HIGH LEVEL LEADERS SESSION I — Institutions should have the capacity for enforcement to ensure adherence to any rules that are set in place
S56
Technology and Human Rights Due Diligence at the UN | IGF 2023 Open Forum #163 — Furthermore, the analysis advocates for the effective enforcement of human rights due diligence tools. It is argued that…
S57
UNSC meeting: Strengthening UN peacekeeping — The Kingdom of the Netherlands addressed key aspects of UN peacekeeping in this speech. Firstly, they acknowledged the c…
S58
Open Forum #35 Addressing International Crimes Enabled by Cyber Operations — Both Reitsak and Karimian acknowledged that traditional law enforcement institutions often lack sufficient technical cap…
S59
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 part 6 — Malawi cites the World Bank’s emphasis on the importance of building institutional capacity and the need for strong inst…
S60
Regional cooperation for safer online consumer markets (UNCTAD) — In conclusion, the rise of online shopping brings concerns about the safety of products and the lack of information avai…
S61
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — Frederick Makamure Shava: Madam President, Excellencies, and all protocols observed, I commend the Secretary-General, …
S62
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgr…
S63
Opening of the session — El Salvador: Thank you, Chair. El Salvador, thank you for convening this session. For my country, it is essential to …
S64
Procuring modern security standards by governments&industry | IGF 2023 Open Forum #57 — In conclusion, the adoption of modern internet standards is crucial for ensuring safety, security, and efficient connect…
S65
World Economic Forum Town Hall on AI Ethics and Trust — Trust requires context and cannot be evaluated without specific use cases. Botsman argues that asking whether people tru…
S66
From principles to practice: Governing advanced AI in action — **Systemic Societal Risks**: Broader societal impacts, particularly profound labor market disruption that could create s…
S67
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S68
9821st meeting — France expresses concerns about the use of AI in military applications, particularly regarding human control. This highl…
S69
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S70
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a …
S71
Keynotes — # EuroDIG 2025 Opening Session: Safeguarding Human Rights by Balancing Regulation and Innovation The session establishe…
S72
Adoption of the agenda and organization of work — Japan also emphasises the importance of safeguards in international cooperation to ensure peace, justice, and strong ins…
S73
EU Council calls for promoting safe AI that respects fundamental rights — The Council has established itsposition, known as the ‘general approach,’ regarding the Artificial Intelligence Act with…
S74
BREAK OUT ROOM 3: The Declaration for the Future of the Internet: Principles to Action — Audience:Hello, is this on? Yes. So thank you very much, Tamea. My name is Keith Drezik, I work for VeriSign. VeriSign h…
S75
Opening plenary session and adoption of the agenda — The technological landscape is evolving rapidly.
S76
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S77
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S78
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Different governments and countries are adopting varied approaches to AI governance. The transition from policy to pract…
S79
Human Rights Council Fortieth session — – 20. The context for privacy and the links between autonomy, privacy and necessary measures in a democratic …
S80
WS #255 AI and disinformation: Safeguarding Elections — Roxana Radu: Yeah, very briefly on the first question, I agree with Babu, there’s a need to have more transparency ov…
S81
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Additional measures are necessary to safeguard democracy and ensure its integrity. Efficient enforcement of approaches i…
S82
AI Safety at the Global Level Insights from Digital Ministers Of — “We were having a… conversation and one of the things I said when I spoke there was that we are going to need new demo…
S83
European Commission to establish European AI Office for EU AI Act enforcement — TheEuropean Commissionis preparing to establish the European Artificial Intelligence Office, which will be crucial in en…
S84
Policymaker’s Guide to International AI Safety Coordination — Gobind argues that while standards, regulations, and legislation are important, they are meaningless without institution…
S85
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Building trust, adapting models, and addressing funding challenges are necessary steps in driving positive change. Natio…
S86
How to make AI governance fit for purpose? — Shan Zhongde: Our chairman, President Xi, has paid great importance to the development of AI. I think this is the founda…
S87
Artificial intelligence (AI) – UN Security Council — The 9821st meeting on Artificial Intelligence (AI) at the Security Council covered the topic of existential risks posed …
S88
Comprehensive Discussion Report: The Future of Artificial General Intelligence — International cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficu…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Brando Benifei
3 arguments, 113 words per minute, 601 words, 319 seconds
Argument 1
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei)
EXPLANATION
Brando argues that the AI Act should rely on a code of practice developed through a co‑legislative process involving civil society, developers, SMEs and academia. This approach aims to create clear but adaptable rules that mitigate existential and systemic risks while fostering public trust in AI.
EVIDENCE
He explains that instead of detailing every risk mitigation in the legislation, the AI Act includes a provision for a code of practice created through a co-legislative process with multiple stakeholders, designed to be clear yet flexible and to prevent existential and systemic risks while building trust among citizens [2-4][7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Act’s provision for a co-legislative code of practice is discussed in [S14]; Brando’s advocacy for a comprehensive legislative framework through co-legislative processes and codes of practice is highlighted in [S15]; his call for precise yet flexible regulation is noted in [S13].
MAJOR DISCUSSION POINT
Co‑legislative code of practice for AI risk mitigation
AGREED WITH
Paola
DISAGREED WITH
Sean
Argument 2
Stress the necessity of international cooperation to address huge deployment risks and urge leaders to act promptly (Brando Benifei)
EXPLANATION
Brando emphasizes that many AI deployment scenarios pose massive risks that can only be managed through international cooperation. He calls on political leaders to act quickly and use summit occasions to make progress on these issues.
EVIDENCE
He notes that without international cooperation there are huge risks in AI deployment, cites issues such as military AI use and loss-of-control risks, and urges public institutions to lead and leaders not to lose any more time [28-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for greater international cooperation on AI risks is emphasized in [S16] and [S17], and Brando’s view that such cooperation must come from public institutions rather than businesses is reflected in [S15].
MAJOR DISCUSSION POINT
Urgent international cooperation on AI deployment risks
AGREED WITH
Sean
Argument 3
Highlight the urgency of addressing military AI use and loss‑of‑control risks, asserting that public institutions—not businesses—must lead the response (Brando Benifei)
EXPLANATION
Brando points out that high‑risk AI applications, especially in the military domain and loss‑of‑control scenarios, require regulation driven by public institutions rather than private companies. He stresses that this responsibility lies with governments and public bodies.
EVIDENCE
He mentions specific concerns about military AI and loss-of-control risks, stating that these issues need further cooperation and must be addressed by public institutions rather than businesses [30-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brando’s argument that public institutions should lead governance of high-risk AI, especially military applications, is documented in [S15].
MAJOR DISCUSSION POINT
Public institutions leading high‑risk AI governance
AGREED WITH
Sean
DISAGREED WITH
Sean
Speaker 2
2 arguments, 75 words per minute, 239 words, 190 seconds
Argument 1
Encourage review of the Code of Practice safety chapters as potential standards for other countries (Speaker 2)
EXPLANATION
Speaker 2 invites participants to examine the safety sections of the Code of Practice, suggesting that these chapters could serve as benchmarks that other nations might adopt.
EVIDENCE
During the closing remarks, they specifically ask the audience to look at the safety chapters of the Code of Practice, noting that they could become standards for other countries to sign up to [40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls to examine the safety chapters of the Code of Practice as possible international standards are made in [S14].
MAJOR DISCUSSION POINT
Safety chapters as international standards
AGREED WITH
Brando Benifei
Argument 2
Conclude that innovation and trust can coexist, urging continued collaborative work and adoption of the Code of Practice to sustain this balance (Speaker 2)
EXPLANATION
Speaker 2 summarizes the panel by stating that innovation and trust are not mutually exclusive and calls for ongoing collaboration and the use of the Code of Practice to maintain this synergy.
EVIDENCE
In the final remarks they stress that innovation and trust can go together, that continued joint work is needed, and that the Code of Practice should be used to support this balance [38-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s conclusion that innovation and trust can go together through the Code of Practice is recorded in [S15].
MAJOR DISCUSSION POINT
Innovation and trust synergy
Sean
1 argument, 173 words per minute, 210 words, 72 seconds
Argument 1
Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
EXPLANATION
Sean argues that current conditions do not enable AI developers to take extra safety steps because of intense geopolitical competition. He calls for the creation of conditions that let companies focus on safety and for a worldwide cooperative framework involving Europe, the United States and China.
EVIDENCE
He states that leaders, scholars and governance experts must create safe development conditions, notes that CEOs feel constrained by competitive pressure, describes this as a red alarm, and calls for mechanisms that let firms add safety measures, share expertise and possibly slow down at critical points, emphasizing inclusion of Europe, the US and China [12-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
O’Heigeartaigh’s emphasis on creating conditions for firms to add safety measures despite competitive pressures appears in [S20] and [S15]; the need for worldwide cooperation involving Europe, the US and China is also discussed in [S15].
MAJOR DISCUSSION POINT
Conditions and international cooperation for safe AI development
AGREED WITH
Brando Benifei
DISAGREED WITH
Brando Benifei
Paola
1 argument · 136 words per minute · 103 words · 45 seconds
Argument 1
Emphasize narrowing the gap between perceived AI power and deployment by tailoring trust controls to specific domains such as medicine versus customer service (Paola)
EXPLANATION
Paola stresses that the biggest bottleneck is understanding how to trust AI in the appropriate context. She advocates focusing on concrete use‑cases and designing domain‑specific trust controls, which will unlock productivity and confidence.
EVIDENCE
She points out the large gap between the perception of AI’s power and how quickly organisations can deploy it, explains that trust depends on context (different for medicine than for customer service), and argues that focusing on the right use-cases and context-specific trust controls will unlock productivity and trust [21-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Paula Goldman’s observation that trust controls must be domain-specific (e.g., medicine versus customer service) is highlighted in [S14] and [S15].
MAJOR DISCUSSION POINT
Context‑specific trust and use‑case focus
AGREED WITH
Brando Benifei
DISAGREED WITH
Brando Benifei
Agreements
Agreement Points
The code of practice should be a clear, flexible, multi‑stakeholder tool that builds trust and sets safety standards for AI deployment.
Speakers: Brando Benifei, Speaker 2
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Encourage review of the Code of Practice safety chapters as potential standards for other countries (Speaker 2)
Both speakers stress that a co-legislative code of practice, especially its safety chapters, can provide clear yet adaptable rules that mitigate risks and foster public trust in AI [2-4][7][40].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors the EU GPAI Code of Practice emphasis on clarity, flexibility and enforceability while serving as a multi-stakeholder framework for safety standards [S35] and its potential as an international benchmark [S36]. Multistakeholder policy processes are highlighted as key to compliance and commitment [S37].
International cooperation and swift political action are essential to manage high‑risk AI applications, including military use and loss‑of‑control scenarios.
Speakers: Brando Benifei, Sean
Stress the necessity of international cooperation to address huge deployment risks and urge leaders to act promptly (Brando Benifei) Highlight the urgency of addressing military AI use and loss‑of‑control risks, asserting that public institutions—not businesses—must lead the response (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
Both speakers call for urgent, coordinated action across nations and institutions to tackle systemic AI risks, especially in military and loss-of-control contexts [28-37][12-19].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for rapid political coordination aligns with discussions on AI’s military implications and responsible use at the IGF and UN forums [S50] and reflects calls for global cooperation in the International AI Safety Report and related panels [S48]. Earlier IGF sessions also identified high-risk AI as a priority for international regulation [S39].
Innovation and trust are not mutually exclusive; continued collaboration and the Code of Practice can ensure both progress and confidence in AI systems.
Speakers: Brando Benifei, Speaker 2
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Conclude that innovation and trust can go together, urging continued collaborative work and adoption of the Code of Practice to sustain this balance (Speaker 2)
Both emphasize that a well-designed governance framework allows AI innovation to proceed while maintaining public trust [7][38-43].
POLICY CONTEXT (KNOWLEDGE BASE)
A systems-thinking perspective stresses that technology, regulation and public trust must be coordinated to foster innovation safely [S42]. Industry actors have expressed a desire for clear regulatory frameworks that support innovation [S40], and governments are seen as integrators of technical and societal safeguards [S45].
Focusing on concrete, domain‑specific use cases and trust controls is key to unlocking productivity and confidence in AI.
Speakers: Paola, Brando Benifei
Emphasize narrowing the gap between perceived AI power and deployment by tailoring trust controls to specific domains such as medicine versus customer service (Paola) Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei)
Both point to the importance of context-specific safeguards, whether through use-case focus or a flexible code of practice, to build trust and realize AI benefits [21-24][3].
POLICY CONTEXT (KNOWLEDGE BASE)
Domain-specific safety controls are advocated as practical standards, with references to ISO 42001 and NIST voluntary frameworks that can fill gaps before broader standards mature [S44]. Combining technical safety with targeted institutional safeguards is recommended to boost productivity while maintaining trust [S45].
Similar Viewpoints
Both see the Code of Practice, particularly its safety sections, as a cornerstone for establishing clear, adaptable AI risk‑mitigation standards that inspire trust [2-4][7][40].
Speakers: Brando Benifei, Speaker 2
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Encourage review of the Code of Practice safety chapters as potential standards for other countries (Speaker 2)
Both argue that AI risks require swift, coordinated international action and that public institutions should lead the response to high‑risk applications [28-37][12-19].
Speakers: Brando Benifei, Sean
Stress the necessity of international cooperation to address huge deployment risks and urge leaders to act promptly (Brando Benifei) Highlight the urgency of addressing military AI use and loss‑of‑control risks, asserting that public institutions—not businesses—must lead the response (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
Both stress that trust must be built through context‑specific safeguards—whether via domain‑focused controls or a flexible, multi‑stakeholder code of practice [21-24][3].
Speakers: Paola, Brando Benifei
Emphasize narrowing the gap between perceived AI power and deployment by tailoring trust controls to specific domains such as medicine versus customer service (Paola) Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei)
Both convey that AI innovation can proceed alongside robust trust mechanisms when guided by a well‑designed governance framework [7][38-43].
Speakers: Brando Benifei, Speaker 2
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Conclude that innovation and trust can go together, urging continued collaborative work and adoption of the Code of Practice to sustain this balance (Speaker 2)
Unexpected Consensus
Both a policy‑maker (Brando Benifei) and an academic expert (Sean) converge on the need for public institutions—not private firms—to lead the governance of high‑risk AI, despite their different professional lenses.
Speakers: Brando Benifei, Sean
Highlight the urgency of addressing military AI use and loss‑of‑control risks, asserting that public institutions—not businesses—must lead the response (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
While Brando explicitly calls for public-institution leadership, Sean focuses on creating conditions for firms to act safely; together they imply that effective safety will ultimately depend on public-sector frameworks and coordination, an alignment not obvious from their distinct roles [30-34][12-19].
Overall Assessment

The panel shows strong convergence on three core themes: (1) the Code of Practice as a flexible, multi‑stakeholder instrument for safety and trust; (2) the necessity of swift, coordinated international action, especially for high‑risk AI such as military applications; (3) the belief that innovation and trust can be pursued together through collaborative governance and context‑specific safeguards.

High consensus – most speakers echo each other’s key points, indicating a shared understanding that effective AI governance requires a blend of clear standards, international cooperation, and domain‑tailored trust mechanisms. This alignment suggests that future policy initiatives are likely to build on the Code of Practice, prioritize rapid multilateral engagement, and emphasize trust‑by‑design in AI deployments.

Differences
Different Viewpoints
Who should lead high‑risk AI governance (public institutions vs industry conditions)
Speakers: Brando Benifei, Sean
Highlight the urgency of addressing military AI use and loss‑of‑control risks, asserting that public institutions—not businesses—must lead the response (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
Brando says that military AI and loss-of-control risks require action by public institutions and that businesses are not the proper actors, urging leaders to act quickly [30-34][35-37]; Sean argues that the main obstacle is competitive pressure on CEOs and calls for creating market conditions that enable companies to add safety steps, implying a central role for industry [13-15][16-19].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on leadership echo the broader discourse on government-industry collaboration, where regulators are positioned to harmonise safety standards while industry seeks regulatory certainty [S40][S45][S47].
Preferred mechanism to build trust – a broad co‑legislative code of practice vs domain‑specific use‑case trust controls
Speakers: Brando Benifei, Paola
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Emphasize narrowing the gap between perceived AI power and deployment by tailoring trust controls to specific domains such as medicine versus customer service (Paola)
Brando promotes a multi-stakeholder code of practice that sets general, clear yet flexible rules for AI risk mitigation [2-4][7]; Paola argues that trust should be achieved by focusing on concrete use-cases and designing sector-specific controls, noting different trust needs in medicine versus customer service [21-24].
POLICY CONTEXT (KNOWLEDGE BASE)
The EU GPAI Code of Practice exemplifies a broad, flexible legislative tool, whereas ISO/NIST-style standards illustrate domain-specific trust mechanisms, highlighting the trade-off between comprehensive codes and targeted controls [S35][S44][S45].
Assessment of current industry compliance – companies already complying vs companies constrained by competition
Speakers: Brando Benifei, Sean
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
Brando notes that many companies are already complying with much of the code of practice’s risk-mitigation provisions, suggesting readiness to adopt the rules [5]; Sean counters that CEOs feel unable to take additional safety steps because of geopolitical competition, describing this as a red alarm bell and a barrier to further compliance [13-15][16-17].
POLICY CONTEXT (KNOWLEDGE BASE)
While many firms express willingness to adopt regulation, competitive pressures can limit compliance, a tension noted in discussions on industry-regulator collaboration [S40] and competition-law analyses of digital markets [S46].
Unexpected Differences
Public‑institution‑led governance vs industry‑centric safety conditions
Speakers: Brando Benifei, Sean
Highlight the urgency of addressing military AI use and loss‑of‑control risks, asserting that public institutions—not businesses—must lead the response (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
While both are AI governance experts, Brando insists that only public institutions should drive regulation of high-risk AI, especially military applications, whereas Sean focuses on shaping market conditions so that companies can voluntarily adopt safety measures, an unexpected divergence in the perceived primary actors for AI risk mitigation [30-34][13-19].
POLICY CONTEXT (KNOWLEDGE BASE)
The literature stresses governments as the coordinating hub for technical and societal safeguards, contrasting with industry-driven safety regimes that may prioritize market considerations [S45][S47][S40].
Overall Assessment

The panel shows moderate disagreement centered on who should steer high‑risk AI governance and on the preferred technical approach to building trust. Brando pushes for public‑institution‑led, co‑legislative codes and urgent international action, while Sean emphasizes creating industry‑friendly conditions and broader global cooperation. Paola’s domain‑specific trust‑control proposal also diverges from Brando’s broad code approach. Despite these differences, all participants converge on the shared goal of trustworthy AI and agree that cooperation—whether political or market‑based—is essential.

Medium level of disagreement: substantive differences in governance locus and implementation mechanisms, but not outright conflict. The implications are that any future AI governance framework will likely need to blend public‑institutional oversight with industry incentives and sector‑specific safeguards to achieve consensus.

Partial Agreements
All speakers share the overarching goal of aligning AI innovation with trust and safety—Brando through a code of practice, Sean by shaping industry‑friendly safety conditions, Paola via domain‑specific trust controls, and Speaker 2 by stating that innovation and trust can go together and calling for continued collaboration [7][12-19][21-24][38-43].
Speakers: Brando Benifei, Sean, Paola, Speaker 2
Advocate a co‑legislative, multi‑stakeholder code of practice that is clear yet flexible, to prevent existential and systemic risks and build citizen trust (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean) Emphasize narrowing the gap between perceived AI power and deployment by tailoring trust controls to specific domains such as medicine versus customer service (Paola) Conclude that innovation and trust can coexist, urging continued collaborative work and adoption of the Code of Practice to sustain this balance (Speaker 2)
Both agree that international cooperation is essential—Brando emphasizes urgent political action to manage deployment risks [28-37]; Sean also calls for global cooperation but frames it as creating conditions for firms across Europe, the US and China [12-19].
Speakers: Brando Benifei, Sean
Stress the necessity of international cooperation to address huge deployment risks and urge leaders to act promptly (Brando Benifei) Call for establishing conditions that allow companies to prioritize safety despite competitive geopolitical pressure, and for global cooperation that includes Europe, the US, and China (Sean)
Takeaways
Key takeaways
A multi‑stakeholder, co‑legislative Code of Practice is proposed to provide clear yet flexible rules for AI risk mitigation and to build citizen trust. International cooperation is essential; leaders must create conditions that allow companies worldwide (including Europe, the US, and China) to prioritize safety despite geopolitical competition. Trust mechanisms should be tailored to specific use‑cases and domains (e.g., medicine vs. customer service) to bridge the gap between perceived AI power and responsible deployment. Public institutions, not private firms, should take the lead on high‑risk AI applications such as military use and loss‑of‑control scenarios. Innovation and trust are not mutually exclusive; ongoing collaboration and adoption of the Code’s safety chapters can align them.
Resolutions and action items
Encourage the European AI Office to be equipped with the necessary tools and authority to implement the Code of Practice. Invite leaders to review the safety chapters of the Code of Practice and consider them as reference standards for other jurisdictions. Call for the establishment of conditions (regulatory, financial, or collaborative frameworks) that enable AI developers to adopt additional safety measures. Promote the organization of international forums that bring together European, US, and Chinese stakeholders to discuss coordinated AI governance. Continue the dialogue on context‑specific trust controls, with follow‑up work on sector‑specific guidelines.
Unresolved issues
Specific mechanisms for creating the “conditions” that allow companies to prioritize safety under competitive pressure. Concrete steps for achieving global consensus and equal participation among Europe, the US, and China. Detailed regulatory approaches for military AI and loss‑of‑control risks. Implementation pathways for the Code of Practice across diverse national legal systems. How to operationalize trust controls for different domains without stifling innovation.
Suggested compromises
Maintain a balance between stringent safety requirements and the diffusion/action goals of AI deployment, allowing both to progress in parallel. Adopt a flexible Code of Practice that provides clear objectives while permitting adaptation to evolving technological contexts. Focus on sector‑specific trust frameworks rather than a one‑size‑fits‑all approach, enabling tailored safeguards without over‑regulating all AI applications.
Thought Provoking Comments
We decided to put in the AI Act a provision for a code of practice that would be co‑legislative, involving civil society, developers, SMEs, large enterprises and academia, to build rules that are adaptable to the evolving AI landscape and address existential and systemic risks to democracy.
Introduces the novel governance mechanism of a co‑created code of practice rather than rigid legislation, highlighting inclusivity and flexibility as a way to keep pace with rapid AI development.
Sets the foundational framework for the discussion, prompting other panelists to consider how such a collaborative approach can be operationalised and leading to subsequent remarks about implementation, trust, and the need for public‑sector leadership.
Speaker: Brando Benifei
The CEOs of leading AI companies say they would like to take additional safety steps, but competitive geopolitical pressure prevents them; this should be a red alarm bell, and we must create conditions that allow them to focus on safety, share expertise, coordinate, and possibly slow down at critical points – involving Europe, the US, and China as equals.
Challenges the assumption that industry will self‑regulate, exposing the tension between market competition and safety, and calls for a global, cooperative governance model.
Shifts the conversation from European‑centric policy to a broader international coordination imperative, influencing Brando’s later emphasis on global cooperation and prompting the audience to think beyond regional solutions.
Speaker: Sean
Focus on the use cases. The bottleneck is not the technology but how organisations trust it in the right context—medicine versus customer service require different trust controls; addressing context unlocks productivity and trust simultaneously.
Introduces a pragmatic, sector‑specific lens that moves the debate from abstract risk mitigation to concrete deployment scenarios, highlighting the importance of contextual trust mechanisms.
Redirects the dialogue toward practical implementation challenges, encouraging participants to consider domain‑specific standards within the broader code of practice and enriching the discussion with actionable insights.
Speaker: Paola
We must not contrast safety with diffusion; they must go in parallel. There are deployment areas where, without international cooperation, we face huge risks—military AI, loss‑of‑control risks. Public institutions, not businesses, must lead this effort and we cannot afford to lose more time.
Reframes the perceived trade‑off between safety and innovation as a false dichotomy, stresses urgency, and expands the scope to include military and loss‑of‑control risks, calling for decisive public‑sector action.
Reinforces the earlier call for global cooperation, intensifies the urgency of the discussion, and serves as a turning point that moves the panel from describing the code of practice to demanding concrete, time‑sensitive political action.
Speaker: Brando Benifei
Innovation and trust can go together; the safety chapters of the Code of Practice can become standards that other countries sign up to. Continued collaboration is essential.
Synthesises the panel’s diverse points into a unifying message, positioning the Code of Practice as a bridge between innovation and trust and inviting broader adoption.
Provides closure by linking all previous insights—co‑legislative process, international cooperation, use‑case focus, and urgency—into a concrete next step, reinforcing the collaborative tone for future work.
Speaker: Speaker 2 (moderator)
Overall Assessment

The discussion was shaped by a series of escalating insights that moved from outlining a collaborative European governance model to exposing the global competitive pressures that hinder safety, then to grounding the debate in sector‑specific trust challenges, and finally to a rallying call for urgent, coordinated public‑sector leadership across borders. Each pivotal comment broadened the scope of the conversation, introduced new dimensions of complexity, and prompted participants to re‑evaluate assumptions, culminating in a consensus that innovation, safety, and trust are mutually reinforcing goals that require an inclusive, international framework.

Follow-up Questions
How can the European AI Office be given sufficient authority, resources, and tools to effectively implement the AI Code of Practice?
Ensuring the Office can enforce the code is crucial for translating legislative intent into practical, enforceable standards and building public trust.
Speaker: Brando Benifei
What mechanisms or policy frameworks can create conditions that allow private AI developers to adopt additional safety measures despite competitive and geopolitical pressures?
Addressing the tension between market competition and safety is essential to enable companies to prioritize responsible AI development.
Speaker: Sean
How can global AI stakeholders—including European, US, and Chinese companies—be brought to the negotiation table as equals to cooperate on AI safety and governance?
International coordination is needed to manage cross-border AI risks and prevent a fragmented regulatory landscape.
Speaker: Sean
What domain‑specific trust controls and validation methods are appropriate for different AI use cases, such as healthcare versus customer service?
Tailoring trust mechanisms to context ensures that AI systems are both effective and safe in varied operational environments.
Speaker: Paola
What international governance approaches are needed to address the military use of AI and the associated loss‑of‑control risks?
Military AI poses existential and security threats that require coordinated public‑institutional action beyond private sector self‑regulation.
Speaker: Brando Benifei
Which safety standards from the AI Code of Practice can be extracted and adapted as benchmarks for other countries to adopt?
Identifying transferable standards facilitates global harmonisation and helps non‑EU nations align with proven safety practices.
Speaker: Speaker 2

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the Next Wave of AI_ Responsible Frameworks & Standards

Building the Next Wave of AI_ Responsible Frameworks & Standards

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened with the moderator stressing that the most pressing challenge for AI is to ensure its impact is safe, responsible, ethical, inclusive and explainable, and that effective safety benchmarks must emerge from real-world deployment rather than isolated research labs [4-6][8-13]. He introduced the RAISE Index – a tool that lets organisations test AI systems against a practical framework – and described Telangana’s Data Exchange sandbox that gives startups access to government data for validating benchmarks [13-15][16-19]. He also warned that benchmarks cannot remain static checklists because AI capabilities outpace regulatory cycles, so continuous evolution of the framework is essential [24-27].


Highlighting India’s role, the moderator argued that the country’s multilingual, large-scale and resource-constrained environment is a competitive advantage for shaping global AI standards [28-35]. The RAISE Index, he said, harmonises requirements from the EU AI Act, NIST, Singapore and UK guidelines into a single portable assessment [39-42], and is designed as an iterative, living methodology that will be updated through pilot phases and stakeholder consultation [44-48].


Arundhati Bhattacharya then explained that Salesforce created a “humane and ethical use” office in 2014 to review every product before market launch, reflecting an early commitment to trust and accountability [58-66]. She argued that AI’s dual-use nature – from deep-fakes to life-saving medical research – necessitates a global compact with transparent information exchange to curb bad actors [67-78]. The need for a shared framework, she concluded, is vital to ensure AI serves humanity rather than being misused [82-83].


Karna Chokshi described how her startup embeds governance guardrails directly into AI products, making compliance a core feature rather than a lengthy PDF, and treats human-in-the-loop as a first-class capability [96-102]. By productising these controls, the company has enabled tens of thousands of MSMEs to build voice-agent hiring tools within minutes, demonstrating that integrated governance drives mass adoption [103-106].


Ankush Sabharwal emphasized that sovereign data control – through on-premise AI appliances and client-chosen data centres – is the primary trust factor for sensitive sectors such as defence and finance [108-110]. He linked trust to explainability, accuracy (aiming for 99.9% reliability), and the use of compliance APIs that turn regulatory checklists into reusable services, advocating purpose-led innovation that balances risk with speed [141-148][151-170].


The moderator thanked the panelists, and Kazim Rizvi closed by urging participants to adopt India’s first Responsible AI readiness tool, reinforcing the call for responsible-by-design practices across the ecosystem [193-198][202-209].


Keypoints

Major discussion points


Dynamic, co-created safety benchmarks are essential – The moderator stressed that AI safety benchmarks must emerge from real-world deployment, be co-created with industry, academia and government, and remain “living infrastructure” that evolves faster than regulatory cycles. The RAISE Index is presented as a concrete, phase-based methodology for this purpose. [4-12][13-28][44-48]


India’s innovation hubs are shaping global responsible-AI standards – Initiatives such as the Telangana Data Exchange give startups sandboxed access to government data for real-world testing, while the RAISE Index harmonises requirements from the EU AI Act, NIST, Singapore and UK guidelines, positioning India as a leader in inclusive, high-scale AI deployment. [16-23][28-38][39-44]


Corporate commitment to trust and a global compact – Salesforce described its “Office for the humane and ethical use of technology” that vets every product before launch and highlighted trust as its top corporate value, calling for a worldwide agreement to curb misuse of AI. [58-66][112-118]


Start-up and MSME operational challenges and product-centric solutions – Panelists argued that governance, compliance and observability should be baked into the core product (e.g., via built-in guardrails and APIs), that human-in-the-loop remains vital, and that data sovereignty (on-prem or edge AI) is a key trust factor for enterprise customers. [96-102][108-110][151-170]


Choosing between large and small language models – The discussion turned to the trade-offs of using powerful LLMs for speed of innovation versus smaller, more cost-effective models for latency and risk management, with a recommendation to start with LLMs and later transition to SLMs as needs mature. [191-193]


Overall purpose / goal of the discussion


The panel aimed to synthesize lessons from the Global AI Summit and chart a practical roadmap for “responsible AI” – ensuring that AI systems are safe, ethical, inclusive and explainable by developing adaptable benchmarks, sharing real-world testing infrastructures, and fostering cross-sector collaboration (government, academia, industry, startups) to embed trust and governance from the design stage onward.


Overall tone and its evolution


Opening: Formal, optimistic and forward-looking, emphasizing the importance of holistic, responsible AI. [4-7]


Middle: Becomes more pragmatic and technical, with concrete examples of frameworks, data-exchange sandboxes, and product-level governance, reflecting the challenges faced by startups and large enterprises. [16-23][96-102][108-110]


Later: Shifts to a collaborative, solution-focused tone, highlighting shared values like trust and the need for a global compact, while also acknowledging fatigue and the intensive nature of the work. [112-118][194-209]


Overall, the conversation maintained a constructive and collaborative spirit, moving from high-level vision to detailed implementation concerns, and concluding with a hopeful but realistic acknowledgment of the effort required to achieve responsible AI at scale.


Speakers

Moderator


– Role/Title: Session Moderator


Kamesh Shekar


– Role/Title: Moderator of the panel; Youth Ambassador at The Internet Society [S13][S14]


Arundhati Bhattacharya


– Role/Title: Senior Executive, Salesforce (leads the Office for Humane and Ethical Use of Technology)


Karna Chokshi


– Role/Title: Founder / CEO of a voice-AI startup (provides voice agents for enterprises) [S6][S7]


Ankush Sabharwal


– Role/Title: Founder / CEO of Vada AI (developer of Vada GPT desk-AI appliance) [S1]


Kazim Rizvi


– Role/Title: Founding Director, The Dialogue [S4][S5]


Additional speakers:


Fani – referenced in the closing remarks; no role or title provided.


Sarj – mentioned as taking over the session; no role or title provided.


Sahish – thanked in the closing remarks; no role or title provided.


Full session report: Comprehensive analysis and detailed insights

The moderator opened the session by welcoming the audience and framing the final discussion of the Global AI Summit around the need to make AI “safe, responsible, ethical, inclusive and explainable,” emphasizing that these goals must be pursued holistically [1-6]. He highlighted that the week’s sessions demonstrated how AI can deliver intended benefits while avoiding unintended harms, and he stressed that governments, innovation hubs, academia and startups each play a critical role in developing safe and ethical AI [7-10]. Notably, he asserted that “government innovation hubs sit at this critical intersection between policy intent and operational reality, surfacing failure modes and trust gaps” [7-10]. He also argued that safety benchmarks must arise from real-world deployment rather than isolated research labs, because benchmarks created in isolation tend to fail [8-10][11-12].


Building on this premise, the moderator introduced the RAISE Index – a co-created, phase-based assessment framework developed by ICOM and The Dialogue over the past eighteen months [13-14]. He emphasized that the released tool is the first edition of a Responsible-AI readiness framework and will evolve through pilot phases and stakeholder consultation [13-14][44-48]. The index quantifies the safety and responsibility impact of AI during both development and deployment, and a QR code displayed on the screen enables participants to test their own AI solutions against the framework [14-15]. He highlighted three further requirements: benchmarks must be practical, they must constitute “living infrastructure” that evolves faster than regulatory cycles, a principle he termed “continuous learning” [24-27][44-48], and they must be embedded through pilot phases and ongoing stakeholder feedback [24-27].
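

The session does not disclose the index’s internal dimensions, weights or scoring, so the following is only an illustrative sketch of what a phase-based readiness assessment of this general shape could look like; every dimension name and weight below is a hypothetical stand-in, not a description of the RAISE Index itself.

# Hypothetical illustration of a phase-based readiness assessment. Earlier-stage
# organisations are assessed on fewer, lighter-weight dimensions; all names and
# weights here are assumed stand-ins for the general pattern.
PHASES = {
    "prototype": {"data_governance": 0.5, "transparency": 0.5},
    "pilot": {"data_governance": 0.3, "transparency": 0.3, "human_oversight": 0.4},
    "production": {"data_governance": 0.25, "transparency": 0.25,
                   "human_oversight": 0.25, "incident_response": 0.25},
}

def readiness_score(phase: str, ratings: dict[str, float]) -> float:
    """Weighted average of 0-1 ratings over the dimensions relevant to a phase."""
    weights = PHASES[phase]
    return sum(weights[dim] * ratings.get(dim, 0.0) for dim in weights)

print(readiness_score("pilot", {"data_governance": 0.8, "transparency": 0.6,
                                "human_oversight": 0.9}))  # -> 0.78

Structured this way, updating weights or adding dimensions as models evolve becomes a data change rather than a methodology rewrite, which is one way a benchmark can behave like “living infrastructure”.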


ICOM’s pioneering role was then highlighted. The moderator described ICOM as “the first-of-its-kind innovation AI entity out of Telangana,” noting its contributions to high-stakes deployments in health, agriculture, climate and financial inclusion [36-38]. This underscores ICOM’s unique position as a real-world testbed for the RAISE Index and related safety benchmarks.


The moderator next described Telangana’s Data Exchange, a first-of-its-kind digital public infrastructure that offers startups sandboxed access to government data sets [16-18]. This sandbox enables startups to validate benchmarks against real data, use-cases and constraints before full deployment [19-20]. He linked this to the reality that startups move quickly and therefore need early-warning mechanisms within the RAISE framework to detect and remediate risks [21-23][24-27].


Turning to India’s global positioning, the moderator argued that the country’s multilingual, large-scale and resource-constrained environment – shared by many developing nations – is a competitive advantage for shaping inclusive AI standards [28-33][34-35]. He cited India’s ability to demonstrate responsible AI in high-stakes, high-scale deployments across health, agriculture, climate and financial inclusion [36-38]. The RAISE Index, he explained, harmonises requirements from the EU AI Act, NIST AI Risk Management Framework, Singapore’s guidelines and the UK AI Assurance, providing a single portable assessment for organisations operating in multiple markets [39-42][43-44]. Its methodology is open and adaptable for other jurisdictions [43-44].


Arundhati Bhattacharya (Salesforce) recounted that the company launched an “Office for the humane and ethical use of technology” in 2014, which reviews every product and process before market release [58-61]. She positioned trust as Salesforce’s number-one corporate value and described a cloud-native “trust layer” that actively checks for bias, toxicity, hallucination and data leakage [112-118][129-136]. She warned that AI’s dual-use nature – from deep-fakes to life-saving medical research – demands a global compact with transparent information exchange to curb malicious actors, emphasizing that such a compact must be a collective, not unilateral, effort [62-78][67-68][82-83].


Karna Chokshi described her startup’s approach of embedding governance guardrails directly into the AI product at every stage – input, reasoning, tool-calling and output – and treating human-in-the-loop as a first-class feature rather than a failure point [96-99][100-102]. By productising these controls, the company has enabled roughly 30,000 companies, on a hiring platform used by around 3 lakh, to build voice-interview agents on their own within minutes, illustrating how integrated governance drives mass adoption [103-108][107-108]. She advocated converting regulatory compliance into reusable APIs, making compliance an infrastructure layer rather than a lengthy PDF, and set privacy-preserving defaults: “customer data should not be used by default to train LLMs… it should be an optional add-on” [155-168][165-174].
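

The panel stays at the level of principle here; purely as an illustration of the pattern described (guardrails before input, tool calls and output, with escalation to a human rather than a wrong answer), a single agent turn might be wired as in the sketch below. All function names, placeholder rules and the escalation behaviour are hypothetical, not a description of any panelist’s product.

from dataclasses import dataclass

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""

def check_input(text: str) -> GuardrailResult:
    # Placeholder rule: reject inputs that look like they carry sensitive data.
    return GuardrailResult("ssn" not in text.lower(), "possible PII in input")

def check_tool_call(tool: str, args: dict) -> GuardrailResult:
    # Placeholder rule: only an explicit allowlist of tools may be invoked.
    return GuardrailResult(tool in {"crm_write", "send_message"},
                           f"tool {tool!r} not allowed")

def check_output(text: str) -> GuardrailResult:
    # Placeholder rule: block answers containing policy-flagged terms.
    return GuardrailResult("confidential" not in text.lower(), "policy term in output")

def run_agent_turn(user_input: str, plan_tool_call, generate_answer):
    """One agent turn with guardrails at the input, tool-call and output stages.

    Any failed check escalates to a human instead of emitting a wrong answer,
    treating human-in-the-loop as a feature rather than a failure point.
    """
    gate = check_input(user_input)
    if not gate.allowed:
        return ("escalate_to_human", gate.reason)

    tool, args = plan_tool_call(user_input)            # caller-supplied reasoning step
    gate = check_tool_call(tool, args)
    if not gate.allowed:
        return ("escalate_to_human", gate.reason)

    answer = generate_answer(user_input, tool, args)   # caller-supplied generation step
    gate = check_output(answer)
    if not gate.allowed:
        return ("escalate_to_human", gate.reason)
    return ("answer", answer)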


Ankush Sabharwal highlighted the importance of data sovereignty for sectors such as defence and finance. His firm offers on-premise AI appliances – the VadaGPT super-computer capable of petaflop-scale processing – to give clients full control over data and audit trails [108-110][111-112]. He stressed that trust in these high-stakes contexts is built on ultra-high accuracy (99.9% reliability), observability and purpose-driven AI, and that compliance can be delivered through APIs that encode regulatory checklists [141-147][151-160][161-170]. He also noted that the company is “risk-averse” and “not 100% safe,” even while delivering the stated accuracy for critical clients [141-147].


When asked about model selection, Karna suggested starting with large language models (LLMs) to accelerate innovation and later transitioning to smaller, domain-specific language models (SLMs) for latency and cost benefits [191-193]. Ankush echoed a similar staged approach but noted that 80-90% of interactions in his company’s hiring platform are handled by classic NLP, reserving generative AI for complex queries, thereby adopting a composite AI strategy that balances accuracy, efficiency and risk [147-152].
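

Again as a sketch only: a composite setup of this kind usually amounts to a confidence-gated router, with a cheap deterministic path for common intents and a generative fallback for the long tail. The classifier, intents and threshold below are invented for illustration; the 80-90% split is a property of the traffic, not of the code.

def classify_intent(query: str) -> tuple[str, float]:
    # Stand-in for a classic NLP intent classifier returning (intent, confidence).
    canned = {"interview status": ("status_lookup", 0.97),
              "reschedule": ("reschedule_interview", 0.95)}
    for phrase, result in canned.items():
        if phrase in query.lower():
            return result
    return ("unknown", 0.2)

def answer_query(query: str, templated_reply, llm_reply, threshold: float = 0.8) -> str:
    """Route confident classifications to a templated reply; otherwise fall back
    to the slower, costlier generative model."""
    intent, confidence = classify_intent(query)
    if confidence >= threshold:
        return templated_reply(intent)   # deterministic path for the common cases
    return llm_reply(query)              # generative fallback for complex queries

The same gate also gives a natural migration path from an LLM-first deployment to smaller domain models: start with llm_reply handling everything, then grow the deterministic (or SLM-backed) path as usage data accumulates.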


Kamesh Shekar reinforced the notion that responsible AI can be a marketable value proposition, encouraging companies to embed trust and governance as differentiators that appeal to customers [103-106][111]. He also pointed out that the panel’s discussion had moved from high-level policy framing to concrete technical solutions, underscoring the need for collaborative, ecosystem-wide effort [84-89][90-95].


In the closing segment, Kazim Rizvi thanked the participants, acknowledged the fatigue after a week of intensive events, and urged the audience to adopt India’s first Responsible-AI readiness tool – the RAISE Index – as a “responsible-by-design” practice [193-199][202-209]. He announced forthcoming Dialogues on AI policy and encouraged continued engagement, framing the summit’s legacy as a catalyst for ongoing responsible-AI work [206-209].


Key agreements that emerged across the panel were: (i) safety benchmarks must be co-created, practical, and continuously updated, derived from deployment realities [8-12][24-27][45-48]; (ii) trust is a foundational pillar, manifested through dedicated trust layers, ultra-high-accuracy models and purpose-driven design [112-118][129-136][141-147]; (iii) global collaboration and harmonisation of standards are essential, exemplified by the RAISE Index and calls for a worldwide compact [62-68][39-42]; and (iv) governance should be embedded directly into AI products, moving from checklists to built-in guardrails and API-based compliance [96-99][155-168][103-108].


Points of disagreement included: (i) the preferred technical route to trustworthy governance – Karna’s product-centric guardrails versus Ankush’s emphasis on sovereign on-premise hardware [96-99][108-110]; (ii) the optimal balance between LLMs and classic NLP/composite AI, with Karna advocating an LLM-first roadmap and Ankush stressing the dominance of classic NLP for most use-cases [191-193][147-152]; and (iii) the relative weight of a policy-level global compact versus technical benchmark tools like the RAISE Index for preventing AI misuse [62-68][13-14][39-42].


Take-aways and action items (each sentence ends with a citation):


1. Scan the QR code and use the RAISE Index to assess AI systems [14-15].


2. Treat the index as a living framework that will evolve through pilots and stakeholder feedback [24-27][44-48].


3. Embed trust layers and governance guardrails into AI product architectures rather than treating them as separate documentation [96-99][112-118].


4. Expose compliance requirements as open-source, reusable APIs with privacy-preserving defaults (a sketch follows this list) [155-168][165-174].


5. Consider on-premise or edge AI solutions for sovereign data needs [108-110][111-112].


6. Adopt a purpose-led innovation approach – define the problem first, then select the appropriate model (LLM or SLM) and ensure high accuracy before scaling [191-193][147-152].


7. Contribute to a global compact that combines transparent information exchange with technical standards [62-68][39-42].
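

On action item 4, the discussion gives no concrete interface, so the following is a minimal sketch of the “compliance as a reusable API” idea under assumed rule names and an assumed config schema: checklist items become predicates over a machine-readable deployment config, and the report they produce can sit behind a shared endpoint that every product reuses.

from typing import Callable

# Each checklist item is a named predicate over a deployment config. The rule
# names and config keys are hypothetical; note the privacy-preserving default:
# customer data is NOT used for training unless the customer explicitly opts in.
RULES: dict[str, Callable[[dict], bool]] = {
    "training_opt_in": lambda cfg: (not cfg.get("use_customer_data_for_training", False))
                                   or cfg.get("customer_opted_in", False),
    "human_escalation": lambda cfg: cfg.get("human_in_the_loop", False),
    "audit_logging": lambda cfg: cfg.get("audit_log_enabled", False),
}

def compliance_report(cfg: dict) -> dict[str, bool]:
    """Evaluate every rule against a deployment config and return pass/fail."""
    return {name: rule(cfg) for name, rule in RULES.items()}

print(compliance_report({"use_customer_data_for_training": False,
                         "human_in_the_loop": True,
                         "audit_log_enabled": True}))
# -> {'training_opt_in': True, 'human_escalation': True, 'audit_logging': True}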


Thought-provoking remarks that shaped the discussion included the moderator’s assertion that “benchmarks must emerge from deployment reality, not just research labs” [8-10]; the introduction of the RAISE Index as a unifying metric [39-42]; Arundhati’s call for a worldwide compact to stop bad actors [62-68]; Karna’s insistence that governance be built into the core product with guardrails at every stage [96-99]; and Ankush’s emphasis on data sovereignty as the cornerstone of trust [108-110].


Follow-up questions for future inquiry:


– How can a global compact be operationalised to curb AI misuse? [62-68]


– What mechanisms will ensure continuous evolution of safety benchmarks? [24-27]


– How can compliance be transformed into reusable APIs at scale? [155-168]


– What governance differences arise between public and private AI deployments? [84-89]


– How should trust layers be designed to mitigate bias and hallucination in cloud-native services? [112-118][129-136]


– How does data sovereignty affect adoption in critical sectors? [108-110]


– What default privacy settings should govern the use of customer data for model training? [165-174]


– How can human-in-the-loop be positioned as a feature rather than a failure point? [100-102]


– How effective is the RAISE Index across diverse regulatory regimes? [13-14][44-48]


Overall, the discussion demonstrated strong consensus on the importance of trust, co-created benchmarks and global collaboration, while revealing nuanced disagreements on the technical pathways to achieve these goals. The panel concluded with a clear call to action: adopt the RAISE Index, embed responsible AI into product design, and pursue a multi-layered strategy that combines policy, standards and innovative infrastructure to realise safe and inclusive AI at scale.


Session transcript: Complete transcript of the session
Moderator

Thank you. Good afternoon, everyone. I know it’s Friday afternoon, almost end of a fantastic Global AI Summit. And good afternoon to my fellow distinguished panelists. I think the topic of this particular panel, it’s probably the apt one to wrap up this Global AI Summit because the most important arc in this innovation, the innovation of AI is making sure… the impact of the AI is safe, responsible, ethical, inclusive, and explainable, right? And it has to be holistic at the end of the day. I think there’s a lot that we have learned over the course of this week, listening to a number of different thought leaders talking about how AI could be channeled in a manner where it delivers the intended impact without getting into unintended consequences.

I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing this safe and ethical AI, right? Starting with: benchmarks must emerge from deployment reality, and not just research labs. Safety benchmarks fail when developed in isolation; the most effective ones come from institutions building, deploying, and maintaining AI at scale, right? Government innovation hubs sit at this critical intersection between policy intent and operational reality, surfacing failure modes and trust gaps. The second most important element in this framework is to ensure these safety benchmarks are co-created with the industry and with academia and the research institutions. ICOM and The Dialogue developed a one-of-its-kind index called the RAISE Index over the last year and a half that we have been working together, which is the first of its kind in quantifying the value and impact of AI, within deployment and during development, on the safety and responsibility matrix.

And this is, as you can see up here on the screen, the QR code; you can scan it to get access to the entire framework, and you could even test the AI solutions or AI systems that you might be developing, or already have in production, against it and see what the index comes back and tells you. The third is making benchmarks practical. In Telangana, we have launched the Telangana Data Exchange, which is a first-of-its-kind digital public infrastructure within the realm of AI. It provides startups access to government data sets in a sandboxed environment. This is where benchmarks get validated and time-tested. Startups can test their AI systems against actual data, actual use cases, actual constraints before deployment.

The fourth is, we all understand and recognize that startups move at a rapid pace. So when startups are deploying AI solutions, there are a number of risks that emerge. And we are providing this index, again, as part and parcel of the whole startup ecosystem that we are building. And as a result, we expect them to detect any early warning signs within this framework and continue to improve it. The last is benchmarks and frameworks must be living infrastructure, not static checklists, right? AI capabilities evolve faster than regulatory cycles; static benchmarks become obsolete. Hubs must institutionalize continuous benchmark evolution. The RAISE Index methodology includes phase-based assessment, ensuring benchmarks remain relevant to company maturity stages. So if you take this broader framework of how we make sure AI systems are safe and responsible and ethical, the question comes down to how India is leveraging its innovation hubs and its leadership position in shaping the global dialogue on inclusive and responsible AI.

What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed for high-resource, homogeneous environments. India operates in the context that most of the developing world shares: multilingual populations, infrastructure constraints, massive scale, and the imperative to serve both economic growth and social inclusion. This is not a limitation; this is a significant competitive advantage that India has in shaping the global standards. Number two is demonstrating responsible AI in high-stakes, high-scale deployments, which we are offering. ICOM, the first-of-its-kind AI innovation entity out of Telangana, with its research and co-innovation pillar, helps build AI solutions for healthcare, agriculture, climate, financial inclusion, where failures have immediate societal impact.

When we document how these systems are designed, tested, and governed, we contribute frameworks that have been validated under real-world complexity, not just lab conditions. This particular RAISE Index is India’s contribution to global standardization. You will notice, the more you dig into this index, that it harmonizes requirements across leading global frameworks, be it the EU AI Act, the NIST AI Risk Management Framework, the Singapore MAS guidelines or the UK AI Assurance. We brought it all together into a single portable assessment. Organizations operating in multiple markets can use one assessment to evaluate alignment with diverse regulatory expectations. The methodology is open and adaptable for other jurisdictions. And I would leave you with a last but very important point of institutionalized continuous learning in responsible AI practice, right?

Most frameworks are static standards. ICOM believes in creating systems with ongoing feedback: tracking system performance over time, updating benchmarks as models evolve, incorporating new research. And the RAISE Index is designed as an iterative framework. What we are releasing today is the first edition, and it will continue to evolve through pilot phases and stakeholder consultation; it’s not a one-time standard. We all know AI is an evolving technology and this has to evolve, but our intent, goal and hope is that this would keep pace with the pace at which the technology is moving. That is very critical, and that’s a common responsibility that we all hold, be it technologists, be it policy makers, be it think tanks, be it researchers or startups. We all have to come together as an ecosystem to ensure the technology that we put out there, with the intent of doing benefit for society, does exactly that, without any unintended consequences. So I think we are up for a fantastic panel, and you guys absolutely would enjoy the conversation that is going to be held now.

Thank you.

Kamesh Shekar

Thank you so much, sir, for setting the context. I think that sets the perfect context for us to pick up the conversation from there, which is what we are discussing today: reimagining responsible AI. What we are trying to do today in this panel is to understand what shifts are needed when it comes to responsibility with evolving innovations, and how we can take the needle forward when it comes to responsibility. I would like to start with Ms. Arundhati Bhattacharya here. Thank you so much, ma’am, for taking the time. It’s absolutely a pleasure to host you.

And the first question is to you, ma’am: as a global enterprise leader, how do you see the balance between rapid AI innovation and the need for trust, accountability and customer protection? How do you see that balance?

Arundhati Bhattacharya

So, you know, in the company that I work for, Salesforce, we started our AI journey in 2014. And in 2014, we also set up within the company an office for the humane and ethical use of technology. So this is an office, by the way, which goes through every one of our products, every one of our processes, before it is allowed to make its debut in the market. Because we realized very early on that while technology and AI could give us many advantages, it would also be used by bad actors for doing things which it was never intended for. And that is true of every single thing that, you know, we come up with. Whether it be a new medicine, whether it be nuclear energy, whether it be anything that we come up with, it can have its good use.

It can also be used for the wrong reasons. And that is something that we must come together in a global compact in order to defeat and in order to stop. Again, this has to be a global compact. It’s not something that one country or one organization or one effort can probably ensure. Because unless and until we have sufficient transparent information exchange, unless and until we all say together that this is not something that we will allow, it would be very difficult for us to stop the bad actors. It’s not easy. Today you see the kind of deep fakes that are there, stuff that we never thought of in our childhood, families having safe words amongst themselves.

It’s not something that was there at all. But today, in fact, I was asking a colleague from the US. And he was saying, yes, we do have a safe word in the family because we don’t know when somebody is going to get a call that’s going to sound like me. And it’s going to say that I’m in the hospital and I need so much money. Please come and get me. And it might be somebody entirely different trying to scam you. So we do have safe words. Now, imagine the extent to which we have gone, where we are having to teach children that these are the ways that you can be sure and you can be safe.

Now, this is not something that we want, because obviously, AI is also something that can speed up things like medical research. It can actually speed up skilling. It can speed up many things which enable us and empower us to come up to potential. So a technology this powerful should not and cannot be stopped because bad actors are misusing it. And therefore, it’s up to all of us to come up with a framework. A global compact, again, as I say, a framework that will enable us to ensure that we are all of us together trying to stop the bad actors and ensuring that this is being used for the good of humanity.

Kamesh Shekar

Excellent point, ma’am. I think a very interesting aspect is your starting remark about putting together an office on the humane aspect, which shows that it’s not only the technical side that can solve the problem when we talk about responsibility; it’s also organizational ethics and organizational ethos which bring that essence to it. And a great submission on the global compact; I think that’s something we should all strive towards, and I hope the summit will kickstart that process for us as well. I’ll come back to you, ma’am. I know you have a hard stop, but I do want to come back to you for one more question. But now I would like to go to Karna here.

Thank you so much, Karna, for joining. We did hear from ma’am about what can actually be done and how larger organizations are looking at this. But I would like to pick your brains: as a startup and an MSME, what are the operational challenges that you guys face when you are trying to balance this equation of responsibility versus innovation? And you guys are also looking at it with far-sightedness at new technologies. So any thoughts there would be great.

Karna Chokshi

We make the AI technology, which comes with a lot of power, be a bit more enterprise-software-ish in terms of compliance, governance and observability. That’s what we do. The way we believe is: if governance looks like a 200-page PDF that all companies and MSMEs have to figure out, we will see them struggle. Our idea is it should be a part of the core product. As a lot of us are building solutions for customers, governance should be the core product; we believe in productizing it, and that allows mass adoption. And the way we do it: just writing it into the prompt is only the first line of defense.

It should be a core part through the entire agentic lifecycle. Which means: at the time you’re giving it an input and it’s reasoning, there are guardrails it checks. Before it does some tool calling, which is like, hey, I’m going to write to the CRM, or I’m going to talk to one of your customers on this topic, there are again guardrails. And even when you produce an output, there needs to be a guardrail, and the guardrails should be part of the core product; that is important to drive mass adoption. And secondly, the way we think, knowing we build voice agents for companies, we still believe human-in-the-loop is a first-class feature, not a failure point. Which means you should design the system so that, in the intent to give an answer, it doesn’t give wrong answers; it’s okay to figure out when it should transition from fully autonomous to an assisted agent to a human. And that principle of using humans in the right place should be core to our product. That productization has allowed us… so we also have another company up now, which is a hiring platform, which allows around 3 lakh companies.

What we saw, beautifully, when we productized a lot of this is that every month, in fact, 3,000 MSMEs are building voice interview agents on their own. They don't even realize, because we have productized it, that at the back of it there are three agents they are creating and training for their recruiting process, and they deploy them within a matter of five minutes. That has driven adoption among 30,000 companies doing it on their own. If we want all of India, all companies, to leverage this, then the more we agent-based software builders productize it, the better the adoption will be.

Kamesh Shekar

That's an excellent point. This is something we keep speaking about as well: the productization of responsible AI from a value-proposition perspective. How can responsible AI be embedded as a value proposition in the product you're building, so that it also becomes one of its selling points? That's a great point. I'll definitely come back to you, but I would like to go to Ankush, and then I'll come back to ma'am. Quickly, Ankush: you build AI systems, so what are the governance challenges you see most, and how do they differ between public and private deployments?

Ankush Sabharwal

Yeah, I think one is control. When it's about sovereign AI, it's not just data residency that matters to our clients; they want complete control. No other government, no other party should be able to even see, sniff, or audit that data. That is something our clients ask for, and that's why, though we work with almost all the cloud providers, we let the decision rest with our clients as to which data center they want us to use. Now we see huge demand for on-premise solutions, and we had also seen the need for edge AI. The day before yesterday, with NVIDIA, we launched the VadaGPT Desk AI appliance. That's a supercomputer in itself: it processes around one petaflop of floating-point operations, has a 4 TB hard disk, and can run a model with one trillion parameters, which is huge. But our VadaGPT model is just half a billion parameters, which means they can run multiple models and multiple use cases on just one box. We'll be announcing that soon; we are working with defense. There is now a huge need to have it not just in India, not just on premises, but literally in the room, on the desk, when the army is doing critical meetings, because they don't want the data to even leave the room, yet with complete processing and complete sovereignty. And they don't want to limit the use cases either: they want to start with minutes of meetings, and the aspirations keep increasing, so we needed a supercomputer. Thanks to NVIDIA, who is powering our box. I think that is the major part; the rest we all know: explainability, inclusivity, privacy, and purpose. That's why many data centers are coming up in the country; there is a need to have our own data centers here.

Kamesh Shekar

That's excellent. I think what you're underlining is trust in the solutions, and that comes through sovereignty of the data: the more control they have over it, the more trust there is.

Ankush Sabharwal

That's correct. Our tagline now is "AI with purpose and trust". Trust is of course important in any relationship, including a vendor relationship, but with AI, trust matters even more, because clients are trusting us with their data to create the models. Many new companies are coming up, and of course I thank and welcome them to the table, but the established players are still being valued, so the work is still concentrated here, even though deliveries take time. My message to all the new AI startups is: yes, keep innovating, but show the trustworthy part of it. What was said about observability is very, very important. Enterprises want trust, scale, and security more than innovation. I'm not saying don't innovate, but the trust part is very important, especially when AI comes in.

Kamesh Shekar

That's a great and important submission. But ma'am, over to you; I think you have to leave in five minutes, so any closing remarks you would like to provide?

Arundhati Bhattacharya

No, the one thing I wanted to talk about was trust, because that's what was being discussed. Trust at Salesforce: trust is our number one value. We have five values; the first is trust, followed by customer success, innovation, equality, and sustainability. But trust is definitely number one. Now, having said that we are number one in trust, we are also a cloud-native company, so we do not have on-prem systems. We also believe it is important for us to adopt asset-light models, mainly because today the need for storage and compute is so high, given that AI is able to handle trillions and trillions of data points.

And the more data points you have, the better your answers will be. Of course, not for everything; you don't need to boil the ocean for every single thing. But where there are really deep questions that will benefit from the diversity and extent of the data, it is very important to have the right kind of compute and storage. Obviously, if you're going to have that kind of storage and compute entirely on-prem, it also means a pretty high investment in hardware, and India is not very well known for having deep pools of resources. So given that we necessarily have to have capital-light models, it's important for us to find ways and means of ensuring logical security and trust.

And there are ways of doing this; several ways. One of the reasons, by the way, that we were behind Copilot in bringing our enterprise-level offerings to market was that we were working very hard on the trust layer. Because the trust layer is not only about access: it's also about ensuring not only that your data doesn't go out, but also that your data doesn't have any toxicity, that your data doesn't have bias, and that the system is not hallucinating. And by the way, the bigger the amount of data, the greater the tendency to hallucinate. Obviously, you don't want something as important as this to hallucinate and give you a wrong answer.

So the Trust Layer actually performs a number of these actions, all meant to ensure that the results that come out are not only responsible but trustworthy. Thank you.

Ankush Sabharwal

We created it, and we launched it when we saw the need, and I'm still not saying we are 100% safe. But I've seen that the world is now okay with inaccuracies, right? We are a bit risk-averse; we were not big risk-takers even when the whole world was okay with it, because of the clients we have. You see our clients: IRCTC, LIC, NPCI, and the Army and defense. They used to expect 99.9% accuracy. While the whole world was okay getting wrong answers from these general-purpose LLMs, our clients got more convinced, and most of them were with us before the ChatGPT days, when it was classic NLP. I liked your point that we don't have to answer everything, right? So guardrails are really important.

But now most of our clients have moved to Gen AI. And not only Gen AI: we do composite AI. We still follow classic NLP-based intent classification and entity extraction for the conversation. You would not believe it, but 80 to 90 percent of our interactions happen through classic NLP, without Gen AI, because not every query needs it.

Take IRCTC: say four million people come to IRCTC. If I open the dashboard, there are only eight to ten intents people have: book, cancel, change boarding station, whatever. For 80 percent of use cases, if someone says "I want to travel from Bangalore to Delhi tomorrow", there is no Gen AI involved. The NLU is involved; that old model works, it just calls the API and gets the data. No Gen AI. But if someone says "I have three pets, can I carry them in my train?", and the policy we know covers one pet, that answer is probably not there in classic NLP. For that, we do the RAG-based approach with VadaGPT.

So if safety is important, that should be the core of the design, and then composite AI. Don't do just Gen AI because Gen AI is easily available, and don't use Gen AI because you have money to buy GPUs and burn the tokens. The idea is purpose-led innovation: begin with the end in mind. I have said this line I think ten times today. First see what problem you are solving, then which solution, then which model. If a model is available, use the available model; if not, build one.
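As a rough illustration of the composite-AI routing Ankush describes, here is a minimal Python sketch, not his actual system: a classic intent classifier handles the handful of known intents deterministically, and only unmatched long-tail queries fall through to a RAG-backed generative answer. The intent keyword table, the handler stubs, and the function names are all assumptions made for illustration.

# Minimal sketch of composite AI routing: classic NLP intent matching first,
# with a generative RAG fallback only for long-tail queries.

from typing import Optional

INTENT_KEYWORDS = {
    "book_ticket": ("travel from", "book"),
    "cancel_ticket": ("cancel",),
    "change_boarding": ("change boarding", "boarding station"),
}

def classify_intent(query: str) -> Optional[str]:
    # Classic-NLP stand-in: match the query against known intent keywords.
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in q for keyword in keywords):
            return intent
    return None  # long-tail query: no known intent

def handle_intent(intent: str, query: str) -> str:
    # Deterministic path: call the relevant backend API, no generative model.
    return f"[classic NLP] routing '{query}' to the {intent} API"

def rag_answer(query: str) -> str:
    # Placeholder for a retrieval-augmented generative answer over policy
    # documents (e.g. "I have three pets, can I carry them in my train?").
    return f"[RAG fallback] answering '{query}' from policy documents"

def route(query: str) -> str:
    intent = classify_intent(query)
    return handle_intent(intent, query) if intent else rag_answer(query)

print(route("I want to travel from Bangalore to Delhi tomorrow"))
print(route("I have three pets, can I carry them in my train?"))

The first query matches a known intent and never touches a generative model; only the pet-policy question falls through to the RAG path, mirroring the 80/20 split Ankush cites.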

Kamesh Shekar

That's an excellent point. Thank you so much, Ankush, for making the time. Quickly moving to Karna: any closing remarks, and anything you want to add to your previous point?

Karna Chokshi

Yeah, so to the point Ankush was mentioning: AI technology is fundamentally built on probabilistic models, and we are all used to software working in a deterministic manner, right? It has to do exactly this. Now, when it comes to large processes for large enterprises, compliance is one area that is super hard to think about. AI is probabilistic, but compliance you always want to be correct. So to enable the ecosystem, what we believe in is converting compliance into APIs. What I mean by that is, when we deploy a voice agent in one of the large mutual fund houses, all the compliances for that industry are checkboxes.

So every company can pick what compliances they need. They just need to take the APIs they want to enforce, and that makes the entire ecosystem flourish. These APIs should ideally be open-sourced in the market, so there is enough validation across all players that, hey, this SEBI guideline is an API you can invoke in your agent, the agent will follow it, and it has been pressure-tested. This takes away the burden of ensuring AI works 100% correctly in all use cases, which is not within the power of the technology; and if we don't think like that, we'll become very restrictive in its application. We work a lot on reaching P99 accuracy, but there is always a probabilistic element.
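A minimal sketch of the "compliance as APIs" pattern Karna describes: each guideline becomes a small, independently testable check that an agent invokes before responding. The SEBI-style rules and the function names here are hypothetical illustrations, not any real SEBI API.

# Hypothetical sketch of compliance-as-API: each guideline is a small,
# independently testable check an agent runs before emitting a response.

from typing import Callable, List

ComplianceCheck = Callable[[str], List[str]]  # returns a list of violations

def risk_disclosure_check(response: str) -> List[str]:
    # Illustrative rule: investment answers must carry a market-risk warning.
    text = response.lower()
    if "invest" in text and "market risk" not in text:
        return ["missing mandatory market-risk disclosure"]
    return []

def no_guaranteed_returns_check(response: str) -> List[str]:
    # Illustrative rule: never promise guaranteed returns.
    if "guaranteed returns" in response.lower():
        return ["promises guaranteed returns"]
    return []

def run_compliance(response: str, checks: List[ComplianceCheck]) -> List[str]:
    # Run every enabled check and collect all violations.
    violations: List[str] = []
    for check in checks:
        violations.extend(check(response))
    return violations

# A mutual-fund voice agent "ticks the checkboxes" its industry requires:
enabled_checks = [risk_disclosure_check, no_guaranteed_returns_check]
draft = "You should invest in this fund for guaranteed returns."
print(run_compliance(draft, enabled_checks) or "compliant")

Because each check is a plain function, it can be open-sourced, versioned, and pressure-tested independently of any one vendor's agent, which is the ecosystem effect Karna is after.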

And the second point we should think about: the human state of mind works on defaults versus options. Whatever is the default selection in anything you do gets 80 or 90 percent adoption, and whatever requires a change gets maybe 20 percent. So the way we think about it, a lot of things should be safe by default. Customer data should not be used by default to train LLMs or models; it should be an explicit opt-in rather than the other way around, which is what you often see. Because otherwise most startups, MSMEs, and businesses would ignore it, and the scale of innovation will not happen if that's not the default state.
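Karna's default-versus-optional point reduces to a configuration rule: the safe behavior ships as the default, and data use for training is an explicit opt-in. A minimal sketch, with field names assumed purely for illustration:

# Minimal sketch of safe-by-default configuration: customer data is excluded
# from model training unless a tenant explicitly opts in.

from dataclasses import dataclass

@dataclass
class TenantConfig:
    use_customer_data_for_training: bool = False  # safe default: off
    human_handoff_enabled: bool = True            # safe default: on

default_tenant = TenantConfig()  # a new MSME tenant gets the safe defaults
opted_in = TenantConfig(use_customer_data_for_training=True)  # explicit opt-in
print(default_tenant)
print(opted_in)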

And lastly, explainability is extremely important, because as models make decisions, how do you know why a given decision was made? If we make that a core output of the API, rather than saying "if something breaks, we'll figure out how it works", we enable our partners to be decision-makers with us when we design AI solutions for them. So that's what we focus on: how do we make this technology P99-accurate and available for enterprises, where governance is the prime question about what the missing element for mass adoption is. That's something I want the entire ecosystem to embrace.

Can we make it an API? Can compliance and governance be more of an infrastructure than paperwork? Because if they stay paperwork, we will see slower adoption in India than maybe in other parts of the world.
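To illustrate explainability as a core output rather than an afterthought, here is a sketch of an agent API response that carries the decision together with the rules applied and the evidence consulted; the schema, the toy screening rule, and all field names are assumptions for illustration.

# Sketch of an agent API whose response carries its own explanation:
# the decision, the rules applied, and the evidence behind it.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplainedDecision:
    decision: str
    rules_applied: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)

def screen_candidate(years_experience: int) -> ExplainedDecision:
    # Toy screening rule so the explanation has something concrete to report.
    rule = "minimum 2 years of experience"
    evidence = [f"candidate reported {years_experience} years"]
    decision = "advance" if years_experience >= 2 else "send to human review"
    return ExplainedDecision(decision, [rule], evidence)

print(screen_candidate(3))

Returning the rationale in the same payload as the decision means a partner never has to reverse-engineer the system after something breaks, which is Karna's point about making explanation a first-class output.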

Kamesh Shekar

That's a great point. Thank you so much, Karna. We have very few minutes left, and one of our panelists has dedicated their full time to us, so kudos for that. Opening up to the floor: any questions? I think we can take two, given the time frame. Any questions for Karna? Anybody? Yeah.

Hi, good evening. My question is related to small language models, which are becoming increasingly popular within the developer community. For businesses like yours, do you see a profitable path ahead for SLMs, or do we continue depending on these LLMs, which I think will be a race to the bottom?

Karna Chokshi

Yeah, great question. We think about it a lot, and a lot of our customers ask: will you be using an SLM or an LLM? Where we are, I think we all benefit from the flexibility of LLMs, because frankly most companies are deploying their first or second actual large-scale deployment, and it is helpful to leverage the power of the larger models at that stage. Over time you learn what is actually needed, and you can transition from an LLM to an SLM, where you get advantages in latency or cost, depending on what your use case optimizes for. But in the interest of speed of innovation, it's okay to just use an LLM, figure out where the value is coming to your business, and then explore the journey to an SLM, which can give you additional advantages. Thank you.
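One common way to realize the LLM-to-SLM transition Karna outlines is a confidence-gated cascade: route queries to a cheap, domain-tuned small model first and escalate to the large model only when the small model is unsure. A minimal sketch with stubbed models and an assumed confidence threshold:

# Sketch of a confidence-gated cascade: try the cheaper small model first,
# escalate to the large model only when confidence falls below a threshold.

from typing import Tuple

CONFIDENCE_THRESHOLD = 0.8  # assumed; tuned per use case in practice

def small_model(query: str) -> Tuple[str, float]:
    # Stub for a domain-tuned SLM returning (answer, confidence).
    if "refund" in query.lower():
        return ("Refunds are processed in 5-7 working days.", 0.95)
    return ("", 0.2)

def large_model(query: str) -> str:
    # Stub for a general-purpose LLM used only as the fallback.
    return f"[LLM] detailed answer for: {query}"

def answer(query: str) -> str:
    response, confidence = small_model(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return response  # fast, cheap path
    return large_model(query)  # escalate the long tail

print(answer("When will I get my refund?"))
print(answer("Compare the tax treatment of two savings schemes"))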

Kamesh Shekar

Anyone else? Awesome. So thank you. I would now request Sarj to take over.

Moderator

Thank you so much, Kamesh, and thank you to all of our panel members. I think it's been a really, really interesting discussion on where responsible AI is now and where it is headed. I'll now call Mr. Kazim Rizvi, the founding director of The Dialogue, to give the closing remarks for the session. Kazim.

Kazim Rizvi

I think this mic works. Okay, great. Thanks a lot, Sahish, and thank you, Kamesh. Thank you to all those who stayed back until now; I think we are crossing the limit of event fatigue, and I know a lot of us are quite tired and exhausted after too many events. But the last week has been fantastic, and we've had the pleasure and the honor of hosting a few events over it. Specifically on responsible AI, as Fani mentioned at the beginning, The Dialogue and ICOM have developed India's first tool to assess responsible AI readiness, so we urge, encourage, and motivate all of you to look into it.

But thank you, Kamesh, for moderating, and thank you to all our speakers for joining in. It's important that we all work towards building responsible AI practices from the beginning, by design; that's something the tool will encourage, so please have a look at it. All of you, have a good evening for what is left of the AI summit. It's been a fantastic summit, and hopefully all of us got to learn a lot; I did myself. I look forward to seeing you all soon. The Dialogue will be hosting multiple conversations on AI policy, and we encourage you all to join. Until then, have a good evening, enjoy your weekend, and thank you to all our panelists again. Thank you.


Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The moderator opened the session by welcoming the audience and framing the final discussion of the Global AI Summit around the need to make AI safe, responsible, ethical, inclusive and explainable, emphasizing that these goals must be pursued holistically.”

The knowledge base records that the moderator opened the panel by framing the discussion on responsible AI and wrapping up the Global AI Summit, confirming the emphasis on safety, responsibility and ethics [S3] and notes the moderator’s introduction of the responsible AI assessment tool [S2].

Confirmed (high)

“He highlighted that the week’s sessions demonstrated how AI can deliver intended benefits while avoiding unintended harms, and he stressed that governments, innovation hubs, academia and startups each play a critical role in developing safe and ethical AI.”

Stakeholder roles are corroborated by the knowledge base, which states that government, industry, academia and civil society all have important roles in shaping AI standards and outcomes [S66].

Confirmed (high)

“The moderator introduced the RAISE Index – a co‑created, phase‑based assessment framework developed by ICOM and The Dialogue over the past eighteen months.”

The RAISE Index is identified in the knowledge base as a responsible AI assessment tool co-created by ICOM and The Dialogue, confirming its development background [S2].

Confirmed (high)

“He emphasized that the released tool is the ‘first edition’ of a Responsible‑AI readiness framework and will evolve through pilot phases and stakeholder consultation.”

The source explicitly states that the released version is the first edition of an iterative, feedback-driven framework, confirming its “first edition” status and planned evolution [S70].

Confirmed (medium)

“The index quantifies the safety and responsibility impact of AI during both development and deployment, and a QR code displayed on the screen enables participants to test their own AI solutions against the framework.”

A QR code was provided for participants to engage with the tool prototype and submit feedback, confirming its presence in the session [S73].

Additional Context (low)

“Benchmarks must be practical and constitute “living infrastructure” that evolves faster than regulatory cycles, a principle he termed “continuous learning”.”

The knowledge base notes that the framework is designed as an iterative system with ongoing feedback and updates, adding nuance that the “living infrastructure” concept aligns with the described continuous-learning approach [S70].

External Sources (74)
S1
From KW to GW Scaling the Infrastructure of the Global AI Economy — Ankush Sabharwal, Sudeesh VC Nambiar
S3
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S4
Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112 — Kazim Rizvi:I hope I’m audible. Thank you to the chair, thank you to GIGANET and IGF for hosting us today in Kyoto on a …
S5
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Kazim Rizvi- Moderator/Host of the panel discussion This panel discussion on heterogeneous computing and AI infrastruc…
S6
Building the Next Wave of AI_ Responsible Frameworks & Standards — Karna Chokshi, Ankush Sabharwal, Arundhati Bhattacharya, Kaz…
S7
Building the Next Wave of AI_ Responsible Frameworks & Standards — Karna Chokshi introduced the revolutionary concept of “governance as product,” arguing that compliance should be a core …
S8
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S9
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S10
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S12
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S13
Artificial Intelligence & Emerging Tech — Kamesh Shekar, Youth Ambassador at The Internet Society
S14
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comprehensive panel discussion served as the closing session of the Global AI Summit, bringing together enterprise …
S15
Multistakeholder Partnerships for Thriving AI Ecosystems — This discussion focused on the critical role of multi-stakeholder partnerships in developing and deploying responsible A…
S16
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S17
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S18
How to make AI governance fit for purpose? — This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to con…
S19
Do we really need frontier AI for everyday work? — By contrast,small language models (SLMs)and other compact architectures can be excellent when paired with clear scope, c…
S20
Why science metters in global AI governance — Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation…
S21
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — And if we see how these campaigns have actually resulted into the behavioral change in the taxpayer. So these two graphs…
S22
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S23
Democratizing AI Building Trustworthy Systems for Everyone — I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specif…
S24
The strategic imperative of open source AI — As large language models (LLMs) begin to hit scaling plateaus, the frontier of innovation is shifting from raw model siz…
S25
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — A plausible next step is not the emergence of fully autonomous ‘AI diplomats’, but hybrid systems. In these setups, LLMs…
S26
Survive the AI jargon tsunami: Find shelter in your mother tongue — Contextual Engineering (CE) employs structured Data Units (DUs) within a Retrieval-Augmented Generation (RAG) pipeline t…
S27
WS #219 Generative AI Llms in Content Moderation Rights Risks — Marlene Owizniak: And before I open it up to the floor, I just wanted to highlight a few of the key risks that we found,…
S28
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya argues that while AI can be used for tremendous good like speeding up medical research and skilling, it can…
S29
Interim Report: — – Individuals – o Human dignity/value/agency (manipulation, deception, nudging, sentencing) – o Life, safety, security (…
S30
DC-CIV Evolving Regulation and its impact on Core Internet Values | IGF 2023 — The analysis recognizes the need for technical expertise in policy discussions. It emphasizes the importance of individu…
S31
Workshop 11: São Paulo Multistakeholder Guidelines – The Way Forward in Multistakeholder and Multilateral Digital Processes — The relationship between process steps needs better definition, particularly how the results of scoping issues and ident…
S32
US NTIA recommends policy reforms to foster accountability and trustworthiness in AI systems — The NTIA’sAI Accountability Policy Reportadvocates for increased openness in AI systems, independent inspections, and pe…
S33
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S34
Driving Indias AI Future Growth Innovation and Impact — The professor highlighted AI’s potential impact on employment as a critical concern that could undermine other AI initia…
S35
Secure Finance Risk-Based AI Policy for the Banking Sector — Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community,…
S36
Agentic AI in Focus Opportunities Risks and Governance — You conveniently sat the two cyber companies, cybersecurity companies, next to each other. So my name is Sam Kaplan. I’m…
S37
Building the Next Wave of AI_ Responsible Frameworks & Standards — “Trust is our number one value”[54]. “But trust is definitely number one”[55]. “The first is trust”[56]. “Because the tr…
S38
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion revealed remarkably high consensus across diverse stakeholders on the fundamental need for AI standards, …
S39
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S40
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S41
Setting the Rules_ Global AI Standards for Growth and Governance — Summary:All speakers emphasize the importance of global cooperation and inclusive participation from diverse stakeholder…
S42
Secure Finance Risk-Based AI Policy for the Banking Sector — This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automat…
S43
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automat…
S44
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S45
Building the Next Wave of AI_ Responsible Frameworks & Standards — The Moderator emphasizes that safety benchmarks must emerge from deployment reality rather than just research labs. Gove…
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — “we need a stable regulatory regime that makes that possible and puts humans at the center rather than just seeking to a…
S47
Panel Discussion Data Sovereignty India AI Impact Summit — Arguments:Government must establish sovereign guardrails and provide long-term policy stability for private investment T…
S48
Understanding the language of modern AI — Large Language Models (LLMs)are trained on vast datasets containing billions or trillions of words from across the inter…
S49
Do we really need frontier AI for everyday work? — By contrast,small language models (SLMs)and other compact architectures can be excellent when paired with clear scope, c…
S50
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists expressed excitement about AI’s capabilities and potentia…
S51
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S52
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — The tone is formal and corporate, maintaining a consistently optimistic and forward-looking perspective throughout. Toki…
S53
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S54
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S55
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S56
Driving Indias AI Future Growth Innovation and Impact — Government setting pace with startups and MSMEs following, while large enterprises struggle
S57
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Regulatory sandboxes are being recognized as effective solutions for navigating the complexities of data governance. Sta…
S58
Information Society in Times of Risk — The discussion maintained a consistently academic and collaborative tone throughout. It was professional and research-fo…
S59
WS #199 Ensuring the online coexistence of human rights & child safety — The tone of the discussion was generally collaborative and solution-oriented, with panelists acknowledging the complexit…
S60
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S61
WS #103 Aligning strategies, protecting critical infrastructure — The tone was largely collaborative and solution-oriented. Speakers built on each other’s points and emphasized the need …
S62
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S63
Democratizing AI Building Trustworthy Systems for Everyone — The session was moderated by Justin Carsten, who opened by noting the significance of the summit and introducing each pa…
S64
Delegated decisions, amplified risks: Charting a secure future for agentic AI — – **Moderator**: Role mentioned as moderator of the session
S65
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — AI systems have the capacity to misalign with human expectations and the expectations of specific communities. Therefore…
S66
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S67
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — Juan Manuel Santos: Distinguished co-chairs, excellencies, ladies and gentlemen, like my fellow elder, Ellen Johnson, …
S68
Safe and Responsible AI at Scale Practical Pathways — Rohit Bardawaj from MOSPI (Ministry of Statistics and Programme Implementation) challenged the panel’s fundamental assum…
S69
MedTech and AI Innovations in Public Health Systems — A critical insight emerged from Mr. Shiv Kumar’s opening remarks: healthcare solutions are predominantly seeking problem…
S70
https://dig.watch/event/india-ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — Most frameworks are static standards. ICOM believes in creating systems with ongoing feedback, tracking system performan…
S71
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Owen Larter: Fantastic. Hello, is this on? Can people hear me? Excellent, thank you. Good morning, everyone. My name is Ow…
S72
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S73
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Participants to provide feedback on the session and tool prototype via the provided QR code
S74
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Moderator
3 arguments, 45 words per minute, 1115 words, 1463 seconds
Argument 1
Co‑created, practical, living safety benchmarks are essential; they must emerge from real‑world deployment rather than isolated research labs.
EXPLANATION
The moderator stresses that safety benchmarks are most effective when they are derived from institutions that actually build, deploy, and maintain AI at scale, rather than being created in isolation in research labs. Co‑creation with industry and academia is also highlighted as a key element.
EVIDENCE
The moderator states that benchmarks must emerge from deployment reality and not just research labs, noting that safety benchmarks fail when developed in isolation and are most effective when coming from institutions building, deploying, and maintaining AI at scale [8-10]. He also adds that the second most important element is to ensure these safety benchmarks are co-created with industry and academia [12-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s claim is corroborated by S3, which emphasizes that safety benchmarks should be derived from deployment reality rather than isolated research labs, and notes the role of government innovation hubs in surfacing trust gaps.
MAJOR DISCUSSION POINT
Need for real‑world, co‑created safety benchmarks
AGREED WITH
Karna Chokshi, Arundhati Bhattacharya, Kamesh Shekar
Argument 2
The RAISE Index exemplifies India’s contribution, harmonising multiple global frameworks (EU AI Act, NIST, Singapore, UK) into a single, portable assessment tool.
EXPLANATION
The moderator introduces the RAISE Index as a unique Indian framework that quantifies AI safety and responsibility, and explains that it aligns and integrates requirements from major international AI regulations into one portable assessment.
EVIDENCE
He describes the RAISE Index as the first of its kind in quantifying AI impact on safety and responsibility, providing a QR code for access and testing against AI solutions [13-14]. Later he notes that the index harmonises requirements across leading global frameworks such as the EU AI Act, NIST AI RMF, Singapore Guidelines, and UK AI Assurance, bringing them together into a single portable assessment [39-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S3 details that the RAISE Index harmonises requirements across the EU AI Act, NIST AI RMF, Singapore Guidelines, and UK AI Assurance, creating a portable assessment; S2 also introduces the RAISE Index as India’s first responsible AI assessment tool.
MAJOR DISCUSSION POINT
India’s RAISE Index as a unified global AI benchmark
AGREED WITH
Arundhati Bhattacharya, Kazim Rizvi
Argument 3
Benchmarks must evolve continuously to keep pace with rapid AI capability growth; static checklists quickly become obsolete.
EXPLANATION
The moderator argues that AI capabilities evolve faster than regulatory cycles, making static benchmarks ineffective; therefore, benchmarks should be living infrastructure that are continuously updated.
EVIDENCE
He states that static benchmarks become obsolete as AI capabilities evolve faster than regulatory cycles, and that hubs must institutionalise continuous benchmark evolution, describing the methodology as phase-based and adaptable to company maturity stages [24-27]. He further emphasizes that most frameworks are static standards, whereas ICOM aims to create iterative, continuously evolving benchmarks [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S3 argues that static benchmarks become obsolete as AI capabilities outpace regulatory cycles, advocating for living, continuously updated benchmarks; S2 reinforces this by describing the RAISE Index as an iterative, phase‑based framework.
MAJOR DISCUSSION POINT
Continuous evolution of AI safety benchmarks
Arundhati Bhattacharya
3 arguments, 111 words per minute, 929 words, 498 seconds
Argument 1
Salesforce established an Office for Humane and Ethical Use of Technology in 2014, embedding trust checks into every product before market release; a global compact is needed to curb misuse by bad actors.
EXPLANATION
Arundhati explains that Salesforce created a dedicated office in 2014 to review every product for humane and ethical considerations before launch, and she calls for a worldwide agreement to prevent malicious use of AI.
EVIDENCE
She notes that Salesforce began its AI journey in 2014 and set up an Office for the Humane and Ethical Use of Technology that reviews every product and process before market release [58-61]. She then stresses the need for a global compact, arguing that only through transparent information exchange and collective agreement can bad actors be stopped [62-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S3 and S2 report that Salesforce set up an Office for Humane and Ethical Use of Technology in 2014 to review every product before launch, illustrating proactive organisational ethics.
MAJOR DISCUSSION POINT
Organisational ethics office and need for global AI compact
AGREED WITH
Moderator, Karna Chokshi, Kamesh Shekar
Argument 2
Trust is Salesforce’s top corporate value; a dedicated “Trust Layer” ensures data security, mitigates bias, and prevents hallucinations, especially at scale.
EXPLANATION
Arundhati highlights that trust is the foremost value at Salesforce and describes a “Trust Layer” that safeguards data, reduces bias, and avoids hallucinations in AI outputs, particularly for large‑scale applications.
EVIDENCE
She lists trust as the number one of Salesforce’s five core values and states that the company is number one in trust [112-118]. She explains that the Trust Layer performs actions to ensure data does not leave the system, prevents toxicity, bias, and hallucinations, especially as larger data sets increase hallucination risk [129-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S3 notes that trust is Salesforce’s number‑one core value and describes a Trust Layer that safeguards data, mitigates bias, toxicity and hallucinations, particularly as dataset size grows.
MAJOR DISCUSSION POINT
Trust Layer as a safeguard for responsible AI
AGREED WITH
Ankush Sabharwal, Kamesh Shekar
Argument 3
Embedding responsible AI as a product value proposition creates market differentiation and drives adoption of trustworthy solutions.
EXPLANATION
Echoing the moderator’s framing, this argument holds that positioning responsible AI as a core value proposition of a product can differentiate it in the market and encourage customers to adopt trustworthy AI solutions.
EVIDENCE
The moderator remarks that productisation of responsible AI from a value-proposition perspective is a way to embed responsible AI into the product and make it a selling point [103-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S13 captures Kamesh Shekar’s point that productising responsible AI as a value proposition differentiates offerings and encourages adoption of trustworthy AI.
MAJOR DISCUSSION POINT
Responsible AI as a market differentiator
AGREED WITH
Moderator, Karna Chokshi, Kamesh Shekar
Kamesh Shekar
1 argument, 162 words per minute, 768 words, 283 seconds
Argument 1
Embedding responsible AI as a product value proposition creates market differentiation and drives adoption of trustworthy solutions.
EXPLANATION
Kamesh points out that framing responsible AI as a value proposition within a product can serve as a competitive advantage and encourage broader adoption of trustworthy AI offerings.
EVIDENCE
He comments that productisation of responsible AI from a value-proposition perspective allows responsible AI to be embedded as a selling point for products [103-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S13 captures Kamesh Shekar’s point that productising responsible AI as a value proposition differentiates offerings and encourages adoption of trustworthy AI.
MAJOR DISCUSSION POINT
Responsible AI as a market differentiator
AGREED WITH
Moderator, Karna Chokshi, Arundhati Bhattacharya
Karna Chokshi
4 arguments, 173 words per minute, 1177 words, 407 seconds
Argument 1
Governance should be baked into the core AI product (guardrails at input, reasoning, tool‑calling, and output stages) rather than a separate, lengthy PDF, enabling mass adoption.
EXPLANATION
Karna argues that governance mechanisms need to be integrated directly into AI products as built‑in guardrails throughout the model’s lifecycle, instead of being presented as extensive documentation, to facilitate widespread use.
EVIDENCE
She explains that governance should be part of the core product, with guardrails at input, reasoning, tool-calling, and output stages, rather than a 200-page PDF that companies would struggle with [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 and S3 both highlight Karna’s argument that governance must be integrated directly into AI products as built‑in guardrails, avoiding cumbersome documentation.
MAJOR DISCUSSION POINT
Embedded governance for scalable AI adoption
AGREED WITH
Moderator, Arundhati Bhattacharya, Kamesh Shekar
Argument 2
Human‑in‑the‑loop is a first‑class feature, not a failure point; systems must know when to defer to human operators.
EXPLANATION
Karna emphasizes that keeping a human in the loop should be treated as a core capability, ensuring that AI systems can hand over control to humans when necessary rather than being seen as a weakness.
EVIDENCE
She states that human-in-the-loop is a first-class feature, not a failure point, and that systems should be designed to transition from autonomous to assisted or human-controlled modes appropriately [99-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 and S3 reinforce that human‑in‑the‑loop should be treated as a primary capability, with systems designed to transition to human control when appropriate.
MAJOR DISCUSSION POINT
Human‑in‑the‑loop as essential design element
Argument 3
Converting regulatory compliance into reusable APIs lets enterprises plug‑in required safeguards, turning compliance into infrastructure rather than paperwork.
EXPLANATION
Karna proposes that compliance requirements be expressed as modular APIs, allowing enterprises to integrate necessary safeguards programmatically, thereby making compliance an infrastructural component rather than a bureaucratic burden.
EVIDENCE
She describes converting compliance into APIs that can be invoked by agents, citing examples such as SEBI guidelines being offered as APIs for mutual fund voice agents, and suggests open-sourcing these APIs to create a shared validation layer [155-160].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 and S3 describe Karna’s proposal to express compliance as modular APIs, citing examples such as SEBI compliance APIs for mutual‑fund voice agents.
MAJOR DISCUSSION POINT
API‑based compliance for AI systems
Argument 4
Initial deployments can leverage large language models for speed; over time, organizations can transition to smaller, domain‑specific models to gain latency and cost benefits.
EXPLANATION
Karna notes that early‑stage AI projects often use large language models to accelerate innovation, but as understanding deepens, firms can shift to smaller, specialized models that offer lower latency and cost.
EVIDENCE
She explains that many companies start with LLMs for rapid innovation and later transition to SLMs for advantages such as reduced latency and cost, depending on use-case requirements [191-193].
MAJOR DISCUSSION POINT
Strategic migration from LLMs to SLMs
AGREED WITH
Ankush Sabharwal
Ankush Sabharwal
3 arguments, 170 words per minute, 971 words, 342 seconds
Argument 1
Clients, especially in sovereign or defense contexts, demand full control over data and on‑premise AI appliances; VadaGPT provides a petaflop‑scale, on‑site supercomputer for such needs.
EXPLANATION
Ankush describes how customers in highly regulated sectors require complete data sovereignty, leading to the development of the VadaGPT on‑premise appliance that delivers petaflop‑level compute within the client’s premises.
EVIDENCE
He outlines that sovereign AI clients need total data control, prompting the launch of VadaGPT – a desktop AI appliance delivering around one petaflop of compute and housing a half-billion-parameter model, suitable for on-premise, edge, and defense use cases [108-110].
MAJOR DISCUSSION POINT
On‑premise AI hardware for data sovereignty
Argument 2
Trust, security, and purpose‑driven AI are paramount; high‑accuracy (99.9 %) models, observability, and strict data handling are required for critical applications.
EXPLANATION
Ankush stresses that for mission‑critical sectors such as defense, finance, and public services, AI solutions must achieve near‑perfect accuracy, be highly observable, and enforce rigorous data security and purpose‑driven design.
EVIDENCE
He notes that clients like IRCTC, LIC, NPCI, and the Army expect 99.9 % accuracy, and that the company adopts a risk-averse stance, emphasizing trust, observability, and purpose-led AI development [141-147].
MAJOR DISCUSSION POINT
High‑accuracy, trustworthy AI for critical domains
AGREED WITH
Arundhati Bhattacharya, Kamesh Shekar
Argument 3
A composite AI approach—using classic NLP for routine tasks and generative AI only where it adds clear value—optimises accuracy and resource use.
EXPLANATION
Ankush explains that most interactions still rely on traditional NLP pipelines, reserving generative AI for cases where it truly adds value, thereby balancing performance, cost, and reliability.
EVIDENCE
He reports that 80-90 % of interactions are handled by classic NLP, while generative AI is employed selectively for more complex queries, emphasizing a composite AI strategy that prioritises accuracy and efficiency [147-152].
MAJOR DISCUSSION POINT
Hybrid use of classic NLP and generative AI
Kazim Rizvi
2 arguments, 87 words per minute, 279 words, 192 seconds
Argument 1
The dialogue and ICOM urge stakeholders to adopt the RAISE Index tool to assess responsible AI readiness and embed responsible practices by design.
EXPLANATION
Kazim calls on participants to use the newly released RAISE Index, positioning it as a practical instrument for evaluating and embedding responsible AI practices from the outset.
EVIDENCE
He mentions that the Dialogue and ICOM have developed India’s first tool to assess Responsible AI readiness and encourages everyone to explore it [202-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 and S3 mention the call to adopt the RAISE Index as a practical instrument for evaluating and embedding responsible AI from the outset.
MAJOR DISCUSSION POINT
Call to adopt the RAISE Index
AGREED WITH
Moderator, Karna Chokshi
Argument 2
Building responsible AI from the outset is a collective responsibility across technologists, policymakers, think‑tanks, and startups.
EXPLANATION
Kazim emphasizes that creating responsible AI requires coordinated effort from all ecosystem actors, highlighting the shared duty to embed ethical considerations early in the development cycle.
EVIDENCE
He states that it is important for everyone to work towards building Responsible AI practices from the beginning by design, underscoring a collective responsibility across technologists, policymakers, think-tanks, and startups [206-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S15 discusses multistakeholder partnerships as essential for thriving AI ecosystems, and S16 emphasizes the need for government‑led frameworks, both supporting the collective‑responsibility view.
MAJOR DISCUSSION POINT
Shared responsibility for responsible AI
AGREED WITH
Arundhati Bhattacharya, Moderator
Agreements
Agreement Points
Safety benchmarks should be co‑created, practical, and continuously evolving, emerging from real‑world deployment rather than isolated research labs.
Speakers: Moderator, Karna Chokshi, Kazim Rizvi
Co‑created, practical, living safety benchmarks are essential; they must emerge from real‑world deployment rather than isolated research labs. Governance should be baked into the core AI product (guardrails at input, reasoning, tool‑calling, and output stages) rather than a separate, lengthy PDF, enabling mass adoption. The dialogue and ICOM urge stakeholders to adopt the RAISE Index tool to assess responsible AI readiness and embed responsible practices by design.
All three speakers stress that effective AI safety measures must be grounded in actual deployment contexts, integrated directly into products, and kept up-to-date through iterative frameworks like the RAISE Index [8-12][24-27][45-48][96-99][155-160][202-204].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes calls for evidence-based AI governance and real-world testing highlighted in discussions on the limits of scientific certainty in AI risk assessment [S20] and aligns with the emphasis on measurable standards and benchmarking in global AI standards initiatives [S38, S41].
Trust is a foundational value for AI systems and must be embedded through dedicated mechanisms such as a Trust Layer or high‑accuracy, purpose‑driven models.
Speakers: Arundhati Bhattacharya, Ankush Sabharwal, Kamesh Shekar
Trust is Salesforce’s top corporate value; a dedicated “Trust Layer” ensures data security, mitigates bias, and prevents hallucinations, especially at scale. Trust, security, and purpose‑driven AI are paramount; high‑accuracy (99.9%) models, observability, and strict data handling are required for critical applications. Embedding responsible AI as a product value proposition creates market differentiation and drives adoption of trustworthy solutions.
The speakers converge on the centrality of trust, describing concrete implementations-from Salesforce’s Trust Layer to mission-critical high-accuracy models and product-level value propositions-that safeguard data, mitigate bias, and assure reliability [112-118][129-136][141-147][103-106].
POLICY CONTEXT (KNOWLEDGE BASE)
The centrality of trust is reinforced by multiple policy statements, including the NTIA’s recommendation for accountability and trustworthiness in AI systems [S32], industry emphasis on a dedicated “trust layer” [S37], and broader analyses of trust as a prerequisite for AI adoption [S23, S33].
Global collaboration and harmonisation of AI standards are essential, requiring a compact or collective responsibility across stakeholders.
Speakers: Arundhati Bhattacharya, Moderator, Kazim Rizvi
Salesforce established an Office for Humane and Ethical Use of Technology in 2014, embedding trust checks into every product before market release; a global compact is needed to curb misuse by bad actors. The RAISE Index exemplifies India’s contribution, harmonising multiple global frameworks (EU AI Act, NIST, Singapore, UK) into a single, portable assessment tool. Building responsible AI from the outset is a collective responsibility across technologists, policymakers, think‑tanks, and startups.
All three emphasize the need for international cooperation: a global compact, a harmonised benchmark (RAISE Index), and shared responsibility among all ecosystem actors to ensure AI is used responsibly [62-68][39-42][206-208].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on worldwide standards mirrors the outcomes of multistakeholder processes at the IGF and UNESCO, which stress inclusive, coordinated governance frameworks for AI [S38, S39, S41] and the need for technical expertise in policy formulation [S30].
Embedding governance and ethical checks directly into AI product lifecycles is crucial for scalable, responsible deployment.
Speakers: Moderator, Karna Chokshi, Arundhati Bhattacharya, Kamesh Shekar
Co‑created, practical, living safety benchmarks are essential; they must emerge from real‑world deployment rather than isolated research labs. Governance should be baked into the core AI product (guardrails at input, reasoning, tool‑calling, and output stages) rather than a separate, lengthy PDF, enabling mass adoption. Salesforce established an Office for Humane and Ethical Use of Technology in 2014, embedding trust checks into every product before market release; a global compact is needed to curb misuse by bad actors. Embedding responsible AI as a product value proposition creates market differentiation and drives adoption of trustworthy solutions.
The speakers agree that responsible AI governance must be integrated into the product itself-through co-created benchmarks, built-in guardrails, dedicated ethical offices, and value-proposition framing-rather than treated as external documentation [8-12][96-99][58-61][103-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding ethical oversight aligns with NTIA’s push for lifecycle-wide accountability measures [S32] and broader calls for effective governance structures that integrate ethical reviews throughout development and deployment phases [S40, S22].
AI development can start with large language models for rapid innovation and later transition to smaller, domain‑specific models to improve latency and cost efficiency.
Speakers: Karna Chokshi, Ankush Sabharwal
Initial deployments can leverage large language models for speed; over time, organizations can transition to smaller, domain‑specific models to gain latency and cost benefits. A composite AI approach — using classic NLP for routine tasks and generative AI only where it adds clear value — optimises accuracy and resource use.
Both speakers highlight a staged AI strategy: begin with LLMs to accelerate development, then adopt SLMs or classic NLP where appropriate to balance performance, cost, and reliability [191-193][147-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic shifts from monolithic LLMs to specialized models are discussed in analyses of the AI innovation frontier, which note a move from size-driven progress to integration and efficiency considerations [S24] and describe this as a structural shift in AI system design [S42].
Similar Viewpoints
Both stress that safety and governance mechanisms need to be practical, integrated into products, and derived from real deployment contexts rather than abstract checklists [8-12][96-99].
Speakers: Moderator, Karna Chokshi
Co‑created, practical, living safety benchmarks are essential; they must emerge from real‑world deployment rather than isolated research labs. Governance should be baked into the core AI product (guardrails at input, reasoning, tool‑calling, and output stages) rather than a separate, lengthy PDF, enabling mass adoption.
Both underline trust as a non‑negotiable pillar for AI, requiring robust security, accuracy, and safeguards against misuse [112-118][129-136][141-147].
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top corporate value; a dedicated “Trust Layer” ensures data security, mitigates bias, and prevents hallucinations, especially at scale. Trust, security, and purpose‑driven AI are paramount; high‑accuracy (99.9%) models, observability, and strict data handling are required for critical applications.
All three advocate for international cooperation and shared standards to ensure AI is used responsibly worldwide [62-68][39-42][206-208].
Speakers: Arundhati Bhattacharya, Moderator, Kazim Rizvi
Salesforce established an Office for Humane and Ethical Use of Technology in 2014, embedding trust checks into every product before market release; a global compact is needed to curb misuse by bad actors. The RAISE Index exemplifies India’s contribution, harmonising multiple global frameworks (EU AI Act, NIST, Singapore, UK) into a single, portable assessment tool. Building responsible AI from the outset is a collective responsibility across technologists, policymakers, think‑tanks, and startups.
Unexpected Consensus
Trust as a foundational principle is emphasized equally by a large multinational corporation (Salesforce) and a startup serving mission‑critical sectors (defense, finance).
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top corporate value; a dedicated “Trust Layer” ensures data security, mitigates bias, and prevents hallucinations, especially at scale. Trust, security, and purpose‑driven AI are paramount; high‑accuracy (99.9%) models, observability, and strict data handling are required for critical applications.
Despite operating in different contexts-enterprise SaaS versus sovereign/defense AI-both stress that trust mechanisms (trust layer, high accuracy, observability) are essential, revealing a cross-sector convergence on trust requirements [112-118][129-136][141-147].
POLICY CONTEXT (KNOWLEDGE BASE)
The cross-sector emphasis on trust reflects industry-wide observations that trust is the primary driver of AI adoption and business success across both large enterprises and niche mission-critical providers [S33].
Overall Assessment

The panel shows strong convergence on four main themes: (1) safety benchmarks must be co‑created, practical, and continuously updated; (2) trust is a core value requiring dedicated technical layers and high‑accuracy models; (3) global collaboration and harmonised standards are essential; (4) governance should be embedded directly into AI products, with a pragmatic shift from large to smaller models as maturity grows.

High consensus across government, academia, and industry participants, indicating a shared commitment to responsible AI that blends regulatory alignment, technical safeguards, and market‑driven productisation. This alignment suggests that future initiatives are likely to build on the RAISE Index, trust‑centric designs, and API‑based compliance to advance responsible AI at scale.

Differences
Different Viewpoints
Method for achieving trustworthy AI governance and compliance
Speakers: Karna Chokshi, Ankush Sabharwal
Governance should be baked into the core AI product (guardrails at input, reasoning, tool-calling, and output stages) rather than a separate, lengthy PDF, enabling mass adoption. (Karna) [96-99] Clients, especially in sovereign or defense contexts, demand full control over data and on-premise AI appliances; VadaGPT provides a petaflop-scale, on-site supercomputer for such needs. (Ankush) [108-110]
Karna advocates embedding governance directly into AI products and exposing compliance as reusable APIs to streamline adoption, while Ankush stresses the need for on-premise, sovereign hardware solutions to retain full data control, indicating divergent technical routes to the same trust goal. [96-99][108-110]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over governance methods are framed by NTIA’s recommendation for openness, independent inspections, and enforceable accountability mechanisms [S32] as well as by proposals for a global AI compact to coordinate responsible practices [S28].
Strategic use of large language models (LLMs) versus classic NLP/composite AI
Speakers: Karna Chokshi, Ankush Sabharwal
Initial deployments can leverage large language models for speed; over time, organizations can transition to smaller, domain-specific models to gain latency and cost benefits. (Karna) [191-193] 80-90 % of interactions are handled by classic NLP; generative AI is used selectively for complex queries, emphasizing a composite AI approach that prioritises accuracy and efficiency. (Ankush) [147-152]
Karna suggests a roadmap that starts with LLMs and later shifts to smaller models, whereas Ankush argues that the majority of use-cases remain best served by classic NLP, reserving generative AI for specific high-value tasks, reflecting different views on the role and timing of LLM adoption. [191-193][147-152]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between LLM-centric approaches and traditional NLP is highlighted in discussions about the scaling plateau of LLMs and the subsequent focus on integration and workflow embedding [S24], as well as in analyses of AI’s structural evolution toward adaptive, domain-specific systems [S42].
Preferred solution for preventing AI misuse: global policy compact versus technical benchmarks
Speakers: Arundhati Bhattacharya, Moderator
A global compact is needed to curb misuse by bad actors; transparent information exchange and collective agreement are essential. (Arundhati) [62-68] The RAISE Index exemplifies India’s contribution, harmonising multiple global frameworks into a single, portable assessment tool for responsible AI. (Moderator) [13-14][39-42]
Arundhati calls for an overarching international agreement to stop malicious AI use, while the Moderator promotes a technical, standards-based tool (RAISE Index) as the primary means to ensure responsible AI, indicating a policy-versus-technical approach divergence. [62-68][13-14][39-42]
POLICY CONTEXT (KNOWLEDGE BASE)
This dichotomy mirrors policy literature that advocates a global compact to deter malicious actors [S28] while also emphasizing the role of technical measurement and benchmarking in establishing trustworthy standards [S38].
Implementation of trust mechanisms in AI systems
Speakers: Arundhati Bhattacharya, Ankush Sabharwal, Karna Chokshi
Trust is Salesforce’s top corporate value; a dedicated “Trust Layer” ensures data security and bias mitigation, and prevents hallucinations at scale. (Arundhati) [112-118][129-136] Trust, security, and purpose-driven AI are paramount; high-accuracy (99.9 %) models, observability, and strict data handling are required for critical applications. (Ankush) [141-147] Customer data should not be used by default to train LLMs; it should be an optional add-on, emphasizing default-privacy settings. (Karna) [165-168]
While all three speakers prioritize trust, Arundhati emphasizes a built-in Trust Layer, Ankush focuses on high accuracy and observability, and Karna stresses default-privacy configurations, revealing differing technical emphases for achieving trustworthy AI. [112-118][129-136][141-147][165-168]
POLICY CONTEXT (KNOWLEDGE BASE)
Implementation challenges are addressed in sources that describe concrete trust-layer architectures [S37], the necessity of high-accuracy, purpose-driven models for reliability [S23], and NTIA’s call for independent verification of trust claims [S32].
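A rough, hypothetical sketch of the trust mechanisms compared here (PII masking before the model call, plus customer-data training disabled by default) might look like the following; it illustrates the concepts only and is not Salesforce's actual Trust Layer or any panelist's product.

```python
# Illustrative trust wrapper: mask obvious PII before the model sees it, and
# keep "use customer data for training" off unless explicitly opted in.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrustConfig:
    mask_pii: bool = True
    use_customer_data_for_training: bool = False   # privacy-preserving default

def mask_pii(text: str) -> str:
    """Redact email addresses and 10-digit phone numbers (toy patterns)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"\b\d{10}\b", "[PHONE]", text)

def trusted_call(prompt: str, model: Callable[[str], str],
                 cfg: TrustConfig = TrustConfig()) -> str:
    """Route a prompt through the trust checks before and after the model."""
    cleaned = mask_pii(prompt) if cfg.mask_pii else prompt
    response = model(cleaned)
    if cfg.use_customer_data_for_training:
        pass  # only here, with explicit opt-in, would the prompt be logged
    return response

if __name__ == "__main__":
    echo_model = lambda p: f"[model saw: {p}]"
    print(trusted_call("My email is a.user@example.com", echo_model))
```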
Unexpected Differences
Open‑source API‑based compliance versus closed, sovereign on‑premise solutions
Speakers: Karna Chokshi, Ankush Sabharwal
Converting regulatory compliance into reusable APIs lets enterprises plug-in required safeguards, turning compliance into infrastructure rather than paperwork. (Karna) [155-160] Clients, especially in sovereign or defense contexts, demand full control over data and on-premise AI appliances; VadaGPT provides a petaflop-scale, on-site supercomputer for such needs. (Ankush) [108-110]
Karna advocates open, modular API solutions for compliance, promoting shared, transparent standards, while Ankush insists on closed, on-premise hardware to guarantee data sovereignty, an unexpected clash between openness and strict control. [155-160][108-110]
POLICY CONTEXT (KNOWLEDGE BASE)
The trade-off between open-source, API-driven compliance frameworks and closed, sovereign deployments is discussed in the strategic imperative of open-source AI for broader integration [S24] and NTIA’s emphasis on openness and transparency as pillars of trustworthy AI governance [S32].
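Karna's "compliance as infrastructure" idea can be pictured as a small, machine-readable check that other systems call over an API; the sketch below is a hypothetical illustration, with the framework name, rule names, and Deployment fields all invented for the example.

```python
# Illustrative compliance-as-an-API: evaluate a deployment against a named
# rule set and return a structured report that other services can consume.
from dataclasses import dataclass

@dataclass
class Deployment:
    has_human_in_the_loop: bool
    logs_decisions: bool
    discloses_ai_use: bool

# Hypothetical rule set; a real service would encode obligations from the
# frameworks the RAISE Index harmonises (EU AI Act, NIST, Singapore, UK).
RULES = {
    "example_high_risk_profile": [
        ("human oversight", lambda d: d.has_human_in_the_loop),
        ("record keeping", lambda d: d.logs_decisions),
        ("transparency", lambda d: d.discloses_ai_use),
    ],
}

def check_compliance(deployment: Deployment, framework: str) -> dict:
    """Return a plug-and-play compliance report for the given framework."""
    results = {name: rule(deployment) for name, rule in RULES[framework]}
    return {"framework": framework,
            "passed": all(results.values()),
            "checks": results}

if __name__ == "__main__":
    report = check_compliance(Deployment(True, True, False),
                              "example_high_risk_profile")
    print(report)   # passed=False because the transparency check fails
```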
Overall Assessment

The panel largely converged on the importance of trust, responsible AI, and the need for practical benchmarks, but diverged on the technical pathways to achieve these goals—ranging from product‑embedded guardrails and API‑based compliance to on‑premise sovereign hardware, and from reliance on large language models to classic NLP. A notable policy‑versus‑technical split emerged regarding global compacts versus standards tools. These disagreements highlight the challenge of aligning diverse stakeholder priorities (start‑ups, large enterprises, governments) and suggest that a multi‑layered approach combining policy coordination, open standards, and flexible technical implementations will be needed to advance responsible AI.

Moderate to high: while consensus exists on overarching objectives (trust, safety, responsible AI), the varied preferred mechanisms indicate potential friction in implementation, requiring careful coordination to avoid fragmented solutions.

Partial Agreements
All agree that responsible AI must be integrated into products and benchmarks should be practical and real‑world oriented, but differ on the primary mechanism: the Moderator stresses co‑creation and evolving standards, Karna focuses on built‑in guardrails, and Kamesh highlights market‑driven productisation. [8-10][12-13][96-99][103-106]
Speakers: Moderator, Karna Chokshi, Kamesh Shekar
Co-created, practical, living safety benchmarks are essential; they must emerge from real-world deployment rather than isolated research labs. (Moderator) [8-10][12-13] Governance should be baked into the core AI product (guardrails at input, reasoning, tool-calling, and output stages) rather than a separate, lengthy PDF, enabling mass adoption. (Karna) [96-99] Embedding responsible AI as a product value proposition creates market differentiation and drives adoption of trustworthy solutions. (Kamesh) [103-106]
Both emphasize trust as essential, yet Arundhati proposes a systematic Trust Layer within the product, whereas Ankush stresses achieving trust through ultra‑high accuracy, observability, and stringent data controls. [112-118][129-136][141-147]
Speakers: Arundhati Bhattacharya, Ankush Sabharwal
Trust is Salesforce’s top corporate value; a dedicated “Trust Layer” ensures data security and bias mitigation, and prevents hallucinations at scale. (Arundhati) [112-118][129-136] Trust, security, and purpose-driven AI are paramount; high-accuracy (99.9 %) models, observability, and strict data handling are required for critical applications. (Ankush) [141-147]
Takeaways
Key takeaways
Responsible AI benchmarks must be co‑created with industry, academia and government, and derived from real‑world deployment rather than isolated research labs.
India’s RAISE Index consolidates multiple global AI governance frameworks (EU AI Act, NIST, Singapore, UK) into a single, portable, living assessment tool that will be continuously updated.
Trust is a foundational corporate value; Salesforce’s “Office for Humane and Ethical Use of Technology” and its Trust Layer illustrate embedding trust checks (bias, hallucination, data security) into every product before release.
Embedding governance, compliance and human‑in‑the‑loop mechanisms directly into AI products (guardrails at input, reasoning, tool‑calling, and output) is essential for mass adoption.
Transforming regulatory compliance into reusable APIs turns compliance from paperwork into infrastructure, enabling plug‑and‑play safeguards.
Data sovereignty and on‑premise AI appliances (e.g., VadaGPT) are critical for high‑stakes sectors such as defense, requiring full control, observability and ultra‑high accuracy.
A pragmatic model‑selection strategy: start with large language models for speed, then migrate to smaller, domain‑specific models (SLMs) for latency, cost and reliability; combine classic NLP for routine tasks with generative AI only where it adds value.
A collective, global compact is needed to curb misuse of AI; no single entity can achieve this alone.
Stakeholders are urged to adopt the RAISE Index, test their AI systems against it, and contribute to its iterative evolution.
Resolutions and action items
Stakeholders (governments, startups, enterprises, academia) are encouraged to scan the QR code and use the RAISE Index to assess the responsible‑AI readiness of their solutions.
The RAISE Index will be released as a first edition and will undergo continuous evolution through pilot phases, stakeholder consultation, and iterative updates.
Enterprises are advised to embed trust layers and governance guardrails directly into AI product architectures rather than treating them as separate checklists.
Compliance requirements should be exposed as open‑source, reusable APIs that can be plugged into AI systems.
Organizations handling sovereign or defense data are recommended to consider on‑premise AI appliances (e.g., VadaGPT) to meet control and security demands.
Companies are asked to adopt a “purpose‑led” AI innovation approach: define the problem first, then select the appropriate model (LLM or SLM) and ensure high accuracy before scaling.
Unresolved issues
How a truly global compact on AI misuse can be operationalized and enforced across jurisdictions remains undefined.
Standardization of compliance APIs across industries and regulatory regimes has not been finalized.
The long‑term economic viability and market dynamics of small language models (SLMs) versus large language models (LLMs) were discussed but not conclusively resolved.
Specific mechanisms for continuous stakeholder governance of the RAISE Index (e.g., governance bodies, funding, timelines) were not detailed.
Balancing rapid startup innovation cycles with the need for extensive trust‑layer development continues to be a tension point.
Suggested compromises
Leverage LLMs for early‑stage deployments to accelerate innovation, then transition to SLMs for cost, latency and domain‑specific performance improvements.
Make compliance APIs optional add‑ons with privacy‑sensitive defaults turned off (e.g., use of customer data for model training), allowing enterprises to opt in as needed while avoiding over‑restriction.
Combine classic NLP pipelines for high‑volume, low‑risk interactions with generative AI only for complex, high‑value queries, achieving a balance between accuracy and innovation.
Offer both cloud‑native and on‑premise AI solutions, letting customers choose the level of data control and trust required for their use case.
Thought Provoking Comments
Benchmarks must emerge from deployment reality, not just research labs. The most effective ones come from institutions building, deploying, and maintaining AI at scale.
Highlights the limitation of purely academic benchmarks and stresses the need for real‑world, operational data to create meaningful safety standards.
Set the thematic foundation for the panel, prompting later speakers to discuss concrete tools (e.g., Telangana Data Exchange, RAISE Index) that bring benchmarks into practice.
Speaker: Moderator
The RAISE Index harmonizes requirements across leading global frameworks—EU AI Act, NIST AI RMF, Singapore Guidelines, UK AI Assurance—into a single portable assessment.
Introduces a unifying metric that could simplify compliance for multinational organizations, addressing fragmentation in AI regulation.
Shifted the conversation toward tangible solutions, leading participants to reference the index and consider its adoption in their own governance processes.
Speaker: Moderator
We set up an Office for the humane and ethical use of technology in 2014; trust is our number‑one value. A global compact is needed to stop bad actors from misusing AI.
Shows early corporate commitment to ethics, frames trust as a core corporate value, and calls for coordinated international action rather than isolated efforts.
Prompted Kamesh to emphasize organizational ethics, reinforced the theme of trust, and underscored the necessity of cross‑border collaboration.
Speaker: Arundhati Bhattacharya
Governance should be built into the core product with guardrails at every stage—input, tool‑calling, and output—and human‑in‑the‑loop must be a first‑class feature, not a failure point.
Proposes a product‑centric approach to responsible AI, moving governance from a checklist to an embedded, technical layer that scales with adoption.
Redirected the discussion toward practical implementation, influencing later remarks about API‑based compliance and encouraging startups to embed ethics directly into their offerings.
Speaker: Karna Chokshi
Clients demand full data sovereignty; we provide on‑premise AI appliances so that no external party can access the data. Trust is built through control.
Identifies data sovereignty as a critical trust factor for high‑stakes sectors (defense, finance) and presents a concrete hardware solution.
Expanded the conversation from policy to infrastructure, leading to dialogue on how trust can be engineered through technical controls rather than solely through guidelines.
Speaker: Ankush Sabharwal
Convert compliance into APIs; make privacy‑preserving defaults rather than opt‑outs; expose explainability as a core output of the API.
Reframes regulatory compliance as programmable infrastructure, reducing friction for developers and ensuring privacy and explainability are baked in by default.
Deepened the technical depth of the panel, offering a scalable path to mass adoption and influencing the audience to view compliance as an engineering problem.
Speaker: Karna Chokshi
Telangana Data Exchange is a first‑of‑its‑kind digital public infrastructure that gives startups sandboxed access to government datasets for benchmark validation.
Provides a concrete ecosystem example that bridges the gap between policy intent and real‑world testing, embodying the earlier call for deployment‑based benchmarks.
Illustrated how government innovation hubs can operationalize responsible AI, reinforcing earlier points about living benchmarks and encouraging startups to engage with public data resources.
Speaker: Moderator
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from abstract principles to concrete mechanisms. The moderator’s framing of real‑world benchmarks and the introduction of the RAISE Index established a shared problem space. Arundhati’s emphasis on trust and a global compact anchored the ethical dimension, while Karna’s product‑centric governance and API‑based compliance proposals offered practical pathways for implementation. Ankush’s focus on data sovereignty added a technical‑infrastructure layer to the trust narrative. Collectively, these comments shifted the panel from high‑level policy talk to actionable strategies, highlighting how standards, corporate structures, product design, and infrastructure must converge to achieve responsible AI at scale.

Follow-up Questions
How can a global compact be established to prevent misuse of AI by bad actors?
Arundhati emphasized the need for a coordinated international framework to stop bad actors, indicating that current efforts are insufficient and require further collaborative research and policy development.
Speaker: Arundhati Bhattacharya
How should safety benchmarks be continuously evolved to keep pace with rapidly advancing AI capabilities?
The moderator highlighted that static benchmarks become obsolete quickly, suggesting the need for ongoing research into dynamic, living benchmark frameworks.
Speaker: Moderator
How can compliance requirements be transformed into reusable APIs to facilitate mass adoption of responsible AI?
Karna proposed converting compliance checklists into API services, a concept that requires investigation into standardization, open‑sourcing, and ecosystem integration.
Speaker: Karna Chokshi
What governance challenges are unique to public‑sector versus private‑sector AI deployments?
Ankush identified differences such as data sovereignty and control, indicating a need for comparative studies on governance models across sectors.
Speaker: Ankush Sabharwal
What is the profitability and future role of small language models (SLMs) compared to large language models (LLMs) for businesses?
The audience asked about the business case for SLMs versus LLMs, pointing to a gap in research on cost, latency, and performance trade‑offs.
Speaker: Audience (directed to Karna)
How can trust layers be designed to ensure data security, bias mitigation, and hallucination reduction in cloud‑native AI services?
Arundhati described Salesforce’s Trust Layer and its functions, suggesting further study on effective design patterns for trust in large‑scale AI systems.
Speaker: Arundhati Bhattacharya
How does data sovereignty (on‑premise, edge AI) affect trust and adoption in critical sectors such as defense?
Ankush discussed client demands for on‑premise solutions, indicating a research need into the impact of sovereignty on trust and deployment decisions.
Speaker: Ankush Sabharwal
What default settings should be adopted for customer data usage in training AI models to balance innovation and privacy?
Karna argued that data should not be used by default for model training, highlighting a policy and technical research area on opt‑in/opt‑out mechanisms.
Speaker: Karna Chokshi
How can human‑in‑the‑loop be positioned as a feature rather than a failure point in AI systems?
Karna emphasized human oversight as a core product feature, suggesting investigation into design frameworks that integrate human control effectively.
Speaker: Karna Chokshi
How effective is the RAISE Index in harmonizing multiple global AI regulatory frameworks, and what further validation is needed?
The moderator introduced the RAISE Index as a unifying tool, implying the need for empirical validation across jurisdictions.
Speaker: Moderator
What operational challenges do startups and MSMEs face when embedding responsible AI while maintaining rapid innovation?
Karna described challenges around governance integration and mass adoption, indicating a research gap on scalable responsible‑AI practices for small firms.
Speaker: Karna Chokshi
How can responsible AI be productized as a marketable value proposition?
Kamesh highlighted the importance of embedding responsible AI into product value, suggesting a need for business‑model research on differentiation through ethics.
Speaker: Kamesh Shekar
How do multilingual, low‑resource environments affect AI safety and responsibility benchmarks?
The moderator noted India’s unique context of multilingual populations and infrastructure constraints, pointing to research on adapting benchmarks to such settings.
Speaker: Moderator

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

From India to the Global South_ Advancing Social Impact with AI

From India to the Global South_ Advancing Social Impact with AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session titled “AI for Skilling, AI for Impact, Skilling, Inspiring and Empowering the Next Generation” brought together government, industry, academia and youth innovators to discuss how artificial intelligence can be used to build skills and drive inclusive growth in India and the Global South [5-8][11-13]. Safin introduced three young innovators selected through the UA AI Initiative, noting that 15,000 youths have already been trained and that each would give a two-minute pitch [13-21][16-18]. The first pitch, “AI for Cardio,” described an offline desktop tool that lets primary-health-center staff upload ECGs and blood reports for AI-driven diagnosis, built on a fine-tuned Llama 3.11 model and already deployed in over 100 PHCs serving more than 1,000 patients [23-25].

In the subsequent fireside chat, Minister Jayant Chaudhary and Meta’s Aman Jain highlighted the Prime Minister’s view that AI will create, not eliminate, jobs and stressed the need for large-scale skilling to realise this potential [33-40]. Chaudhary argued that early adoption and “first-mover” advantage will expand the economic pie, citing examples from retail and agriculture, and warned that AI must increase productivity without making life harder for people [41-48][50-55]. He also emphasized that AI can generate new roles such as contextualisation specialists for India’s linguistic diversity and that the sector’s growth will depend on inclusive, humane deployment of technology [55-59]. Addressing equity, both Jain and Chaudhary noted that AI-enabled tools can identify and support students with special needs, improve teacher sensitisation, and extend services to remote regions through initiatives like the Skill India Assistant and the PM Setu programme for ITIs [68-71][100-124]. Chaudhary called for industry to move beyond closed hiring networks, urging partnerships that co-design curricula, provide trainers and create sector-relevant courses, especially in ITIs where outdated programmes such as Hindi stenography persist [102-110][132-138].

Bhutachandra Shekhar described the Anuvadini project, which translates skill-related visual content into 22 Indian languages and creates audio-based learning for low-literacy workers, illustrating how multilingual AI can overcome language barriers [250-256][262-267]. He further argued that AI should complement human intelligence, citing a story about a soap-box inspection to show Indian ingenuity and stressing the need to convert skills into earnings for sustainable livelihoods [282-287][291-295]. Deepak Bagla outlined the Atal Innovation Mission’s school-level hackathons and “innovation labs” that have engaged millions of students, and advocated a unified dashboard linking schools, incubators, mentors and policymakers to scale collaboration [284-293][300-307]. The panel concluded that public-private partnerships, multilingual AI tools and coordinated skilling ecosystems are essential for India to become a global AI leader and to ensure that AI benefits reach all segments of society [66-69][398-401]. Overall, participants agreed that investing in youth skilling, inclusive AI development and industry-government collaboration will translate AI advances into broad economic impact and social empowerment [44-46][437-441].


Keypoints


Major discussion points


Scaling AI-driven skilling for youth and showcasing grassroots innovations – The session was launched to “explore how AI skilling and youth-led innovation can drive inclusive growth” and to report that “about 15,000 youth have already been skilled… with a commitment to empower 100,000 youth on generative AI” [5-8][12-13]. Young innovators demonstrated concrete solutions such as the offline “AI for Cardio” diagnostic tool [23-25], the “Prasima AI” autonomous agent for MSMEs [158-169], and the “Ayurveda GPT” multilingual manuscript assistant [178-183].


Impact of AI on employment and the emergence of new job categories – A moderator asked whether AI would “take away jobs” and referenced the Prime Minister’s view that technology creates opportunities [38-40]. Panelists argued that early adopters gain a larger “pie,” that AI will generate entirely new roles (e.g., contextualisation, AI-coaching, multilingual support) and that productivity gains must be balanced with humane outcomes [41-58].


Ensuring AI benefits reach under-represented and remote populations – The conversation turned to inclusive access, with questions about “people with disabilities, far-flung areas” [68-70] and responses highlighting early-screening tools for special-needs students, teacher-sensitisation, the Skill India Assistant, and AI-enabled language-agnostic interfaces [71-80][85-95][82-84].


Public-private-academic collaboration and concrete policy mechanisms – Government officials stressed the need for data-sharing across ministries, a shift in mindset toward collaboration, and partnerships with industry and academia [214-222][394-398]. Specific initiatives cited included the PM Setu funding for ITIs, the Skill India Digital Hub, and the Atal Innovation Mission’s massive school-level hackathon and “dashboard” vision for coordinated innovation labs [112-124][284-304][401-414].


Vision of India as a global AI leader and the role of multilingual, locally-relevant models – Speakers highlighted India’s linguistic diversity as a testing ground for AI, noting work on omnilingual models, edge-computing solutions, and the “AI coach” [98-99][85-95]. The UN representative reinforced that India’s experience can guide the Global South in bridging the AI divide [360-368].


Overall purpose / goal of the discussion


The summit aimed to (1) mobilise and scale AI-based skill development for the next generation, (2) showcase youth-driven AI applications that address real-world challenges, (3) examine how AI will reshape the labour market while stressing inclusive, humane outcomes, and (4) chart a collaborative roadmap among government, industry, academia, and civil society to embed AI responsibly across India’s education, health, and economic ecosystems.


Tone of the discussion


Opening (0-5 min): Highly enthusiastic and celebratory, with repeated thanks and applause as the host welcomed participants and highlighted achievements [1-5][12-13].


Mid-session (5-30 min): Shifts to inquisitive and analytical as moderators pose probing questions about job displacement, inclusion, and policy; panelists respond with a mix of optimism and caution, using data-driven arguments [38-58][68-80].


Later segment (30-60 min): Becomes pragmatic and solution-focused, detailing concrete programmes (PM Setu, Skill India, Atal Innovation Mission) and calls for concrete industry-government partnerships [112-124][284-304][394-418].


Closing (60-end): Returns to a hopeful, rally-the-troops tone, emphasizing collective responsibility, “the future is now,” and a unifying call to invest in youth and AI [349-353][437-442].


Overall, the tone moves from celebratory introduction, through thoughtful debate, to a decisive, forward-looking call for coordinated action.


Speakers

Darren Farrant – Role: Director, United Nations Information Center India and Bhutan; Expertise: International development, AI policy, Global South collaboration. [S1][S2]


Jayant Chaudhary – Role: Honourable Minister of State Independent Charge for Skill Development and Entrepreneurship; Minister of State for Ministry of Education; Expertise: Skills development, education policy, AI impact on workforce. [S3][S5]


Ayurveda GPT Member – Role: Member of the Ayurveda GPT team; Expertise: AI-driven retrieval of Ayurvedic manuscript content, multilingual language models for traditional knowledge. [S6]


Safin Matthew – Role: Vice President, 1M1B; Host of the AI for Skilling session; Expertise: AI skilling initiatives, program coordination. [S8]


Manav Subodh – Role: Founder & CEO, 1M1B; Panel moderator; Expertise: AI entrepreneurship, youth empowerment, AI skilling strategy. [S11]


Bhutachandra Shekhar – Role: CEO, Anuvadini; Chief Coordinating Officer, AICTE; Expertise: AI for language translation, skill-content digitisation, multilingual education solutions. [S12]


Deepak Bagla – Role: Mission Director, Atal Innovation Mission; Expertise: Grassroots innovation ecosystems, school-level AI labs, national hackathon coordination. [S13][S14]


Ashish Pratap Singh – Role: CEO, Prasima AI; Expertise: Autonomous AI agents for MSME productivity, large-language-model-based workflow automation. [S15]


Aman Jain – Role: Senior Director & Head of Public Policy, India, Meta; Expertise: Public policy for AI, industry-government collaboration on AI skilling. [S18]


Pankaj Kumar Pandey – Role: IAS, Principal Secretary, Government of Karnataka (Education, and Personnel & Administrative Reforms); Expertise: Government AI skilling, e-governance, inter-departmental data integration. [S20][S21]


Nandakishor Mukkunnoth – Role: Young innovator (presenter); Expertise: AI-powered offline cardiology diagnostics for primary health centres.


Additional speakers:




Rishikesh Patankar – Role: Vice President, National Skill Development Corporation (NSDC); Expertise: Scaling skilling programmes, employability initiatives.


Full session report: Comprehensive analysis and detailed insights

Opening & Programme Overview – Safin Matthew opened the session, welcoming participants to “AI for Skilling, AI for Impact, Skilling, Inspiring and Empowering the Next Generation”, a joint initiative of Meta and the 1 Million for 1 Billion Foundation aimed at leveraging artificial intelligence to build skills, foster innovation and create future-ready talent at scale in India and the Global South [5-8]. He outlined the URI skilling initiative, which targets 100,000 youth for generative-AI and large-language-model training; within two months the programme had already reached about 15,000 young people [12-13][13-14]. The agenda was to showcase grassroots innovators selected through the UA AI Initiative and to discuss how AI-driven skilling can drive inclusive growth [11-13][16-18].


Innovator Pitch 1 – “AI for Cardio” – Nandakishor Mukkunnoth demonstrated an offline desktop application that enables primary-health-centre staff to upload ECG images and blood-report PDFs for instant AI-based diagnosis. The system uses a fine-tuned Llama 3.11 model (trained on 800 GPUs) and has been deployed in more than 100 PHCs, serving over 1,000 patients; the work was published in the British Medical Journal [23-25].


Fireside Chat – Minister Jayant Chaudhary & Aman Jain


AI and jobs – Aman Jain recalled the Prime Minister’s statement that the notion of AI “taking away jobs” is misplaced and that technology creates new opportunities [38-40]; Chaudhary responded that early adoption yields a first-mover advantage that expands the economic “pie”, citing examples from retail and agriculture and warning that AI must raise productivity without making life harder [41-48][50-55].


Inclusion – Chaudhary described AI-enabled early-screening tools for special-needs students, teacher-sensitisation programmes, and the Skill India Assistant, an online portal that provides multilingual, step-by-step guidance to remote users [71-78][68-70]. He also referenced the Skill India Digital Hub and the PM Setu programme, which earmarks ₹60,000 crore for ITI clusters to be co-governed by industry partners [112-124][114-124].


Accessibility – Aman Jain described the “Be My Eyes” feature on Meta’s Ray-Ban smart glasses, which assists visually-impaired users [82-84]; later he highlighted Meta’s AI Coach, a multilingual assistant developed with Sarvam, following Chaudhary’s mention of Sarvam’s inexpensive edge-computing model [98-99][85-95].


Language barriers – Both speakers stressed that India’s linguistic diversity (22 scheduled languages and many more dialects) makes it a unique test-bed for omnilingual models, and they called for language-agnostic AI to ensure inclusive growth [85-95][S81].


Industry partnership ask – Chaudhary urged the private sector to abandon closed hiring networks, co-design curricula for ITIs, and provide trainers, linking these actions to the PM Setu funding [100-108][114-124].


Innovator Pitch 2 – “Prasima AI” – Ashish Pratap Singh presented an autonomous AI agent built on Meta’s Scout and Maverick foundational models that automates tender extraction, CRM queries and calendar management for MSMEs. The solution saves 15,000 minutes per month, achieves 99.9 % compliance, and delivers a pay-back period of six to nine months, generating roughly ₹41 lakhs of revenue [158-169]. Although the scaling question was posed by moderator Manav Subodh to NSDC chief Rishikesh Patankar, Singh answered it directly [158-169].


Innovator Pitch 3 – “Ayurveda GPT” – A member of the Ayurveda GPT team showcased a multilingual LLM that answers queries directly from Ayurvedic manuscripts and cites sources in real time, demonstrating a live conversation with a virtual “Rishi” [178-183]. The presenter invoked NEP 2020 as a policy backdrop [S81] and advocated a national skill-census (as an alternative to a caste-census) to better map capabilities [S37].


Leadership Panel – Moderated by Manav Subodh, the panel comprised Pankaj Kumar Pandey, Rishikesh Patankar, Bhutachandra Shekhar, Darren Farrant and Deepak Bagla.


Data-integration reforms – Pandey reported that the Karnataka government held a “Second-in-Command” workshop for all departmental IT cells to promote inter-departmental data sharing and collaboration, a step toward linking weather, energy and agricultural datasets for AI-driven public services [214-222].


Anuvadini project – Shekhar described the Anuvadini visual-arts library that translates skill-related visual content into 22 Indian languages and creates audio-based learning for low-literacy workers; he cited a painter’s inability to use a translated manual while working and delivered a “soap-box” example contrasting a European company’s $300 M ICR engine with an Indian farmer’s simple fan solution, underscoring frugal, locally-adapted innovation [250-256][262-267].


Global-South perspective – Farrant emphasized that India’s experience with multilingualism, data diversity and grassroots innovation offers a template for other Global-South nations, warning that without large-scale reskilling programmes the AI divide could exacerbate job losses [360-368][S1][S76].


Atal Innovation Mission – Bagla outlined a network of 10,000 school-level “tinkering labs”, a record-breaking 96-hour hackathon that generated 2.5 million prototypes and entered the Guinness Book of World Records, and proposed a unified digital dashboard to connect schools, incubators, mentors and policymakers [284-304][411-414].


NSDC scaling view – The scaling question was directed at NSDC’s Rishikesh Patankar, though Singh answered it, highlighting employability-focused, lifelong-learning pathways and sector-wide opportunities for AI-driven skilling [158-169].


Collaboration calls – Across the panel, speakers urged public-private-academic partnerships: Pandey on breaking data silos, Chaudhary on open hiring and industry-led ITI curricula, Bagla on a shared dashboard, and all stressed the need for co-governance of ITI clusters under PM Setu [100-108][112-124][284-304][221-222].


Closing Remarks – Manav Subodh concluded that AI leadership is defined not by models or compute but by people, skills and opportunity. The speakers thanked institutional partners (Lloyd Business School, GIMS) and the impact-summit team, and issued a final call to invest in youth as an investment in broad economic and social empowerment [437-441].


Key take-aways


1. Launch of a large-scale AI skilling initiative targeting 100,000 youth, already reaching 15,000.


2. Consensus that AI can expand the economic pie and create new job categories, provided early adopters act.


3. Emphasis on multilingual, offline and accessibility-focused tools for persons with disabilities and remote communities.


4. Necessity of public-private-academic partnerships, open hiring practices and industry-led curriculum design within ITI clusters funded by PM Setu.


5. Requirement to break data silos across ministries and to create a shared digital dashboard for innovation labs.


6. Positioning of India as a model for the Global South in bridging the AI divide. [41-48][68-70][100-108][112-124][284-304][411-414][360-368]


Session transcript: Complete transcript of the session
Safin Matthew

Thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. I’d like to welcome everyone to this special session titled AI for Skilling, AI for Impact, Skilling, Inspiring and Empowering the Next Generation by Meta in collaboration with 1M1B, 1 Million for 1 Billion Foundation. India stands at a defining moment in its AI journey: not just building technology, but building skills, innovation, capacity and future-ready talent at scale. As AI transforms industries and societies, the real question is: how do we equip young people with the skills, platforms, and opportunities to innovate with AI and create meaningful impact? To introduce myself to all of you, I’m Safin Matthew. I’m a vice president at 1M1B and your host for the session.

Today’s session brings together leaders from the government, industry, academia, innovation ecosystems, and global institutions to explore how AI skilling and youth-led innovation can drive inclusive growth in India and across the Global South. The session also builds on the URI initiative for skilling and capacity building led by Meta in partnership with India AI, AICTE, and 1M1B, an initiative that’s focused on nurturing and scaling youth innovation using AI across the country, with a commitment to empower 100,000 youth on generative AI and large language models. And I’m pleased to share that in the last two months, since the initiative kicked off, about 15,000 youth have already been skilled through the program, and we are looking at scaling it up in the coming months.

We have a few innovators present here today; in fact, three inspiring young innovators are here to show us how they’re using AI for good, and especially how they are innovating using large language models. The innovators you will hear from today have been identified through the UA AI Initiative for Skilling, through a hackathon and a hunt for startups who are using LLMs in a very creative manner. So these young innovators are not just learning AI; they are applying it to address pressing societal needs across India. Each innovator will do a short pitch of two minutes. I’d like to begin by inviting one of the innovators to go ahead and present his pitch, AI for Cardio.

Let’s have a round of applause as we welcome the young innovator.

Nandakishor Mukkunnoth

Good morning. My name is Nandakishor. Hello, everyone. In India, there are around 30,000 primary health centers out there. So imagine a farmer having chest pain going to this primary health center. What are they going to do? They will take an ECG and a blood report. But the problem is there is no in-house cardiologist; they have to send it to a central hub, which then returns the results back to the primary health center. So the problem is that around 30 to 40 minutes of delay is happening, and delay means the mortality rate is going high. So we built AI for Cardio, a desktop application that works completely offline, where the medical practitioner can upload an ECG image along with blood reports to get the final diagnosis. It’s powered by a Llama 3.11 vision model we fine-tuned on 800 GPUs, and it has been published in one of the most reputed medical journals in the world, the British Medical Journal. It also has an interpretation system, called cross-modal attribution, which shows where the model is actually focusing: you can see on the image there is a red mark on the part the model is focusing on. We have implemented it in around 100-plus PHCs, and it is helping 1,000-plus patients. So the motto is simple: wherever you are, even if you are in a rural area, your life should be saved. Thank you.

Safin Matthew

Thank you so much, I think that deserves a round of applause: excellent use of AI for the masses, thank you so much. Now we have the Honourable Minister here; with that, we can begin with an insightful fireside chat that aims to explore India’s vision for AI skilling and how collaboration between the government, academia and industry can unlock large-scale potential and opportunity. Now it’s my privilege to invite on to the stage Shri Jayant Chaudhary, the Honourable Minister of State (Independent Charge) for Skill Development and Entrepreneurship and Minister of State for the Ministry of Education, and joining him for a fireside conversation is Mr. Aman Jain, the Senior Director and Head of Public Policy, India, Meta. (Applause)

Aman Jain

Firstly, it is incredible to see so many people in the room still. It’s been five days, and I feel like my first reaction when I came today was that, you know, it seems like more people every single day. So it’s incredible. Thank you, everyone, for being here. I hope the traffic will get better where you’re exiting. But thank you for being here. Firstly, thank you to the Honorable Minister and guest who’s graced us with his presence. You know, one of the things that’s become very, very clear at this event, and especially in the last five days, Honorable Prime Minister in his remarks also spoke about, you know, a lot of the importance of AI and what we want to be able to do with AI is essentially going to become a function of skilling.

And we are. We are lucky to have a dynamic minister in charge for that very, very important portfolio. So. So I had a couple of questions, and we could just hear your thoughts on them. Just to start off, I’ll ask the sort of – I don’t want to be provocative, but just make it interesting. Why not? So make it interesting. So, you know, and because I referenced the Prime Minister’s remarks, you know, at the beginning of the summit, you know, he did say that, look, AI taking away jobs, the very notion is kind of misplaced, you know, stating that technology actually creates new opportunities rather than eliminating them. So I want to know what are your thoughts on this, because that’s obviously top of mind for folks that, you know, with more proliferation of AI, are we going to end up losing jobs?

And then, you know, depending on how you think about it, also from your vantage point, what would be your advice to the youth?

Jayant Chaudhary

I think it comes down – when any new tech comes in, if as a society we adapt to it early, and if you’re a first mover, second mover, maybe even the third mover, then you’re in an advantageous position and the size of the pie will go up. So as currently we are not seeing that because AI is not adopted at scale yet. It’s the promise of it, the idea of it, the multi-dimensional nature of it that is exciting everyone. And everyone in the room knows whether I’m a farmer, I’m a student, I’m a professional, I’m an entrepreneur, I’m an accountant, I’m a strategy consultant, I’m a student. It’s going to affect all of you in very personalized and intimate ways.

You’re going to be using it and you’re going to be affected by it. So I think India is in a position where, after this event, we’ve created a huge mass of people that are going forward on this, that are engaging with this without fear. There is no fear. Yes, there’s confidence. And with time, we’ll be able to, with our architecture, build trust, because trust becomes very important when you are… giving away a lot of space to technology. But it is inevitable. If you look at offline versus online retail, for instance, as an example: people still like going to the small shop, the kirana shop, and having that conversation, but at the same time you can see a dramatic shift towards the online model.

The impact that the internet had probably took away a lot of jobs, but the share of the pie went up. The possibilities today using social media monetization, I see them. I have gone to villages, and earlier I would ask people, what are you doing? They would say, I have done BA pass, or even an MBA. And what are you doing now? I am doing agriculture; I came back. Basically, I lost out, or I gave up, and I said, okay, now I have no choice; I have my two acres, I have to till that land till I die. Now when I go to the villages, you see young boys and girls walking around with a selfie stick.

So they have been able to monetize and create a new space for themselves. I do believe AI will come up with a whole set of new jobs. Context mapping, for instance: we are just assuming that large enterprises will take 500 agents, but who is going to train those agents so that the process flow actually gets automated? Who is going to contextualize? Even now in India, the voice that speaks in the lift doesn’t seem like a voice that is familiar; they still have not been able to get the kind of language, the nuances, and India is so, so diverse. So for any AI model to represent all of us as Indians, it will take time. That contextualization is a story where I think you are going to see a lot of people at the grassroots getting opportunities. Our startup system is very robust, and the best part is that with a huge population that is savvy, adept, adaptive, trained and skilled, the probability is higher that the best new ideas of the future are going to come from India. This is what this event is about: seeding that ambition in every young person. And when those enterprises get created, there will be job creation. But will every job be the same as it was 10 years ago? It isn’t even now. The catch is that we are told every time technology comes in that it’s supposed to make your life easier, but everyone ends up working harder. So this is the question in the room: AI will make us all more productive, but will we be able to be more humane? Will we value our experiences as human beings more as a society, or will life become harder for us? This is where the tagline of the event comes in: can we become happier citizens, can we engage with our governance models in a more transparent manner, can we take out more time for more productive aspects of our life, can we make the world a better place? The blurring between technical and non-technical qualified people, I believe, would be great, because it offends me when we say white collar, blue collar.

That itself is offensive because what are we trying to say? So I think those things will get blurred because opportunities are immense. You don’t need to have knowledge of programming to become a coder or to create apps, to create products. That is the beauty of this AI.

Aman Jain

Absolutely. You know, you said something on the pie increasing, and as an example, just to corroborate that point further, we’ve seen that in retail, for instance, where overall the pie has increased in size, and so e-commerce actually is just 7 % of the total retail in India, and as we go towards a 5 trillion, 7 trillion, 8 trillion dollar economy, that retail continues to grow a fair bit. To that point, while they receive a lot of criticism and they must evolve to better practices and social security benefits for the gig economy, if you look at the aggregating platforms, were those small dhabas and restaurants actually getting any business? And could they have survived? And if not these aggregating platforms, what other tool would have come that would have changed?

So we just sit here thinking that life will be the same; it will not be the same. There will be something, something all the time; there is going to be flux, and that’s the dynamic nature of a globalized world. Absolutely. You know, you mentioned the theme of the summit; the theme is AI for all. So your thoughts on how we can use AI to ensure that skilling and the benefits of AI reach what would be called traditionally underrepresented groups. So, you know, whether it could be people with disabilities, it could be people in far-flung areas or, you know, in the Northeast or anywhere else across the country. How do we make sure that AI and the benefits of AI and skilling, along with it, reach every part of the country?

Jayant Chaudhary

AI impact and the kind of products we’re already seeing, some of them are displayed here, have a tremendous possibility for people with special backgrounds, handicapped and disabled people. And one of the challenges in the education system is that we need to screen and identify those students earlier so that a customized, more sensitive environment can be created for them in the classroom. So one aspect is teacher sensitization: does a teacher have the capacity, are there tools out there? Which is why we tried the PRASHAST app in our schools, and a second iteration is now being rolled out. I’m sure there can be a layer of some kind of augmentation using AI. But you have to screen early, because in the Indian case, if you ask me how many school-going students in India are categorized as with special needs, it is less than 1 %. And what is the actual figure? Probably 6-7-8 %. So why are children then dropping out?

This is one of the reasons, one of the biggest reasons, why children are not completing school: that school is not able to capture their unique capability. So no child should be left behind. The best teachers were not the ones who taught the best kids, but the ones who paid the most attention to the weakest kids, children who are not following, right? Every child is important in that classroom. And now with AI, there are so many teacher tools out there that individual journeys can be mapped, can be analyzed, and corrective action can be taken in real time. So that is the power of how AI at scale can transform our capabilities and competence. Northeast, tough geographies, again, AI has solutions there.

On the Skill India Digital Hub, we’ve tried; Meta has been partnering with us and we created the Skill India Assistant, which again is making the journey easier for anyone who comes to the portal. Skill India Digital Hub is now a DPI; we are now going to add more and more data layers on it and try and create more value for researchers, create an open stack if you can. IIT Madras is working in similar fashion on a Centre of Excellence in Education that also includes elements of skilling, but the idea there that is being proposed is also to create a full education stack. So all of the ed-tech solutions, all these new start-ups, all the vibrancy that we are seeing in this summit and in this room, those players can now come on board and partner government in our journey to change lives.

Aman Jain

I probably should have started with this: thank you for visiting our booth, you were just there. We had the honour of also hosting the Honourable Prime Minister on day one, and I was there; he was very engaged, and what we had shown to him was a feature on the Meta Ray-Ban glasses called Be My Eyes. And that exactly is the point: that is a specific feature for people with impaired vision or blindness, but there are so many different use cases where AI can truly help.

Jayant Chaudhary

Language is so important. That language, people will slowly move away from this parochial mindset of its pronunciation is not good or it does not speak my language. It does not speak Tamil, does not speak Kannada, does not speak Hindi. It’s going to go away. Our way of thinking will then migrate to what is he saying? What is she trying to communicate? The idea that she is talking about. That is most important. The medium, which is just a language, should not matter. Those barriers will go away very easily with AI. That capability is already there. You are working on that. Sarvam has come up with a very small edge computing model that is not expensive and can run on your, you know, any device that you own.

Aman Jain

and because you mentioned Sarvam as well we are working with them on essentially what we call the AI coach and again the focus there is on multilingual, omnilingual, how many more sort of languages we can add and that’s I think also a fairly sort of good framing I think the Indian government did at this event in essentially saying that look it’s about it’s not necessarily a race for frontier models but it’s more about models that work for you here and that should be a focus. You briefly touched upon it so I wanted sort of you know we’ve got many organizations represented here you know what would be sort of your advice or clarion call for industry to partner with you as you’re thinking of advancing many of these skilling initiatives how can industry partner more with you in your work?

Jayant Chaudhary

Yeah, so my one ask really is that enterprises are created and value is created when you are able to widen your engagement with your clients, with your new markets, with a new base of employees. But if you do a real analysis of corporates in India, as they have come to this point, they are still hiring on closed networks. It’s the elephant in the room. They will hire based on trust and faith, which must be high. So we need our industry partners. We need to move away from the first term and fix criteria. Because the same industry that is going to get to the IIT when I want to hire, when you come back and say, I’m going to do this, then we now see the system that people pick from the qualifications and the degrees and more of funding and skills and confidence, real employment.

So we need to do state-of-the-art business development. We don’t want our ideas and qualifications in our colleges, our engineering colleges, our state universities to be closed institutions. They need to open up their doors, have wider debates, and our industry partners need to really interact with them. And try and create models where the next IIT can be generated from their institutions, rather than their own lives. So it should not be about ownership, it should be about participation, it should be about capacity, using our academic infrastructure. One new scheme that has come in that I’d make a pitch for is PM Setu. For the first time, 60,000 crores, it’s a lot of money.

60,000 crores is being put in our ITIs. So our ITIs are the grassroots organizations, government ITIs; there are maybe more than 3,000 in the country. They are going to get benefit from this. We’re going to create clusters; it’s not going to one ITI. The idea is not just that you create a swanky building or a lab and let it be; it is also incorporating ideas of governance. Can these become institutions and create a network? So it’s going to be five ITIs in a cluster working together, aligned to the local economy, to the needs of the MSMEs there, and with a partnership where an industry partner will be onboarded as part of the governance of that institution. We want industry to say, we will run these five ITIs.

We will design new courses in those ITIs. We will look at our trainers. Globally, if you look at the skill ecosystem, the people working in industry are the ones who go and teach. In TAFE in Australia, for instance, which is their equivalent of our ITIs, people currently working in industry go there and train. It is similar in the European guilds: the people who know cutting-edge practice, who know what employers want, are the ones actually teaching in these institutions. In our system, who is teaching? Someone hired by the state thirty years ago, whose trade is perhaps carpentry, and now he has to teach

AI, welding, electronics, circuitry. If you yourself don't have that domain knowledge as a trainer, as an instructor, your capacities are limited. So we need to create a repository of trainers, where again industry will come in. We need to create new courses. Do you know we still teach Hindi stenography? It's a one-year program. You should wonder why we are teaching it and why children are going and learning it: because they need the certificate, which recruitment still asks for. All of this needs to change, even as we talk new technology and AI and see the visible impact at the grassroots.

We need a lot of rejuvenation in our educational institutions.

Aman Jain

Absolutely. I know you are short on time, so again I would like to thank you for taking the time. In just a few minutes you have laid out a very exciting vision for the development of the education system, and also a clear way for how industry should partner. At Meta we obviously believe in this a lot, and we are already working with your ministry on the skilling assistant; we hope to do more, and you'll see that in the coming weeks and months. And to everyone else in the room, I'll mention this again: it is really a privilege to have such a dynamic minister in charge of what will probably be the biggest area for disruption over the next few years, which is scaling education.

So thank you again for your time.

Safin Matthew

Thank you so much, sir, and Aman. That was a fantastic conversation. If I may, I would request one photograph with all the panelists; can all the panelists please come on stage for a group picture? Thank you so much. While we get ready for the next panel, I would like to request our other two innovators to come forward and pitch their innovations as well. First, I'd like to invite Prasima AI: please come forward and present your pitch.

Ashish Pratap Singh

So, good evening. My name is Ashish Pratap Singh, and I am the CEO of Prasima AI. My father runs an MSME business in Lucknow, where all the data is scattered across email, spreadsheets and WhatsApp, leading to a 35% loss of productive time and 10-15% in revenue leakages. This is an all-India problem, leading to 8 lakh crore plus in annual cost overruns across Indian MSMEs; there are 7 crore plus MSMEs across India. We have solved the problem by building an autonomous AI agent that can think, act and execute on your behalf. Users get work done by giving the agent simple commands for tasks like tender extraction, tender tracking, CRM querying and calendar management.

Under the hood, we have used Meta foundational models, particularly Scout and Maverick, because in our internal evals we found them to be particularly good at reasoning, planning, orchestration and tool usage. In terms of results, we have achieved 15,000+ minutes saved monthly with 99.9% compliance accuracy. What sets us apart is that we have reduced the productive time loss from 35% to nearly zero, with a six-to-nine-month payback period for our clients. Currently, our revenue stands at 41 lakhs over the last six months. To close, I would like to thank 1M1B and Meta AI for this opportunity to collaborate with them. Thank you so much.
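For readers curious how such a command-driven agent is typically wired together, here is a minimal sketch of a tool-calling loop. It is an illustration only, not Prasima AI's implementation: the model name, the endpoint, and the track_tenders tool are assumptions, and any OpenAI-compatible server hosting a Llama-family model would do.

    # Hypothetical sketch of a minimal tool-using agent loop; names and
    # endpoint are illustrative assumptions, not Prasima AI's actual stack.
    import json
    from openai import OpenAI  # any OpenAI-compatible client

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "track_tenders",
            "description": "Return open tenders matching a keyword.",
            "parameters": {
                "type": "object",
                "properties": {"keyword": {"type": "string"}},
                "required": ["keyword"],
            },
        },
    }]

    def track_tenders(keyword: str) -> str:
        # Placeholder: a real agent would query a tender portal or CRM here.
        return json.dumps([{"tender": f"Supply of {keyword}", "deadline": "2026-03-01"}])

    def run_agent(command: str) -> str:
        messages = [{"role": "user", "content": command}]
        while True:
            reply = client.chat.completions.create(
                model="llama-4-scout", messages=messages, tools=TOOLS
            ).choices[0].message
            if not reply.tool_calls:       # model answered directly: done
                return reply.content
            messages.append(reply)         # keep the tool request in context
            for call in reply.tool_calls:  # single-tool demo: dispatch directly
                result = track_tenders(**json.loads(call.function.arguments))
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": result})

    print(run_agent("Which tenders for solar panels are open?"))

The loop is the whole trick: the model plans, requests a tool, sees the result, and decides whether to act again or answer, which is the "think, act and execute" behaviour described in the pitch.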

Safin Matthew

And to all the MSMEs here: he's someone you could reach out to for an interesting solution. Next, I would like to invite Ayurveda GPT, who have a very interesting solution. If you visited hall number 14, you would have seen part of the solution presented there as well; they have a stall there. Nand Keshav. Sorry, Ayurveda GPT.

Ayurveda GPT Member

…you can simply query that particular model, and it will give you an answer right from the manuscript, along with the dedicated source. There have been a lot of government initiatives, but there hasn't been a specific model rooted directly in the manuscripts, so that is the initiative we launched. You can see our current model on the screen itself: that's a demo where I'm having a real-time conversation with a Rishi about the manuscript. So yeah, thank you.

Safin Matthew

So, are you guys ready to take it to the global level? Thank you so much. That was a fantastic initiative, taking Ayurveda to the global stage using AI. Now we move on to the leadership dialogue titled Empowering Youth and Driving Innovation Through AI Skilling. May I please request our respected panelists to join us on the stage for the discussion: Mr. Pankaj Kumar Pandey, IAS, Principal Secretary, Government of Karnataka, Department of Education, Department of Personnel and Administrative Reforms, and Department of e-Governance, whom we welcome with a round of applause; Mr. Rishikesh Patankar, Vice President, NSDC; Mr. Bhutachandra Shekhar, CEO of Anuvadini and CCO of AICT; and Mr. Darren Farrant, Director, United Nations Information Center India and Bhutan. I think Mr.

Deepak Bagla, who is the Mission Director for the Atal Innovation Mission, will join us in a few minutes; he is in the other room. The discussion will be moderated by Manav Subodh, the Founder and CEO of 1M1B.

Manav Subodh

Hello, everyone in the room. My name is Manav, and trust me, I didn't change my name yesterday; I was always Manav, even before the Prime Minister cast the "Manav" vision for us. What a vision for all of us to take AI forward. Thanks to my parents, they made me one quite some time back. So welcome, everyone. We have a very high-energy panel today and very limited time, so we'll try to make it interesting, and I'll try to get the maximum out of the short time we have with the distinguished people beside me. I'll start off with this. They say AI is the new internet.

AI is the new electricity. The question is: who has the switch? That is what we will be discussing today. In past technology waves, a few countries made the technology and the rest of us consumed it. That needs to change, and India is going to change it. When the youngest population collides with the most powerful technology, which is artificial intelligence, we are going to have creators, not just consumers. This is the opportunity for India: to have AI creators like what we just saw with Ayurveda GPT. These are the local innovations we need to see. So I'll start with the first question, to Mr.

Pankaj Pandey. Mr. Bagla will be joining us shortly, but Pankaj has been leading this work as Principal Secretary of e-Governance at the Government of Karnataka, and there is a lot of action the government is taking in Karnataka, especially on the skilling front. So the question, Pankaj, is: what role is the Government of Karnataka playing to skill the government workforce and to make sure government officials are also aligned with the trajectory the country is taking?

Pankaj Kumar Pandey

Thanks a lot, and congratulations to the young innovators for having presented these three concepts; very well done, and my compliments to them. If I look at the government: verticalization, the instinct to protect your own territory and your information, the feeling that this entire dataset belongs to me, is extremely high amongst departments. And government is one of the institutions where we create a huge amount of data; take energy, agriculture, horticulture, various departments. Now, for good and targeted delivery of government services, we need to ensure that these datasets talk to each other.

And therefore, one thing which has to change is the mentality of people working in government: we have to talk to each other, and we have to collaborate with each other. We must not just create data; we have to ensure that each dataset is used for the purpose for which it is meant. Let me give you a simple example. Farmers require data on the weather. That weather data, along with the cropping pattern, also has to be used to ensure that power supply is given to the various irrigation pump-set feeders. These things are interrelated: your GPS data at the granular level, your energy data, your cropping-pattern data and the weather conditions. So these departments need to talk to each other: energy, agriculture, horticulture, the disaster management cell, all of them. And therefore the mental frame of government officials has to change. In fact, in this direction we held a workshop. We first called the second-in-command of every government department, the people who maintain its data; every department has some kind of IT cell which manages its data and its software. We wanted to tell them: you need to talk to each other, and you should see what kind of potential exists if you start collaborating.

So this is one thing, and obviously it will also require academia and industry to come together with us; that is where we want to take this. Thank you.

Manav Subodh

Thank you very much, Pankaj. My next question is to Budhaji. There is a lot of work happening on Anuvadini AI, and on what the minister was talking about, that we need local languages, and you are leading a big initiative in the country, especially in higher education. So I would like your views on how this is coming along, and on how grassroots participation can be, and will be, critical, given some of the work you are doing.

Bhutachandra Shekhar

Thank you. Good afternoon, all. On behalf of the Ministry of Education, Government of India, I welcome all of you to this thought-provoking event. Sharing is caring; that is the first standard. What a wonderful event has been happening over the last four to five days: knowledge is flowing from one place to another, not only within our country but from across the globe, because we all believe in Vasudhaiva Kutumbakam, the entire world is our family. That is why it speaks of wellness for all, happiness for everyone, welfare for all. Very, very apt. Now let me come back to the question, because I see this in a slightly different way.

Have you ever seen skill books, for a plumber or a painter? Most of those books have images without a description. Have you observed that? The biggest problem is that if you give these books to NotebookLM from Google (I am just taking NotebookLM as an example; you can take any such tool), it cannot even describe what is there. That is where the Anuvadini component from the Ministry of Education comes into the picture. We have created an advanced visual learning model: it can understand what is inside an image and describe it in Indian languages, because, as you all know, 85% of people in our country speak their mother language.

The way they communicate and the way they trade is in their mother languages. So that is the biggest issue. What we did was translate all skill-related books into 22 Indian languages, so that the plumber or painter who is not well-educated can easily understand. After we did that, we had one very big event in Bengaluru. My wife is from Bengaluru, so I thought I would go, show off a little, and send a photo to my wife saying: see, I am here, helping your people. Then I got the shock of my life from one painter I spoke to at that event.

I asked him one simple question: are you happy that we have given you the book? He said, sir, you don't understand our problem. I said, please explain what your problem is. And the shocking answer he gave: sir, I am a painter; in one hand I hold the paint dabba, in the other I hold the paintbrush, so how will I hold the book? Do you see the difference between human intelligence and artificial intelligence? We keep creating technological solutions assuming a less-educated person can use them. So we then came up with wonderful audio-based books for them. This is where I see AI coming into the picture, and I put it in three simple perspectives: learning, earning, leading. These are the three dots we need to connect with respect to skilling and artificial intelligence. And when I say artificial intelligence, I mean artificial intelligence together with data intelligence and business intelligence, because these three are interconnected. People love the words "artificial intelligence", so they simply use them; but the matter of fact here is that the content you have is not self-explanatory.

The second thing: if someone wants to learn, the best way of learning is in their own mother language. But here is the challenge with exams. Again I am taking Kannada, but you can take any language. Take Hindi, for example: Punjabi Hindi is different; the Hindi of someone from West Bengal is different; Bihari Hindi is different; Bhojpuri Hindi is different; Haryanvi Hindi is different; Rajasthani Hindi is different. Do you see the issue? So the neutralization of the software, the neutralization of these variants into a common, neutral Hindi, is where Anuvadini comes into the picture, since I was asked to talk about Anuvadini.

So basically, we are a small learning model, nothing rocket science, but we are trying to solve the major problem of making everyone understand the content: in a pictorial way, audio-based, and video-based. Recently, as you know, there are a lot of tools like Kling, and I hope you are all using them. This advanced technology has given us wings to fly. If you are just talking, maybe 40% of people can understand; but if I am showing you something, it lands, because human beings have the best ability to capture impressions from images, which is why we process them so fast, and why people like YouTube videos. So we are translating all this skill-related content into AR/VR, video and pictorial formats, so that people can understand easily, and at the same time in multiple languages, not only Indian languages. If you are interested in learning multiple languages (let us assume I am the one getting trained, and I am planning to get placed in Japan), the Skill Ministry has come up with a wonderful initiative where I can learn everything in Japanese as well as in my own mother language, including English. What a beautiful combination we are creating. I see that this artificial intelligence is no more "artificial intelligence"; it is "advanced India". God sent AI to make India an advanced country, an advanced India. This is where I clearly see it. Do I have one more minute?
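A rough approximation of the image-description capability he describes can be put together from off-the-shelf parts. The sketch below is an illustration under stated assumptions, not Anuvadini's actual stack: it chains an English image-captioning model to a many-to-many translation model, and both model choices are placeholders.

    # Illustrative caption-then-translate pipeline (not Anuvadini's actual stack).
    # Assumptions: BLIP produces an English caption; NLLB translates it to Hindi.
    from transformers import pipeline

    captioner = pipeline("image-to-text",
                         model="Salesforce/blip-image-captioning-base")
    translator = pipeline("translation",
                          model="facebook/nllb-200-distilled-600M",
                          src_lang="eng_Latn", tgt_lang="hin_Deva")  # target: Hindi

    def describe_in_indian_language(image_path: str) -> str:
        # Step 1: produce an English description of the instructional image.
        caption = captioner(image_path)[0]["generated_text"]
        # Step 2: render that description in the learner's language.
        return translator(caption)[0]["translation_text"]

    print(describe_in_indian_language("skill_book_page.png"))

Swapping tgt_lang would cover other scheduled languages, and feeding the output to a text-to-speech model would give the audio books mentioned in the painter anecdote.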

Manav Subodh

Yeah, please. No, no, I have more questions for you, but thank you. Thank you so much.

Bhutachandra Shekhar

In fact, if you allow, let me just connect these three dots. Learning is important, but earning is also very important, because we are competing with something called artificial intelligence, a much better intelligence. But let me take you back one level, to prove that human intelligence is greater than artificial intelligence. There is a soap company; I am sure you all use soap. One European customer complained that he received a soap box without a soap inside. So the company spent 300 million and created the best ICR engine in the world, one that can peep inside the soap box, without putting a hole in it, to see whether the soap is there or not, and implemented it everywhere. The company also has operations in India (it is called Sini Tarakosa; you have seen the ads in the newspapers and on TV), but the man there never implemented it. The CEO got annoyed, came and visited India, and said: give me an explanation for why you are not using it; I spent 300 million. Now see the best of Indian brains. The man there was a sixth-standard-failed farmer and a small labourer, and he said, sir, I don't need to use this. The CEO said, prove it in front of me. You know what he did? He took a table fan and put it in front of the conveyor belt. If the soap is there, the box stays; if the soap is not there, the box is empty and simply flies off. He said, I don't even need to pick it out. Brothers, I am telling you, dear friends: these are the brains Indians carry. Indians have the best capability of connecting the right and left brain; we are among the best human beings living in this world. The only problem is that we are not confident and we are not working as a team. That is the only problem. And the second dot: we are not converting our skills into earning, which is much more important, because only if you earn can you live; you need money to survive and to take things to the next level. Maybe I will pass on to the other panelists now, as I don't want to occupy their time, but we can discuss further.

Manav Subodh

In fact, there is an innovator in the room, Bhubaneswaran; I don't know if he is here. He is a farmer's son, and he has himself created a voice-based AI technology for farmer guidance. When I was talking to him, he said that none of the farmers like the complicated apps, because there is too much content out there, so he is building a simple phone- and voice-based service. He is bringing that grassroots knowledge to AI, which is so important. So thanks, Bhubaneswaran, for being here; anyone interested in his technology, he is a local innovator sitting at the back of the room. Now I will turn the next question to Baglaji, the Mission Director at the Atal Innovation Mission. One of the big things is that we need policy, policy for AI acceleration, and the Atal Innovation Mission is putting up innovation labs in the hinterlands of India. So, Mr.

Bagla, how can a local innovator from the hinterlands of India participate and still make something that is globally relevant? Or does he even need to make something globally relevant?

Deepak Bagla

First, I am sorry I got a bit late; I was in halls number 17 and 19, and you should see what is happening there. Quick background: the Atal Innovation Mission, the government's innovation mission, runs from school to space. We currently have tinkering labs in 10,000 schools, roughly 5,500 in villages and 4,500 in cities, government schools and private schools all included. The mission turns ten, its tenth birthday, in the next 96 hours. It is the world's largest grassroots innovation mission: 1.1 crore young entrepreneurs have come through it. And I will give you an answer. There were three kids, one 11, one 12 and the other 14.

You know what solutions they came up with? One has built a radiology solution, bringing in AI that reads your MRI when it happens. Another is treating mental health among students with AI. These are kids of 11; I can't even call them kids. Let me give you a small example. I was posted into the Atal Innovation Mission in July, and I said I wanted to test the power of this platform, just at the school level to begin with. Garima, my colleague, is sitting here. It was mid-September, and I said, let's do a hackathon. And everybody told me it's time for a holiday.

Everyone is taking midterm exams; don't do it now, do it later. I said no, let's do it. None of you know about it, because there was no big campaign. Five weeks later, we had over 25 lakh prototypes. It is now in the Guinness Book of World Records as the world's largest hackathon. And I will tell you what I am saying here: these are not just entries; these are solutions to challenges from that small village. I remember it was a Saturday. I was doing my puja and had my phone with me. It rang three times; I didn't pick up. The third time, I picked it up. The caller, speaking in a jittery manner, said: sir,

I am a 9th-class student. I have a problem. Please sort it out. I said, what happened? He said, I want to give three ideas, but the teacher is only allowing me to give one. You know what I am trying to tell you? This India is a different India. He finds my mobile number and calls me up; this India cannot be stopped now. We talk about being number 3 in startups in the world and number 2 in the number of unicorns, but all this is just a drop in the ocean. This Atal Innovation Mission story is just 3,600 days old. Just imagine all these people coming into your workforce, solving the smallest of problems, problems which are contextual. And here is another interesting thing, from two months ago. There is a government school in Mangalore, and five of our children there. Every year there is a global olympiad of robotics.

They select the best from all across the world, who go and present, and it is a very difficult process. This time it was in Panama. I got a call: our five children have been selected, but there is no money for the tickets. Those five kids flew to Panama, all within 96 hours of getting a visa, and out of over 90 countries they came 13th. It is really unbelievable what is happening in India. And that is what I was just saying in that room before I came here: the future of India, and the biggest beneficiary of AI in the world, what we call the delta multiplier, is India. We are 1.4 billion; we will be 1.6 billion by 2060.

We will be the largest on the planet. Just imagine each one of them with the power to make the change, and the power to work together to make it happen, which is what AI is doing now: it is empowering that youngster and giving them the ability to join hands with each other. Those are the dots we are joining. We have not even thought of how this power can be unleashed, but you know it cannot be stopped. We are now at that inflection point, and we are all underestimating how fast it will happen. We think it will take 10 or 15 years; in India, it will happen now. So, ladies and gentlemen, the future is now.

Manav Subodh

Yeah, the future is now. Thank you so much.

Bhutachandra Shekhar

And if you permit, I just want to add one thing. See the kind of transparent system the Government of India has: school children from a small remote village got recognition, and they got help as well. This is the governance Honourable Prime Minister Ji has created, a transparent and very valuable system for our Gen Z as well as our millennials.

Manav Subodh

And I am being told we have five minutes, so I will make this very quick. My next question is to Darren Farrant, who is an Australian; we were having a nice banter about the World Cup that is going on.

Darren Farrant

Sorry, what is the World Cup? I am not familiar with that. I could draw your attention to the Winter Olympic medal win instead.

Manav Subodh

Darren and I work together. Darren is from the United Nations Information Center, he is based in Delhi, and he is seeing it all happen. Darren, not the World Cup question, which I will take up with you later, but very quickly, because we have limited time: what role do you think India can play, in the Global South and across the globe, in taking AI forward and creating "made in India, for the world"?

Darren Farrant

Well, I think this week is your answer to that. This is the first such summit we have had on AI in the Global South, and there is a very good reason why it is in India: India is a global leader in South-South cooperation, in sharing ideas and moving things forward, and of course, by sheer numbers, it is a microcosm of the world. All the issues you might face with AI are already happening here in India, and not on a small scale. Take the question of languages: that is already an issue in India, and you are solving and dealing with it. So the experiences you have, you can translate to any other country or any other context, because you are so diverse already.

So I think that is why India is always going to be at the front of the pack in getting out there and sharing ideas among the Global South. And for us at the UN, that is so important, because we are really worried about the AI divide, about the people who might get left behind. We really want to see India as a champion of making sure that people, not just in India but around the rest of the world, get their opportunities to benefit from AI, especially in the area of skilling. It is great to hear of all the innovations and all the skilling that will take place, but we do have to remember that some people will lose their jobs, on a large scale.

So what are the solutions we have to get them new skills to be ready for the future? Thanks, thank you.

Manav Subodh

And the question to you, Rishikesh: how do you think we can scale this? You are leading it at NSDC and seeing it happen. What are the one or two things we need to do to really scale this and make sure the talent we are developing is actually employable?

Rishikesh Patankar

Thank you, Manavji. The government's focus is on employability. It is often said that education creates opportunities, but skilling creates employability, and that is also the focus of the current budget announcement: employability has to be given more focus, and employment will come through it. And if you are talking about scale, some of the speakers have already talked about it: whatever we do in India is the world's largest, be it digital literacy, financial literacy or the transactions happening online. Skilling has not been that aspirational so far, but now, with AI and multiple sectors growing, I believe a lot of emphasis is on improving skill sets, and it is lifelong learning.

And with the current government's focus on multiple domains, such as logistics, marine, aeronautics and aviation, there are a lot of opportunities being created in the ecosystem. With the right kind of ITIs, which our Honourable Prime Minister sees as the 21st-century Indian Institutes of Technology, and with the engineering institutions, the stage is all set. We just need to come together. The canvas is vast, and whatever we do will be scaled up. Thank you so much.

Manav Subodh

And I will conclude with one last question to all the panelists, because public-private partnership is so important. Just one last question to wrap up, and I will start with Mr. Pankaj Pandey: what can industry do to collaborate with your department to make this scalable and replicable? That is a question to all of you, and we can wrap up with it.

Pankaj Kumar Pandey

Probably we need the support of industry the most right now. We obviously want to collaborate with all the major companies in the AI field, apart from the startups we have in Bangalore. I think that will give us the edge in terms of being nimble-footed and able to adopt and adapt to the technology faster. So that is what we need right now. Thank you.

Manav Subodh

Thank you so much, Mr. Pandey. Deepakji.

Deepak Bagla

Today, we do not have a choice; it is now an imperative. Everyone has to work together, academia, industry and government, and each one of us is a stakeholder. The Atal Innovation Mission is the core entity responsible for the country's innovation ecosystem, and the biggest challenge I see in my job currently is this: how do I make one higher-learning institution speak to another, one young school speak to another, and industry and government work together? You know what my dream is? If I could take a moment.

Manav Subodh

No, no, please, please.

Deepak Bagla

If I had one dashboard where all my school innovation labs are there, all my incubators are there, the policymakers are there, the mentors are there, all together on that dashboard, speaking to each other, working together. My God, if that happens: unbelievable. Sorry.

Manav Subodh

The power of collaboration. Yeah.

Pankaj Kumar Pandey

One thing which I really love about the Western model is the movement of people across sectors: government moving into academia, academia coming to industry, the person who has worked in industry also teaching in the college and then working in government again. If this kind of movement is allowed, it will really help. The government will get to know what is happening in industry and in academia.

Bhutachandra Shekhar

You know, we got NEP 2020, and I strongly believe that, after the Constitution of India, it is the best document to have come up. It connects five simple dots: education with skills, with industry, with our talent, and with innovation and research. These five dots are getting connected through it. But I have a slightly different thought on what we need to do altogether. Instead of doing a caste census (for God's sake, no one wants to know others' caste), we need to know others' skills. We need to do a skill census in this country, so that we know where we stand.

You need to let me finish. The skill census is much more important, so that we know each and everyone's skills. Let us do a SWOT analysis of it, so that we know people's strengths and weaknesses; we need to work on those weaknesses, make our country better, and interconnect all these dots

Manav Subodh

together. Thank you, thank you. And more power to India, more power to AI. Thank you so much to the panel, and thank you all. A small memento for all the speakers, from the Impact Summit team. As we conclude, one message stands clear: AI leadership is not just about models or compute; it is about people, skills and opportunity. If we invest in youth, we invest in impact. Thank you to all the panelists, and thank you to the audience; you have been wonderful. Have a good rest of the evening and rest of the summit. And a big thank you to our institutional partners, Lloyd Business School and GIMS, whose students have been here engaging with us. Thank you so much.

Related Resources: Knowledge base sources related to the discussion topics (39)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Safin Matthew opened the session, a joint initiative of Meta and the 1 Million for 1 Billion Foundation titled “AI for Skilling, AI for Impact”.”

The knowledge base describes the special session “AI for Skilling, AI for Impact” hosted by Safin Matthew, VP at the 1M1B Foundation, in collaboration with Meta, confirming the opening and partnership details.

Confirmed (high)

“Jayant Chaudhary is the Minister for Skills and School Education in India.”

A knowledge‑base entry lists Jayant Chaudhary as the Minister for Skills and School Education in India, corroborating his role mentioned in the report.

Additional Context (medium)

“The session emphasized that expanding skills pipelines and accelerating AI deployment are essential for India’s AI future.”

The knowledge base notes that investing in skills pipelines and AI deployment is key to India’s sovereign AI infrastructure, providing broader context to the report’s focus on skilling initiatives.

External Sources (122)
S1
From India to the Global South_ Advancing Social Impact with AI — -Darren Farrant- Director United Nations Information Center India and Bhutan
S2
From India to the Global South_ Advancing Social Impact with AI — Darren Farrant argues that India serves as a microcosm of the world, with all the diversity and challenges that exist gl…
S3
Reskilling for the Intelligent Age / Davos 2025 — – Jayant Chaudhary discussed India’s efforts, including the Apprenticeship Act and PM Internship Program. – Jayant Chau…
S4
Driving Indias AI Future Growth Innovation and Impact — Right. Well, thank you, gentlemen, for that absolutely incredible conversation. You know, the takeaway is clear that inv…
S5
From India to the Global South_ Advancing Social Impact with AI — -Jayant Chaudhary- Honourable Minister of State Independent Charge for Skill Development and Entrepreneurship and Minist…
S6
From India to the Global South_ Advancing Social Impact with AI — -Ayurveda GPT Member- Young innovator working on Ayurveda GPT solution
S7
From India to the Global South_ Advancing Social Impact with AI — And all the MSM is here. He’s someone you could reach out to for an interesting solution. Next, I would like to invite A…
S8
From India to the Global South_ Advancing Social Impact with AI — 847 words | 95 words per minute | Duration: 530 seconds And all the MSM is here. He’s someone you could reach out to f…
S9
https://app.faicon.ai/ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — And all the MSM is here. He’s someone you could reach out to for an interesting solution. Next, I would like to invite A…
S10
https://dig.watch/event/india-ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — And all the MSM is here. He’s someone you could reach out to for an interesting solution. Next, I would like to invite A…
S11
From India to the Global South_ Advancing Social Impact with AI — -Manav Subodh- Founder and CEO of 1M1B, panel moderator
S12
From India to the Global South_ Advancing Social Impact with AI — -Bhutachandra Shekhar- CEO Anuvadini and CCO of AICT
S13
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission
S14
From India to the Global South_ Advancing Social Impact with AI — -Deepak Bagla- Mission Director for Atal Innovation Mission
S15
From India to the Global South_ Advancing Social Impact with AI — 474 words | 143 words per minute | Duration: 198 seconds So good evening. My name is Ashish Pratap Singh. I am the CEO…
S16
From India to the Global South_ Advancing Social Impact with AI — So good evening. My name is Ashish Pratap Singh. I am the CEO of Prasima AI. My father runs an MSME business in Lucknow….
S17
https://app.faicon.ai/ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — And I think with the current government’s focus on multiple domains like logistics, maybe marine, aeronautics, aviation,…
S18
From India to the Global South_ Advancing Social Impact with AI — -Aman Jain- Senior Director and Head of Public Policy, India Meta
S19
From India to the Global South_ Advancing Social Impact with AI — Aman Jain supports the minister’s view by providing evidence from the retail sector, where e-commerce represents only 7%…
S20
From India to the Global South_ Advancing Social Impact with AI — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S21
From India to the Global South_ Advancing Social Impact with AI — So are you guys ready to take it to the global level? Thank you so much. That was a fantastic initiative taking Ayurveda…
S22
https://dig.watch/event/india-ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — Good morning. My name is Nandakishor. Hello, everyone. In India, there are… There are around 30 ,000 primary health ce…
S23
From India to the Global South_ Advancing Social Impact with AI — The provided transcript does not contain a verbatim statement from Nandakishor Mukkunnoth, so a specific argument cannot…
S24
From India to the Global South_ Advancing Social Impact with AI — 0 words | 0 words per minute | Duration: 1 second Good morning. My name is Nandakishor. Hello, everyone. In India, th…
S25
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — The Prime Minister advocates for the responsible development and use of artificial intelligence. This argument stresses …
S26
Leveraging AI4All_ Pathways to Inclusion — It can be used with low internet and can be used offline as well. Second, and you will hear from Augustia soon, I really…
S27
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Its utility for educational and routine activities has been validated by a teenager from India, underlining the app’s re…
S28
Seeing, moving, living: AI’s promise for accessible technology — Compare this toEnvision Glasses, which uses a similar concept but targets professional and institutional markets. TheHom…
S29
Software.gov — Attracting tech talent into government was also identified as a key factor in implementing GovTech. The panelists mentio…
S30
ACRONYMS — To develop a stronger cyber security sector or industry, a public private partnership shall be established to develop …
S31
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — ## Youth Inclusion and Grassroots Innovation
S32
WS #119 AI for Multilingual Inclusion — Jesse Nathan Kalange: All right. Thank you very much. Very nice question, because that was the next question that wa…
S33
Comprehensive Report: Preventing Jobless Growth in the Age of AI — And I think a lot of those reasons is that to get the full benefit of AI, it’s not about an AI applied to a task, but it…
S34
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — It is changing how people get jobs and how they get hired for jobs. So an example of that is entrepreneurs often now are…
S35
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Lack of infrastructure, skills, compute access, and data access hinder policy effectiveness This optimistic context ali…
S36
Building Inclusive Societies with AI — -Inclusive perspective: Strong focus on reaching the most marginalized populations, not just the easily accessible segme…
S37
Fireside Conversation: 01 — A significant discussion focused on language accessibility for inclusive AI deployment. Amodei explained that while AI m…
S38
Building Inclusive Societies with AI — Inclusive perspective: Strong focus on reaching the most marginalized populations, not just the easily accessible segmen…
S39
Bridging the Digital Divide: Achieving Universal and Meaningful Connectivity (ITU) — The analysis argues for a multi-stakeholder approach in policy-making to effectively address these issues. It is suggest…
S40
WSIS 2018 – High-level policy statements: concluding session — Mr Pierre Mirlesse,Hewlett Packard Enterprise – EMEA, facilitated the Moderated High-Level Policy Session 9 – ICT applic…
S41
Future-Ready Education: Enhancing Accessibility &amp; Building | IGF 2023 — Another significant aspect highlighted is the role of multi-stakeholder engagement in the Internet Governance Forum (IGF…
S42
Smart Regulation Rightsizing Governance for the AI Revolution — Data sharing initiatives between government, academia, and industry within countries provide models for broader internat…
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Rather than viewing India’s complexity as a challenge, Raghavan presented it as the country’s greatest competitive advan…
S44
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Amish points out that most global AI models operate in English, making Indian‑language capability crucial for the countr…
S45
Why science metters in global AI governance — Bouverot illustrates how varying scientific predictions about AI’s impact on employment lead to fundamentally different …
S46
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S47
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S48
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Explanation:There was unexpected consensus that fear about AI is widespread across different age groups and demographics…
S49
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure i…
S50
How Multilingual AI Bridges the Gap to Inclusive Access — Impact:This comment added geopolitical depth to the discussion while simultaneously complicating it by acknowledging tha…
S51
WS #119 AI for Multilingual Inclusion — Promoting Language Equity and Inclusion Public services should provide materials and support in multiple languages to p…
S52
Multilingual Internet: a Key Catalyst for Access &amp; Inclusion | IGF 2023 Town Hall #75 — In conclusion, establishing a fully multilingual internet is crucial for achieving digital inclusion, language justice, …
S53
From India to the Global South_ Advancing Social Impact with AI — The minister argues that when new technology emerges, early adopters gain advantages and the overall economic pie expand…
S54
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — In light of technological advancements, there is a need to revisit and reframe industrial and employment policies. The a…
S55
Artificial intelligence — The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate th…
S56
Skilling and Education in AI — Because from a human perspective, I will use a piece of technology if I can trust it. Now, there are many reasons why pe…
S57
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Good afternoon, and let me say how delighted I am to be a part of this wonderful summit, the Impact AI Summit that India…
S58
Comprehensive Report: Preventing Jobless Growth in the Age of AI — – Erik Brynjolfsson- Ravi Kumar S. – Valdis Dombrovskis- Laura D’Andrea Tyson- Elizabeth Shuler Economic | Future of w…
S59
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biologi…
S60
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Additionally,public-private partnershipsare essential for scaling sustainability initiatives. Companies invest in on-sit…
S61
From India to the Global South_ Advancing Social Impact with AI — -Public-Private Partnership for Scale: Emphasis on collaboration between government, industry, and academia to create em…
S62
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Development | Legal and regulatory Educational data is often not linked to financial data due to government silos. Priv…
S63
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — In conclusion, the analysis underscores the need to prioritize data sharing across borders for socio-economic developmen…
S64
AI and Data Driving India’s Energy Transformation for Climate Solutions — Coordination and Collaboration at Scale: Multiple speakers highlighted the critical need for coordination across stakeho…
S65
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — Explanation:Both speakers view India’s massive scale as an advantage for AI implementation rather than a challenge, sugg…
S66
The Global Power Shift India’s Rise in AI & Semiconductors — Summary:The speakers demonstrate strong consensus on key strategic approaches: the critical importance of public-private…
S67
Indias AI Leap Policy to Practice with AIP2 — Summary:The main areas of disagreement center around governance approaches (regulatory vs. flexible frameworks), investm…
S68
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Factors such as restricted access to …
S69
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S70
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Examples include children with disabilities being provided with non-inclusive educational materials, political participa…
S71
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — ## Youth Inclusion and Grassroots Innovation
S72
WS #119 AI for Multilingual Inclusion — Jesse Nathan Kalange: All right. Thank you very much. Very nice question, because that was the next question that wa…
S73
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — Sharing is learning with the rest of the world. One, an AI that is independent. From large global AI to empowered, scala…
S74
From India to the Global South_ Advancing Social Impact with AI — Safin reports on the progress of the URI initiative led by META in partnership with India AI, AICT, and 1M1B, which aims…
S75
Comprehensive Report: Preventing Jobless Growth in the Age of AI — And I think a lot of those reasons is that to get the full benefit of AI, it’s not about an AI applied to a task, but it…
S76
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S77
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — “And so I think there’s tens of millions of jobs.”[55]. “With AI and the tools that we have, that work can move to a who…
S78
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — And so out of a graduating class of 100, you’d have 30 job creators or people that have created jobs for 30 people and 7…
S79
AI for Good Impact Awards — – **Accessibility and inclusion**: Solutions focused on serving underserved populations including rural communities, ref…
S80
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Calls for contributions from the private sector and startups to facilitate the inclusion of persons with disabilities. …
S81
Fireside Conversation: 01 — A significant discussion focused on language accessibility for inclusive AI deployment. Amodei explained that while AI m…
S82
WS #6 Bridging Digital Gaps in Agriculture &amp; trade Transformation — Jimson Olufuye: Yes, so maybe after this quick statement I’ll have to join another session. What I want to, I’ve mention…
S83
Leveraging AI4All_ Pathways to Inclusion — Bhansali argues that good technology alone does not automatically include people, and adding AI doesn’t solve inclusion …
S84
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — In addition to their financial support, Microsoft embraces corporate social responsibility (CSR) and actively tracks the…
S85
From India to the Global South_ Advancing Social Impact with AI — -Public-Private Partnership for Scale: Emphasis on collaboration between government, industry, and academia to create em…
S86
Bridging the Digital Divide: Achieving Universal and Meaningful Connectivity (ITU) — The analysis argues for a multi-stakeholder approach in policy-making to effectively address these issues. It is suggest…
S87
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Her Excellency Ms. Enkelejda Mucaj presented Albania’s achievement of offering 95% of public services exclusively online…
S88
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Rather than viewing India’s complexity as a challenge, Raghavan presented it as the country’s greatest competitive advan…
S89
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Amish points out that most global AI models operate in English, making Indian‑language capability crucial for the countr…
S90
Open Mic &amp; Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S91
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S92
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S93
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S94
Open Microphone Taking Stock — The tone was largely positive and appreciative, with many speakers thanking the hosts and expressing enthusiasm for the …
S95
Panel Discussion Inclusion Innovation &amp; the Future of AI — The discussion maintained a constructive and collaborative tone throughout, with panelists building on each other’s poin…
S96
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S97
Towards Parity in Power / DAVOS 2025 — The tone was primarily serious and analytical, with panelists providing data, personal experiences, and policy recommend…
S98
Redrawing the Geography of Jobs / Davos 2025 — The tone was primarily analytical and solution-oriented, with panelists offering data, insights and recommendations base…
S99
WS #213 Hold On, We’re Going South: beyond GDC — The tone was generally serious and analytical, with speakers providing expert perspectives on complex policy issues. The…
S100
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S101
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Summary:All three speakers emphasize that successful semiconductor workforce development requires close collaboration be…
S102
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S103
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission Atal Innovation Mission’s Decade of Impact Thank…
S104
Opening of the session — Focusing on practical, action-oriented measures that can benefit both developed and developing countries
S105
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S106
Keynote-Nikesh Arora — Overall Tone:The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S107
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S108
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S109
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S110
Upskilling for the AI era: Education’s next revolution — ## Programme Development and Current Impact Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morni…
S111
Skilling and Education in AI — So, good morning. AI is an opportunity and an enabler. So let me begin with a few words about NSDC itself. So this is a …
S112
Keynote Adresses at India AI Impact Summit 2026 — Thank you, Director Kratzios. Thank you for the opportunity to return to this stage and to mark this important occasion …
S113
Inclusive AI Starts with People Not Just Algorithms — Education, upskilling, and future skills for youth
S114
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Moderator – Yves Poullet:Thanks Gabriela for this marvellous introduction. I think this introduction will help us to fix…
S115
AI for Good Impact Initiative — Equipping young people with the resources to initiate and expand their ventures feeds into broader goals like promoting …
S116
IndoGerman AI Collaboration Driving Economic Development and Soc — “Germany is investing in so -called AI lighthouses, which foster AI innovations for climate and environmental protection…
S117
How to make AI governance fit for purpose? — Anne Bouverot: Thank you so much, Gabriela. Thank you for this. I’m lucky to go first because by the time everyone has s…
S118
Meta confirms the launch of Llama 3 — Meta has confirmed its imminentreleaseof Llama 3, the next iteration of its large language model set to power generative…
S119
Revolutionising neurosurgery: AI tool assists in brain surgeries — The Center for Artificial Intelligence and Robotics (CAIR), a Hong Kong-based research center affiliated with the Chines…
S120
Focus shifts to improving AI models in 2024: size, data, and applications. — Interest in artificial intelligence (AI) surged in 2023 after the launch of Open AI’s Chat GPT, the internet’s most reno…
S121
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S122
IGF 2024 Global Youth Summit — Margaret Nyambura Ndung’u: Thank you, Madam Moderator. Good morning, good afternoon, and good evening to all of the di…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Safin Matthew
1 argument | 95 words per minute | 847 words | 530 seconds
Argument 1
Large‑scale AI skilling initiative targeting 100,000 youth, with 15,000 already trained
EXPLANATION
Safin described a national AI‑skilling programme that aims to empower 100,000 young people on generative AI and large language models. He reported that about 15,000 youths have already completed the training within the first two months of the initiative.
EVIDENCE
Safin noted that the URI initiative, led by Meta in partnership with India AI, AICT and 1M1B, commits to empower 100,000 youth on generative AI and large language models, and that roughly 15,000 youths have already been skilled in the first two months after launch [12-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI summit session “AI for Skilling, AI for Impact” hosted by Safin highlighted a national AI-skilling programme aiming to empower 100,000 youth, with early reports of roughly 15,000 participants completing training in the first two months [S2][S1].
MAJOR DISCUSSION POINT
Large‑scale AI skilling initiative targeting 100,000 youth, with 15,000 already trained
AGREED WITH
Aman Jain, Pankaj Kumar Pandey, Jayant Chaudhary, Deepak Bagla
DISAGREED WITH
Jayant Chaudhary, Deepak Bagla, Pankaj Kumar Pandey
Aman Jain
3 arguments, 173 words per minute, 995 words, 344 seconds
Argument 1
Meta’s AI coach and inclusive skilling programs for under‑represented groups
EXPLANATION
Aman highlighted Meta’s development of an AI coach that supports multilingual learning for groups that are traditionally under‑served. He also referenced the Ray‑Ban “Be My Eyes” feature that assists visually impaired users, illustrating Meta’s broader inclusive AI strategy.
EVIDENCE
Aman asked how AI skilling could reach under-represented groups such as people with disabilities or those in remote areas [68-70], then described Meta’s AI coach built for multilingual support [98-99] and cited the Ray-Ban “Be My Eyes” feature that helps visually impaired users [82-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aman emphasized Meta’s multilingual AI coach and inclusive education approach, noting industry-partner collaboration to reach under-served communities, as described in the summit discussion [S2][S1].
MAJOR DISCUSSION POINT
Meta’s AI coach and inclusive skilling programs for under‑represented groups
AGREED WITH
Jayant Chaudhary, Bhutachandra Shekhar, Ayurveda GPT Member
Argument 2
Prime Minister’s view that AI generates opportunities rather than eliminating jobs; advice to embrace AI‑driven roles
EXPLANATION
Aman referred to the Prime Minister’s statement that AI will create new opportunities instead of taking away jobs. He asked the minister for thoughts and advice on how to prepare the workforce for AI‑driven roles.
EVIDENCE
Aman quoted the Prime Minister’s remark that the notion of AI taking away jobs is misplaced and that technology creates new opportunities, then posed a question to the minister about advice for embracing AI-driven roles [38-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Prime Minister’s stance that AI creates new opportunities and should be developed responsibly is recorded in the summit remarks, reinforcing the argument that AI will generate jobs rather than displace them [S25][S1].
MAJOR DISCUSSION POINT
Prime Minister’s view that AI generates opportunities rather than eliminating jobs; advice to embrace AI‑driven roles
AGREED WITH
Jayant Chaudhary, Darren Farrant
DISAGREED WITH
Jayant Chaudhary, Darren Farrant
Argument 3
Meta’s Ray‑Ban “Be My Eyes” feature and multilingual AI coach to aid visually impaired and language‑diverse users
EXPLANATION
Aman described Meta’s Ray‑Ban glasses feature called “Be My Eyes,” which assists people with visual impairments. He also reiterated the work on a multilingual AI coach that can serve users across India’s many languages.
EVIDENCE
Aman mentioned that Meta’s Ray-Ban glasses include a “Be My Eyes” feature designed for visually impaired users [82-84] and noted the development of a multilingual AI coach focused on adding more Indian languages [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Meta’s Ray-Ban glasses include a “Be My Eyes” feature for visually impaired users and operate offline, supporting low-internet contexts; the AI coach adds multilingual support, illustrating inclusive AI solutions [S26][S27][S28].
MAJOR DISCUSSION POINT
Meta’s Ray‑Ban “Be My Eyes” feature and multilingual AI coach to aid visually impaired and language‑diverse users
AGREED WITH
Jayant Chaudhary, Manav Subodh
Manav Subodh
1 argument, 162 words per minute, 1052 words, 389 seconds
Argument 1
Proposal for a national skill census and stronger public‑private partnership to map and develop talent
EXPLANATION
Manav called for a systematic national skill census to identify existing talent and gaps, coupled with deeper public‑private collaboration to align training with industry needs. He argued that such mapping would enable more effective scaling of AI skilling programmes.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion on attracting tech talent into government stresses the need for public-private partnerships and systematic talent mapping, aligning with the proposed national skill census [S29][S2].
MAJOR DISCUSSION POINT
Proposal for a national skill census and stronger public‑private partnership to map and develop talent
AGREED WITH
Aman Jain, Jayant Chaudhary
Pankaj Kumar Pandey
2 arguments, 168 words per minute, 582 words, 206 seconds
Argument 1
Government push for data sharing across departments to enable targeted AI‑driven services
EXPLANATION
Pankaj emphasized that effective AI‑driven public services require different government departments to share and interlink their data. He gave the example of weather, energy and agricultural data needing to communicate for coordinated service delivery.
EVIDENCE
Pankaj explained that for targeted AI services, data such as weather forecasts must be linked with energy supply information and cropping patterns, requiring departments to collaborate and share datasets [221-222]; an illustrative sketch of such a join follows this argument.
MAJOR DISCUSSION POINT
Government push for data sharing across departments to enable targeted AI‑driven services
AGREED WITH
Deepak Bagla
DISAGREED WITH
Safin Matthew, Jayant Chaudhary, Deepak Bagla
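To make the data-linking idea concrete, here is a purely illustrative Python sketch; the file names and columns are hypothetical, not any department’s actual schema. Once datasets expose shared keys such as district and date, they can be joined and served to downstream AI services.

import pandas as pd

# Hypothetical departmental extracts that share district/date keys.
weather = pd.read_csv("weather_forecast.csv")   # district, date, rainfall_mm
energy = pd.read_csv("energy_supply.csv")       # district, date, feeder_hours
crops = pd.read_csv("cropping_pattern.csv")     # district, season, main_crop

# With compatible keys the datasets can "talk to each other":
merged = weather.merge(energy, on=["district", "date"]).merge(crops, on="district")
# A downstream AI service could now schedule irrigation-feeder power around
# forecast rainfall for each district's dominant crop.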
Argument 2
Encouraging cross‑sector mobility of personnel to foster knowledge exchange between government, academia, and industry
EXPLANATION
Pankaj advocated for the movement of professionals across government, academia and industry to break silos and promote mutual learning. He suggested that such mobility would help align policies, curricula and industry needs.
EVIDENCE
Pankaj highlighted the importance of people moving between sectors-government officials, academic staff and industry professionals-to share knowledge and keep each sector informed about the others’ developments [418-422].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit highlighted cross-sector movement of professionals as essential for knowledge transfer and coordinated AI policy implementation [S2][S1].
MAJOR DISCUSSION POINT
Encouraging cross‑sector mobility of personnel to foster knowledge exchange between government, academia, and industry
Deepak Bagla
2 arguments, 131 words per minute, 979 words, 446 seconds
Argument 1
Atal Innovation Mission’s school labs, hackathons and a coordinated policy/dashboard to accelerate AI learning
EXPLANATION
Deepak outlined the Atal Innovation Mission’s extensive network of school labs and its record‑breaking hackathons that have generated millions of prototypes. He also proposed a unified dashboard to connect schools, incubators, mentors and policymakers for coordinated AI education.
EVIDENCE
Deepak described the Atal Innovation Mission’s 10,000 school labs (5,000 city, 5,500 village) and a hackathon that produced over 2.5 million prototypes, earning a Guinness World Record [284-304]; he later suggested a single dashboard that would bring together labs, incubators, mentors and policymakers for real-time collaboration [411-414].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Atal Innovation Mission operates 10,000 tinkering labs (5,500 villages, 4,500 cities) and organized a record-breaking hackathon generating millions of prototypes, demonstrating large-scale AI learning infrastructure [S13][S1].
MAJOR DISCUSSION POINT
Atal Innovation Mission’s school labs, hackathons and a coordinated policy/dashboard to accelerate AI learning
DISAGREED WITH
Safin Matthew, Jayant Chaudhary, Pankaj Kumar Pandey
Argument 2
Proposal for a unified dashboard linking schools, incubators, mentors, and policymakers to streamline collaboration
EXPLANATION
Deepak proposed creating a single digital platform that aggregates all innovation labs, incubators, mentors and policy makers, enabling them to interact and coordinate activities efficiently. He argued that such a dashboard would dramatically improve collaboration across the ecosystem.
EVIDENCE
Deepak suggested a dashboard where school innovation labs, incubators, mentors and policymakers are all visible and can communicate, describing it as an “unbelievable” enabler for collaboration [411-414].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A collaborative dashboard that aggregates schools, incubators, mentors and policymakers was proposed to unleash innovation potential across the ecosystem [S2].
MAJOR DISCUSSION POINT
Proposal for a unified dashboard linking schools, incubators, mentors, and policymakers to streamline collaboration
AGREED WITH
Pankaj Kumar Pandey
Jayant Chaudhary
4 arguments, 173 words per minute, 2065 words, 712 seconds
Argument 1
Early adoption of AI creates new job categories and expands the economic “pie”
EXPLANATION
Jayant argued that societies that adopt AI early become first‑movers, which enlarges the overall economic opportunity (“pie”) and generates entirely new job categories. He stressed that India’s current low adoption level means the country stands to gain substantially by scaling AI.
EVIDENCE
Jayant stated that early adopters gain an advantageous position and increase the size of the economic pie, noting that AI is not yet adopted at scale in India and that new jobs will emerge as the technology matures [41-44][55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The minister’s remarks at the summit note that early AI adopters gain a strategic advantage and enlarge the overall economic pie, echoing the view that technology historically creates new job categories [S2][S1].
MAJOR DISCUSSION POINT
Early adoption of AI creates new job categories and expands the economic “pie”
AGREED WITH
Aman Jain, Darren Farrant
DISAGREED WITH
Aman Jain, Darren Farrant
Argument 2
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalized learning
EXPLANATION
Jayant highlighted AI applications that support students with special needs, emphasizing early screening, teacher sensitisation and personalised learning pathways. He also mentioned AI‑enabled solutions for remote regions such as the Northeast through the Skill India Assistant.
EVIDENCE
Jayant described the need to identify special-needs students early, equip teachers with AI tools for personalised journeys, and cited the Skill India Assistant as an example of AI reaching remote geographies [71-78][79-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI4All initiatives and Meta’s Ray-Ban “Be My Eyes” feature illustrate AI applications that support visually impaired and low-literacy users, providing inclusive learning tools for remote and disabled populations [S26][S27].
MAJOR DISCUSSION POINT
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalized learning
AGREED WITH
Aman Jain, Bhutachandra Shekhar, Ayurveda GPT Member
Argument 3
Call for industry to move beyond closed hiring networks and co‑design curricula with ITIs and higher‑education institutes
EXPLANATION
Jayant urged Indian enterprises to abandon closed, trust‑based hiring practices and instead collaborate with academia and government to design skill‑based curricula. He advocated for open recruitment based on competencies rather than traditional degrees.
EVIDENCE
Jayant pointed out that corporations still hire within closed networks, calling for industry partners to help create new courses and open up recruitment criteria based on skills and confidence rather than solely on qualifications [100-108].
MAJOR DISCUSSION POINT
Call for industry to move beyond closed hiring networks and co‑design curricula with ITIs and higher‑education institutes
AGREED WITH
Safin Matthew, Aman Jain, Pankaj Kumar Pandey, Deepak Bagla
Argument 4
PM Setu funding of ₹60,000 crore for ITIs, creation of clustered institutes with industry governance
EXPLANATION
Jayant outlined the PM Setu scheme, which allocates ₹60,000 crore to upgrade over 3,000 ITIs, forming clusters of five institutions each with industry partners governing curriculum and training. He presented this as a way to modernise vocational education and align it with local economies.
EVIDENCE
Jayant explained that the PM Setu initiative will invest ₹60,000 crore in ITIs, creating clusters of five institutes each governed by industry partners, with new courses and trainers designed to meet local MSME needs [112-124].
MAJOR DISCUSSION POINT
PM Setu funding of ₹60,000 crore for ITIs, creation of clustered institutes with industry governance
Darren Farrant
1 argument, 175 words per minute, 312 words, 106 seconds
Argument 1
Need for reskilling and inclusive policies to prevent large‑scale job displacement, especially in the Global South
EXPLANATION
Darren warned that while AI can drive growth, it also risks displacing large numbers of workers, particularly in the Global South. He called for comprehensive reskilling programmes and inclusive policies to ensure that no one is left behind.
EVIDENCE
Darren noted that AI can widen the divide, causing job losses, and emphasized the UN’s concern about the AI divide, urging the development of reskilling solutions to give people new skills for the future [360-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Davos 2025 “Reskilling for the Intelligent Age” report stresses the urgency of reskilling programmes and inclusive policies to mitigate AI-driven job displacement, particularly in developing regions, aligning with the argument [S3][S25].
MAJOR DISCUSSION POINT
Need for reskilling and inclusive policies to prevent large‑scale job displacement, especially in the Global South
AGREED WITH
Jayant Chaudhary, Aman Jain
DISAGREED WITH
Aman Jain, Jayant Chaudhary
Bhutachandra Shekhar
1 argument, 187 words per minute, 1515 words, 485 seconds
Argument 1
Translation of skill‑related books into 22 Indian languages and audio/visual formats to reach non‑English speakers
EXPLANATION
Bhutachandra explained that the Anuvadini initiative has translated skill‑related manuals into 22 Indian languages and produced audio‑based versions, making technical content accessible to non‑English‑speaking workers such as plumbers and painters.
EVIDENCE
He stated that all skill-related books have been translated into 22 Indian languages and that audio-based books have been created to help low-literacy users understand the content [250-256]; a sketch of this translate-then-voice pipeline follows this argument.
MAJOR DISCUSSION POINT
Translation of skill‑related books into 22 Indian languages and audio/visual formats to reach non‑English speakers
AGREED WITH
Aman Jain, Jayant Chaudhary, Ayurveda GPT Member
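A translate-then-voice pipeline of the kind described can be sketched as follows. This is a minimal illustration, not the initiative’s actual implementation: it assumes the transformers and gTTS Python libraries, with the public Helsinki-NLP/opus-mt-en-hi model standing in for whichever English-to-Indic translation models are really used.

from gtts import gTTS
from transformers import pipeline

# Stand-in English-to-Hindi model; the real effort targets 22 Indian languages.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")

def to_audio_lesson(english_text: str, out_path: str = "lesson_hi.mp3") -> None:
    """Translate a skill-book passage and save it as speech."""
    hindi = translate(english_text)[0]["translation_text"]
    gTTS(hindi, lang="hi").save(out_path)  # audio a low-literacy learner can play

to_audio_lesson("Clean the wall surface before applying the primer.")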
Ayurveda GPT Member
2 arguments, 200 words per minute, 275 words, 82 seconds
Argument 1
Ayurveda‑specific LLM that answers queries directly from manuscripts, delivering source‑cited, multilingual responses
EXPLANATION
The Ayurveda GPT representative demonstrated a language model that can be queried to retrieve answers straight from Ayurvedic manuscripts, providing source citations and operating in multiple languages. This enables scholars and practitioners to access authentic knowledge instantly.
EVIDENCE
The member showed that the model can be queried to obtain answers directly from the manuscript, returning the source and supporting multilingual interaction [178-183].
MAJOR DISCUSSION POINT
Ayurveda‑specific LLM that answers queries directly from manuscripts, delivering source‑cited, multilingual responses
AGREED WITH
Aman Jain, Jayant Chaudhary, Bhutachandra Shekhar
Argument 2
Domain‑specific AI model for Ayurveda manuscripts, providing real‑time, source‑backed answers
EXPLANATION
The same demonstration highlighted the model’s ability to deliver real‑time answers with citations from original Ayurvedic texts, emphasizing its utility for research and education across language barriers.
EVIDENCE
The member reiterated that the AI model can answer queries in real time, citing the manuscript source and supporting multiple languages [178-183]; a sketch of this retrieval pattern follows below.
MAJOR DISCUSSION POINT
Domain‑specific AI model for Ayurveda manuscripts, providing real‑time, source‑backed answers
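The behaviour described here, answers drawn from manuscript text with the source returned alongside, is characteristic of retrieval-augmented generation (RAG). The sketch below illustrates that pattern only; it is not the Ayurveda GPT team’s implementation. It assumes the sentence-transformers library, placeholder manuscript chunks, and a hypothetical answer_with_llm() helper for the generation step.

from sentence_transformers import SentenceTransformer, util

# Placeholder manuscript chunks; a real index would hold thousands of passages.
passages = [
    {"id": "Charaka Samhita 1.2", "text": "Text of the passage..."},
    {"id": "Sushruta Samhita 4.7", "text": "Text of the passage..."},
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode([p["text"] for p in passages], convert_to_tensor=True)

def cited_answer(query: str, k: int = 2) -> str:
    """Retrieve the k closest passages and answer strictly from them."""
    hits = util.semantic_search(model.encode(query, convert_to_tensor=True),
                                corpus_emb, top_k=k)[0]
    context = "\n".join(f"[{passages[h['corpus_id']]['id']}] "
                        f"{passages[h['corpus_id']]['text']}" for h in hits)
    # answer_with_llm() is hypothetical: it should be prompted to answer only
    # from the supplied context and to cite the bracketed source ids.
    return answer_with_llm(query=query, context=context)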
Ashish Pratap Singh
2 arguments, 143 words per minute, 474 words, 198 seconds
Argument 1
Demonstration of AI‑driven productivity gains for MSMEs, illustrating industry‑led AI adoption benefits
EXPLANATION
Ashish presented an autonomous AI agent that automates routine MSME tasks such as tender tracking, CRM queries and calendar management, delivering significant time savings and high compliance. He quantified the impact as over 15,000 minutes saved per month and near‑zero productivity loss.
EVIDENCE
He explained that the AI agent uses Meta’s Scout and Maverick models to automate tasks, saving more than 15,000 minutes monthly with 99.9% compliance and reducing a 35% productivity loss to almost zero [164-168].
MAJOR DISCUSSION POINT
Demonstration of AI‑driven productivity gains for MSMEs, illustrating industry‑led AI adoption benefits
Argument 2
Autonomous AI agent that automates MSME tasks (tender tracking, CRM, calendar) achieving >15,000 minutes saved monthly
EXPLANATION
Ashish reiterated the capabilities of his autonomous AI agent, emphasizing its role in streamlining MSME operations and delivering measurable efficiency gains. The solution showcases how AI can directly improve business processes for small enterprises.
EVIDENCE
He restated that the autonomous AI agent automates tasks like tender extraction and CRM querying, saving over 15,000 minutes per month with 99.9% compliance and eliminating a 35% productivity loss [164-168]; a sketch of such an agent loop follows below.
MAJOR DISCUSSION POINT
Autonomous AI agent that automates MSME tasks (tender tracking, CRM, calendar) achieving >15,000 minutes saved monthly
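A rough sketch of how such a task-automation loop can be structured is shown below. The helpers fetch_new_tenders(), llm_extract() and crm_create_lead() are hypothetical placeholders, not Prasima AI’s actual interfaces; the extraction model could be any capable LLM, such as the Llama models mentioned above.

import json
import time

def run_agent(poll_seconds: int = 3600) -> None:
    """Poll for tender notices, extract structured fields, file CRM leads."""
    while True:
        for notice in fetch_new_tenders():  # hypothetical portal poll
            # llm_extract() (hypothetical) prompts the model to return JSON
            # containing the requested fields from the raw notice text.
            fields = json.loads(llm_extract(
                notice, fields=["title", "deadline", "estimated_value"]))
            crm_create_lead(fields)  # hypothetical CRM write
        time.sleep(poll_seconds)  # every automated pass replaces manual minutes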
Nandakishor Mukkunnoth
1 argument, 171 words per minute, 252 words, 87 seconds
Argument 1
Offline AI diagnostic app for cardiac care in primary health centres, reducing mortality in rural areas
EXPLANATION
Nandakishor introduced “AI for Cardio,” an offline desktop application that analyses ECG images and blood reports using a fine‑tuned Llama 3.1 model. The tool eliminates a 30‑40 minute diagnostic delay, has been deployed in over 100 primary health centres and has helped more than 1,000 patients.
EVIDENCE
He explained that the app runs completely offline, allowing practitioners to upload ECG images and blood reports for diagnosis, reducing a 30-40 minute delay that previously increased mortality; it has been implemented in 100+ PHCs serving 1,000+ patients [24-33] (see the offline-inference sketch below).
MAJOR DISCUSSION POINT
Offline AI diagnostic app for cardiac care in primary health centres, reducing mortality in rural areas
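The offline-inference pattern such an app depends on can be sketched as follows, assuming the llama-cpp-python library and a hypothetical local weights file cardio-finetune.gguf; the real application also analyses ECG images, which this text-only fragment omits.

from llama_cpp import Llama

# Loads quantised weights from local disk; no network access is required.
llm = Llama(model_path="cardio-finetune.gguf", n_ctx=2048)

def triage(blood_report_text: str) -> str:
    """Flag findings in a blood report that may need urgent cardiac referral."""
    prompt = ("You are assisting a primary health centre.\n"
              f"Blood report:\n{blood_report_text}\n"
              "List findings needing urgent cardiology referral:")
    out = llm(prompt, max_tokens=256)
    return out["choices"][0]["text"]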
Agreements
Agreement Points
Inclusive AI skilling for under‑represented groups and people with disabilities
Speakers: Aman Jain, Jayant Chaudhary, Manav Subodh
Meta’s AI coach and inclusive skilling programs for under‑represented groups
Meta’s Ray‑Ban “Be My Eyes” feature and multilingual AI coach to aid visually impaired and language‑diverse users
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalized learning
Proposal for a national skill census and stronger public‑private partnership to map and develop talent
All three speakers emphasized the need for AI-driven skilling solutions that reach marginalized and remote populations, highlighting multilingual tools, accessibility features for the visually impaired, and systematic talent mapping to ensure no one is left behind [68-70][98-99][82-84][71-78][424-433].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for inclusive AI that addresses accessibility barriers, as highlighted in discussions on AI for digital accessibility for persons with disabilities [S70] and broader inclusive AI initiatives emphasizing cross-cultural dialogue and resource constraints [S68]. Gender-inclusive policymaking also stresses user involvement to ensure equitable skill development [S69].
Public‑private partnership and industry collaboration are essential for scaling AI initiatives
Speakers: Safin Matthew, Aman Jain, Pankaj Kumar Pandey, Jayant Chaudhary, Deepak Bagla
Large‑scale AI skilling initiative targeting 100,000 youth, with 15,000 already trained
Meta’s AI coach and inclusive skilling programs for under‑represented groups
Government push for data sharing across departments to enable targeted AI‑driven services
Call for industry to move beyond closed hiring networks and co‑design curricula with ITIs and higher‑education institutes
Proposal for a unified dashboard linking schools, incubators, mentors, and policymakers to streamline collaboration
Speakers from government, industry and civil society agreed that coordinated partnerships, open hiring practices and shared platforms are critical to expand AI skilling and deployment at scale [12-13][147-148][221-222][100-108][411-414].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on public-private partnerships mirrors the consensus in multiple forums that such collaboration is essential for scaling AI, as noted in the IGF analysis of partnership models for sustainability [S60], the India-to-Global-South briefing on PPP for skill creation [S61], and the Global Power Shift report underscoring PPP as a strategic pillar for AI diffusion [S66].
Early adoption of AI expands the economic ‘pie’ and creates new job opportunities, but reskilling is needed
Speakers: Jayant Chaudhary, Aman Jain, Darren Farrant
Early adoption of AI creates new job categories and expands the economic “pie”
Prime Minister’s view that AI generates opportunities rather than eliminating jobs; advice to embrace AI‑driven roles
Need for reskilling and inclusive policies to prevent large‑scale job displacement, especially in the Global South
All three highlighted that AI will generate new employment opportunities if adopted early, while also stressing the necessity of reskilling programmes to mitigate displacement risks [41-44][38-40][360-368].
POLICY CONTEXT (KNOWLEDGE BASE)
The view that early AI adoption expands the economic pie and creates new jobs while requiring reskilling reflects arguments made by the Indian minister on early adopters gaining advantage and new job categories [S53] and the broader literature contrasting job-creation versus job-loss scenarios that call for reskilling policies [S45][S54].
Data sharing and coordinated platforms are needed across government and ecosystem actors
Speakers: Pankaj Kumar Pandey, Deepak Bagla
Government push for data sharing across departments to enable targeted AI‑driven services
Proposal for a unified dashboard linking schools, incubators, mentors, and policymakers to streamline collaboration
Both emphasized interoperable data and a common digital hub to allow different agencies and partners to work together efficiently [221-222][411-414].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for coordinated data sharing echo recommendations from multi-stakeholder governance studies that highlight siloed government data and the need for shared platforms across public and private actors [S62], as well as cross-border data flow frameworks stressing trust and value creation [S63] and sector-wide coordination for AI-driven energy solutions [S64].
Multilingual AI and language accessibility are crucial for inclusive impact
Speakers: Aman Jain, Jayant Chaudhary, Bhutachandra Shekhar, Ayurveda GPT Member
Meta’s AI coach and inclusive skilling programs for under‑represented groups
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalized learning
Translation of skill‑related books into 22 Indian languages and audio/visual formats to reach non‑English speakers
Ayurveda‑specific LLM that answers queries directly from manuscripts, delivering source‑cited, multilingual responses
The speakers converged on the importance of delivering AI solutions in multiple Indian languages and formats, from skill-book translations to multilingual AI assistants, to ensure broad accessibility [98-99][71-78][250-256][178-183].
POLICY CONTEXT (KNOWLEDGE BASE)
The priority of multilingual AI aligns with IGF workshops advocating language equity and a multilingual internet as foundations for digital inclusion [S51][S52], and with specific initiatives to provide AI-generated content in multiple languages for broader accessibility [S49][S50].
Similar Viewpoints
Both highlighted AI‑driven tools that improve accessibility for disabled and linguistically diverse users, stressing multilingual support and personalized learning pathways [98-99][71-78].
Speakers: Aman Jain, Jayant Chaudhary
Meta’s AI coach and inclusive skilling programs for under‑represented groups
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalized learning
Both called for interoperable data systems and a common digital dashboard to coordinate actions across government, academia and industry [221-222][411-414].
Speakers: Pankaj Kumar Pandey, Deepak Bagla
Government push for data sharing across departments to enable targeted AI‑driven services
Proposal for a unified dashboard linking schools, incubators, mentors, and policymakers to streamline collaboration
Both recognized that AI will reshape the labour market, creating new roles while also requiring systematic reskilling to avoid widespread job loss [41-44][360-368].
Speakers: Jayant Chaudhary, Darren Farrant
Early adoption of AI creates new job categories and expands the economic “pie”
Need for reskilling and inclusive policies to prevent large‑scale job displacement, especially in the Global South
Both stressed the necessity of large‑scale, data‑driven skilling programmes to build a skilled workforce for the AI era [12-13][424-433].
Speakers: Safin Matthew, Manav Subodh
Large‑scale AI skilling initiative targeting 100,000 youth, with 15,000 already trained
Proposal for a national skill census and stronger public‑private partnership to map and develop talent
Both emphasized multilingual content and tools as essential for inclusive AI education and services [250-256][98-99].
Speakers: Bhutachandra Shekhar, Aman Jain
Translation of skill‑related books into 22 Indian languages and audio/visual formats to reach non‑English speakers
Meta’s AI coach and inclusive skilling programs for under‑represented groups
Unexpected Consensus
Domain‑specific AI solutions for sectoral challenges (health diagnostics and traditional knowledge)
Speakers: Nandakishor Mukkunnoth, Ayurveda GPT Member
Offline AI diagnostic app for cardiac care in primary health centres, reducing mortality in rural areas
Ayurveda‑specific LLM that answers queries directly from manuscripts, delivering source‑cited, multilingual responses
Despite coming from very different domains-modern medical diagnostics and ancient Ayurvedic scholarship-both innovators demonstrated AI applications tailored to specific sectoral needs, highlighting a shared belief in AI’s capacity to address niche, high-impact problems [24-33][178-183].
POLICY CONTEXT (KNOWLEDGE BASE)
Focusing on sector-specific AI for health diagnostics resonates with expert commentary on the convergence of biological and artificial intelligence shaping future healthcare and food security [S59], reinforcing the need for targeted AI applications in critical domains.
Overall Assessment

The discussion showed strong convergence on several fronts: inclusive, multilingual AI skilling; the necessity of public‑private‑academic partnerships; the economic upside of early AI adoption coupled with a clear call for reskilling; and the need for interoperable data platforms. These points were echoed across government ministers, industry leaders, and innovators, indicating a cohesive vision for scaling AI responsibly in India.

High consensus – the repeated alignment among diverse stakeholders suggests that policy and investment frameworks supporting inclusive skilling, data sharing, and collaborative governance are likely to gain broad political and sectoral support, accelerating AI‑driven development while mitigating risks.

Differences
Different Viewpoints
Impact of AI on employment – optimism versus caution and the need for reskilling
Speakers: Aman Jain, Jayant Chaudhary, Darren Farrant
Prime Minister’s view that AI generates opportunities rather than eliminating jobs; advice to embrace AI‑driven roles
Early adoption of AI creates new job categories and expands the economic “pie”
Need for reskilling and inclusive policies to prevent large‑scale job displacement, especially in the Global South
Aman cites the Prime Minister’s statement that AI will create new opportunities and asks the minister for advice on embracing AI-driven roles [38-40]. Jayant agrees that AI will generate new jobs but warns that the transition may make work harder and raises concerns about preserving humanity [41-56]. Darren stresses that AI could cause massive job losses and calls for comprehensive reskilling programmes to avoid an AI-divide [360-368]. The three speakers share the premise that AI will change work, but they differ on how positive the net effect will be and on the urgency of remedial policies.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate between optimism and caution mirrors scholarly analyses of divergent scientific forecasts that drive contrasting policy responses such as universal basic income versus reskilling programs [S45], as well as documented tensions between optimistic growth narratives and risk-aware public-sector adaptation [S46][S47][S48].
Preferred mechanism for scaling AI skilling across India
Speakers: Safin Matthew, Jayant Chaudhary, Deepak Bagla, Pankaj Kumar Pandey
Large‑scale AI skilling initiative targeting 100,000 youth, with 15,000 already trained
AI tools for people with disabilities and remote‑area users; teacher sensitisation and personalised learning
Atal Innovation Mission’s school labs, hackathons and a coordinated policy/dashboard to accelerate AI learning
Government push for data sharing across departments to enable targeted AI‑driven services
Safin outlines a centrally managed national skilling programme aiming to train 100,000 youths, already reaching 15,000 participants [12-13]. Jayant proposes a school-centric approach that uses AI for early screening of special-needs students, teacher tools and the Skill India Assistant to reach remote regions [71-78]. Deepak describes a grassroots network of 10,000 tinkering labs, record-breaking hackathons and suggests a unified dashboard to link labs, incubators and policymakers [284-304][411-414]. Pankaj argues that effective AI services require inter-departmental data sharing, urging a cultural shift among government officials to collaborate on datasets [221-222]. The speakers agree on the need to scale AI skilling but disagree on whether the primary vehicle should be a top-down training programme, school-based AI tools, a lab-driven ecosystem, or data-integration reforms.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on the optimal scaling mechanism reference proposals for industry-led training institutes and curriculum co-design under public-private partnership frameworks [S61], alongside recommendations for AI-centric skill curricula covering prompt engineering and data literacy [S58], and noted disagreements on investment focus between skills and institutions [S67].
Unexpected Differences
Effectiveness of simple translation of skill‑related content versus AI‑driven multimodal learning
Speakers: Bhutachandra Shekhar, Jayant Chaudhary, Aman Jain
Translation of skill‑related books into 22 Indian languages and audio/visual formats to reach non‑English speakers
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalised learning
Meta’s AI coach and inclusive skilling programs for under‑represented groups
Bhutachandra describes a large-scale translation effort of skill manuals into 22 languages and the creation of audio-based books, but illustrates a painter’s inability to use a translated book, questioning the practicality of mere translation [250-256][262-266]. Jayant, while supporting AI for disabled and remote users, focuses on teacher-led screening and personalised AI tools rather than static translated texts [71-78]. Aman promotes a multilingual AI coach that can dynamically adapt content, implying a more interactive solution than static translations [98-99]. The disagreement is unexpected because all parties aim to improve access, yet they differ on whether static translation suffices or whether AI-driven, interactive tools are required.
POLICY CONTEXT (KNOWLEDGE BASE)
The critique of simple translation and the push for AI-driven multimodal learning is supported by analyses that highlight the limits of literal translation and advocate for code-switching and AI-enabled language models to achieve true linguistic equity [S50], as well as multilingual inclusion efforts that go beyond static translations [S49].
Overall Assessment

The participants share a common vision of leveraging AI for national development, inclusive skilling and economic growth. However, they diverge on the optimal pathways: optimism about AI‑generated jobs versus caution about displacement; centralised training programmes versus school‑based labs, teacher tools and data‑sharing reforms; and differing views on how industry should collaborate with government and academia. These moderate to high levels of disagreement reflect a healthy debate but also signal the need for coordinated policy frameworks to align implementation strategies.

Moderate to high – while consensus exists on overarching goals, the lack of agreement on concrete implementation mechanisms could impede swift, unified action unless reconciled through joint planning.

Partial Agreements
All three speakers endorse the goal of reaching under‑served populations with AI‑enabled learning. Aman highlights Meta’s multilingual AI coach and the Ray‑Ban “Be My Eyes” feature for visually impaired users [68-70][98-99][82-84]. Jayant stresses AI‑driven screening, teacher tools and the Skill India Assistant to support students with special needs and remote geographies [71-78]. Deepak points to the extensive network of school labs and hackathons that bring AI education to villages and cities, proposing a dashboard for coordinated outreach [284-304][411-414]. While they share the inclusive objective, they differ on the primary delivery channel – corporate AI coach, teacher‑centric tools, or school‑lab ecosystems.
Speakers: Aman Jain, Jayant Chaudhary, Deepak Bagla
Meta’s AI coach and inclusive skilling programs for under‑represented groups
AI tools for people with disabilities and remote‑area users; teacher‑focused screening and personalised learning
Atal Innovation Mission’s school labs, hackathons and a coordinated policy/dashboard to accelerate AI learning
These speakers concur that industry must engage more deeply with the public sector to scale AI initiatives. Jayant urges companies to abandon closed hiring networks and help design new vocational courses [100-108]. Pankaj advocates for personnel moving across government, academia and industry to share knowledge and align policies [418-422]. Deepak suggests a single digital dashboard that would make schools, incubators and policymakers visible to each other, facilitating collaboration [411-414]. The common aim is stronger public‑private partnership, but the suggested mechanisms – hiring reforms, staff mobility, or a shared platform – differ.
Speakers: Jayant Chaudhary, Pankaj Kumar Pandey, Deepak Bagla
Call for industry to move beyond closed hiring networks and co‑design curricula with ITIs and higher‑education institutes
Encouraging cross‑sector mobility of personnel to foster knowledge exchange between government, academia, and industry
Proposal for a unified dashboard linking schools, incubators, mentors, and policymakers to streamline collaboration
Takeaways
Key takeaways
A large‑scale AI skilling initiative (target 100,000 youth) has been launched, with 15,000 already trained, and is being scaled by Meta, 1M1B and government partners.
AI is viewed as a job‑creating technology that can expand the economic “pie”; early adoption and first‑mover advantage are emphasized.
Inclusion and accessibility are central – multilingual AI, tools for people with disabilities, and offline solutions for remote/rural areas are being demonstrated.
Public‑private partnership is essential: industry is urged to co‑design curricula, open hiring networks, provide trainers, and govern ITI clusters under the PM Setu fund.
Government data silos must be broken; cross‑departmental data sharing (e.g., weather, energy, agriculture) is needed to enable AI‑driven services.
Showcase of youth‑led AI solutions (AI for Cardio, Prasima AI, Ayurveda GPT) illustrates concrete impact on health, MSME productivity, and cultural knowledge.
A national skill census and stronger mapping of talent are proposed to guide skilling efforts.
Lifelong learning and employability are highlighted as the ultimate goals of AI‑enabled skilling.
Resolutions and action items
Scale the 100,000‑youth AI skilling program; Meta’s AI Coach and Ray‑Ban “Be My Eyes” features will be expanded to under‑represented groups.
Implement a national skill census to map existing skills and gaps.
Deploy the Skill India Digital Hub with additional data layers and an open‑stack for researchers and industry.
Utilise PM Setu funding (₹60,000 crore) to upgrade ITIs, create clustered institutes, and bring industry partners into governance and curriculum design.
Industry to move beyond closed hiring networks, co‑design training modules, and provide subject‑matter trainers for ITIs and higher‑education institutes.
Create a unified dashboard linking school innovation labs, incubators, mentors, and policymakers (proposed by the Atal Innovation Mission).
Continue cross‑departmental data collaboration within government (e.g., linking weather, energy, agriculture data).
Support and replicate youth‑led AI pilots (AI for Cardio, Prasima AI, Ayurveda GPT) across more regions.
Unresolved issues
Specific mechanisms for reskilling workers whose jobs may be displaced by AI were not detailed.
Operational plan, funding and governance structure for the proposed unified dashboard remain unclear.
Concrete steps for teacher sensitization and early identification of special‑needs students need further definition.
How multilingual AI models will be developed, validated, and deployed across all 22 Indian languages was not fully addressed.
Metrics and evaluation frameworks to measure the impact of AI skilling on employability and economic outcomes were not established.
Policy details for AI acceleration beyond skilling (e.g., regulation, data privacy) were not discussed.
Suggested compromises
Focus on locally‑relevant AI models rather than competing for frontier models, balancing cutting‑edge research with practical applicability.
Industry is asked to open hiring networks while still maintaining trust and quality standards, creating a middle ground between closed recruitment and fully open markets.
Emphasis on preserving human values and humane work‑life balance while adopting AI, acknowledging that productivity gains must not make life harder.
Combining AI‑driven automation with human‑centric tools (e.g., audio‑based skill books, visual‑arts library) to serve both literate and less‑literate users.
Thought Provoking Comments
“He did say that, look, AI taking away jobs, the very notion is kind of misplaced… I want to know what are your thoughts on this, because that’s obviously top of mind for folks… are we going to end up losing jobs?”
The question directly challenges the common fear that AI will cause massive unemployment, linking it to the Prime Minister’s remarks and prompting a policy‑level response.
It shifted the conversation from showcasing innovations to a broader socio‑economic debate. It prompted Minister Jayant Chaudhary to elaborate on job creation, trust, and the need for new skill categories, steering the dialogue toward workforce implications of AI.
Speaker: Aman Jain
“I think it comes down – when any new tech comes in, if as a society we adapt to it early… the size of the pie will go up… AI will create new jobs… we need to move away from white‑collar/blue‑collar terminology because opportunities will blur.”
He reframes the AI‑jobs narrative by emphasizing first‑mover advantage, trust building, and the emergence of entirely new roles, while also critiquing outdated occupational labels.
His answer deepened the earlier question, introducing the concept of ‘contextualisation’ jobs and highlighting inclusivity. It set the stage for later remarks about industry hiring practices and the need for new training ecosystems.
Speaker: Jayant Chaudhary
“My one ask really is that enterprises are created… we need to move away from closed networks… we need industry partners… PM Setu is allocating 60,000 crore for ITIs… industry should help design courses and provide trainers.”
This is a concrete policy proposal linking massive government funding (PM Setu) with industry participation, calling for open hiring and curriculum co‑creation.
It pivoted the discussion toward actionable public‑private partnership models, influencing subsequent speakers (e.g., Pankaj Pandey and Deepak Bagla) to speak about data collaboration and ecosystem dashboards.
Speaker: Jayant Chaudhary
“One thing which has to change is the mentality of the people working in the government – we have to talk to each other, make data sets talk to each other… departments need to collaborate on weather, energy, cropping patterns, etc.”
He identifies data silos as a systemic barrier and proposes a cultural shift toward inter‑departmental data sharing, a foundational step for AI‑driven governance.
His observation broadened the conversation from skilling to governance, reinforcing the earlier call for cross‑sector collaboration and encouraging the panel to consider how AI can be embedded in public services.
Speaker: Pankaj Kumar Pandey
“Most skill‑related books have images without description… a painter told me you can’t hold a book while holding a paint dabba… we are creating audio‑based books and translating skill content into 22 Indian languages.”
He highlights a practical accessibility gap in vocational training materials and proposes AI‑powered multilingual audio/visual solutions, grounding the AI discussion in everyday realities of informal workers.
This anecdote shifted the tone toward grassroots inclusivity, prompting further remarks about language barriers (Jayant’s multilingual model) and reinforcing the need for AI that serves under‑served skill groups.
Speaker: Bhutachandra Shekhar
“We ran a 96‑hour hackathon that generated over 2.5 million prototypes – the world’s largest. What we need now is a single dashboard where school labs, incubators, mentors, and policy‑makers can interact.”
He provides a vivid illustration of the scale of grassroots innovation and proposes a unifying digital platform to coordinate the ecosystem, moving from isolated pilots to a national innovation infrastructure.
His vision crystallized the earlier calls for collaboration into a tangible tool, influencing the concluding remarks about dashboards and the importance of a connected innovation network.
Speaker: Deepak Bagla
“India is a global leader in South‑South cooperation… the challenges you face here – multilingualism, data diversity – are the same that other developing nations will face. India’s experience can be a template for the world.”
He situates India’s AI journey within a global context, framing the summit’s outcomes as models for the Global South and emphasizing the responsibility to avoid an AI divide.
This comment broadened the scope from national to international, reinforcing the earlier theme of “AI for all” and encouraging participants to think about scalability and exportability of Indian solutions.
Speaker: Darren Farrant
Overall Assessment

The discussion was steered by a handful of incisive remarks that moved it from a showcase of individual projects to a strategic dialogue on policy, inclusivity, and ecosystem design. Aman Jain’s provocation about job loss triggered a nuanced debate on new employment categories, which Jayant Chaudhary expanded with concrete calls for open hiring and industry‑government collaboration. Pankaj Pandey’s focus on data silos and Bhutachandra Shekhar’s illustration of language‑access barriers highlighted systemic obstacles, while Deepak Bagla’s vision of a unified dashboard offered a practical solution. Darren Farrant’s global‑south framing tied these national initiatives to a broader international responsibility. Collectively, these comments reshaped the conversation, deepening analysis, introducing actionable proposals, and aligning the summit’s narrative around inclusive, scalable AI skilling.

Follow-up Questions
How can AI skilling and its benefits be extended to traditionally under‑represented groups such as people with disabilities, remote regions and the Northeast?
Ensuring inclusive access to AI tools and training is critical for equitable impact across all sections of society.
Speaker: Aman Jain
What concrete steps or a clarion call can industry take to partner with government and academia to accelerate AI skilling initiatives?
Industry collaboration is essential for scaling programs, providing real‑world relevance and creating employment pathways.
Speaker: Aman Jain
How can innovators from hinterland or remote areas participate in AI development and produce solutions that are globally relevant?
Empowering grassroots innovators ensures diverse problem‑solving and widens the talent pool for global competitiveness.
Speaker: Manav Subodh (to Deepak Bagla)
What role can India play in the Global South to create ‘Made‑in‑India’ AI solutions for the world?
Positioning India as a leader in AI for the Global South can drive knowledge sharing, reduce AI divide and open new markets.
Speaker: Manav Subodh (to Darren Farrant)
What are the key actions needed to scale NSDC’s AI skilling programmes and guarantee that the talent produced is employable?
Scaling requires clear pathways to jobs, industry alignment and mechanisms for lifelong learning.
Speaker: Manav Subodh (to Rishikesh Patankar)
What can industry do to collaborate with the Karnataka government to make AI initiatives scalable and replicable?
Industry input can help design curricula, provide mentorship and create sustainable deployment models across states.
Speaker: Manav Subodh (to Pankaj Kumar Pandey)
Research needed on early identification and screening tools for students with special needs using AI.
Early detection can enable customized interventions, reducing dropout rates and improving educational outcomes.
Speaker: Jayant Chaudhary
Research needed on developing and evaluating multilingual AI models for Indian languages (e.g., AI Coach, Sarvam edge model).
Overcoming language barriers is essential for mass adoption of AI tools across India’s linguistic diversity.
Speaker: Jayant Chaudhary; Aman Jain
Research needed on teacher sensitization and AI‑enabled tools for inclusive classrooms.
Equipping teachers with AI‑driven insights can personalize learning and support weaker students effectively.
Speaker: Jayant Chaudhary
Research needed on the impact of PM Setu funding on ITI clusters and industry‑partnered training outcomes.
Understanding how the large investment translates into skill creation will guide future policy and scaling.
Speaker: Jayant Chaudhary
Research needed on creating an open educational data stack and its governance mechanisms.
A shared data infrastructure can accelerate innovation, research and policy‑making across education stakeholders.
Speaker: Jayant Chaudhary
Validation and scalability study of the offline AI‑for‑Cardio system in primary health centres nationwide.
Ensuring diagnostic accuracy and operational feasibility at scale is vital for rural healthcare impact.
Speaker: Nandakishor Mukkunnoth
Assessment of ROI, adoption barriers and long‑term sustainability of autonomous AI agents for MSMEs (Prasima AI).
Understanding economic benefits and challenges will help refine the solution for broader MSME uptake.
Speaker: Ashish Pratap Singh
Evaluation of the Ayurveda GPT model’s accuracy, usability and impact for manuscript‑based queries.
Rigorous testing is needed to confirm that the model reliably supports Ayurvedic scholarship and public use.
Speaker: Ayurveda GPT Member
Effectiveness study of Meta Ray‑Ban ‘Be My Eyes’ glasses for visually impaired users.
Measuring real‑world benefits will guide further development and deployment for accessibility.
Speaker: Aman Jain
Design and pilot of a unified dashboard linking school innovation labs, incubators, mentors and policymakers.
A single platform could streamline collaboration, resource sharing and monitoring of innovation ecosystems.
Speaker: Deepak Bagla
Conduct a national skill census to map existing skills, strengths and gaps across the population.
A comprehensive skill inventory would inform targeted skilling programmes and policy decisions.
Speaker: Manav Subodh
Formulation of a policy framework for AI acceleration and innovation labs in hinterland regions.
Clear policies are needed to support grassroots innovation, funding, and integration with national AI strategy.
Speaker: Deepak Bagla
Research on AI‑driven job displacement, reskilling pathways and social safety nets.
Anticipating workforce shifts and designing effective upskilling programs are crucial to mitigate unemployment risks.
Speaker: Aman Jain; Darren Farrant
Study on how AI can blur the line between technical and non‑technical jobs, enabling non‑programmers to create apps and products.
Understanding this shift can help redesign curricula and democratize innovation.
Speaker: Jayant Chaudhary
Evaluation of the visual‑arts library and AI‑powered translation of skill‑related books into 22 Indian languages for low‑literacy workers.
Assessing effectiveness will determine if audio‑visual AI tools truly improve skill acquisition among artisans.
Speaker: Bhutachandra Shekhar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

German-Asian AI Partnerships: Driving Talent, Innovation and the Future

German-Asian AI Partnerships: Driving Talent, Innovation and the Future

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, convened by the German Ministry of Economic Cooperation and Development and the Indian Ministry of Education, examined how global digital transformation and AI can be harnessed to support small and medium-sized enterprises (SMEs) in Germany, India and other economies [1-4]. Moderator Dr. Kusumita Arora framed the discussion around the need for partnerships that link talent development, policy, infrastructure and inclusive workforces in the age of AI [21-28].


Dr. Bärbel Kofler highlighted public anxiety about AI-driven job losses and stressed that governments must act as reliable partners to ensure technology serves both large firms and SMEs, thereby closing the existing power and access gaps [33-42]. She cited the AI Living Lab launched with the University of Mumbai as a concrete example of integrating AI curricula with real-world projects from small media enterprises to give students practical exposure [46-53]. Govind Jaiswal described India’s policy response, noting the National Education Policy 2020 and the establishment of research parks and AI modules across engineering curricula, which aim to train millions of students and align education with industry needs [85-99]. Augustus Azariah pointed out a skills shortage in industry, observing that many graduates present AI-generated CVs and that faculty lack training, prompting his company’s initiative to certify thousands of teachers in tools such as Copilot through hackathons and to tap talent in tier-2 and tier-3 cities [115-147].


Jan Noether emphasized that AI applications span healthcare, agriculture, water management and energy, and announced a dual-degree master’s programme with a German university that will be partly delivered in India, illustrating cross-border academic cooperation [155-164]. Arthur Rapp warned that reliance on non-European AI platforms creates dependency, bias and data-sovereignty risks, and cited studies showing students increasingly use AI for career decisions, underscoring the need for responsible AI education and governance [170-186]. Returning to policy, Dr. Kofler argued that international cooperation must address the “power gap” in AI creation and use, embed inclusive standards, and translate commitments into concrete outcomes such as the Mumbai Living Lab and the AI Academia-Industry Innovation Partnership in Asia [214-244]. She also noted that achieving the Sustainable Development Goals requires AI to be made accessible to SMEs, with governments providing frameworks for privacy, trust and rapid up-skilling of workforces [248-252].


Jan Noether later stressed that German and Indian SMEs can benefit from joint sandboxes and collaborative projects that combine German efficiency with Indian creativity, enabling low-risk experimentation for sector-wide solutions [268-276]. The moderator introduced the AI Academia-Industry Innovation Partnership, a GIZ-implemented programme that brings together universities, businesses and governments across Germany and Asia to create living labs where students work on real industry challenges and firms access future-ready talent [287-304][312-322]. The discussion concluded that coordinated government, academic and industry actions are essential to bridge skill gaps, ensure responsible AI deployment and unlock economic growth for SMEs worldwide [214-252][293-304].


Keypoints


Major discussion points


International cooperation is essential to close the AI “power gap” and make the technology inclusive for SMEs and the broader workforce.


Kofler stresses that the Germany-India partnership aims to make AI “applicable, useful for everybody” and to “overcome that power gap” for small and medium-sized enterprises [39-44][46-53]. She later reiterates that international cooperation must “overcome the gaps” in creation and use of AI and be backed by concrete commitments [221-227][232-238].


Education and skills development – from primary schools to university and vocational training – is a cornerstone of the AI agenda.


The moderator asks how higher-education and vocational systems can be re-oriented [60-63]; Jaiswal outlines India’s policy steps (NEP 2020, new research parks, dual-education model, industry-academy collaboration) to embed AI across curricula [84-95][96-99]; Jan Noether mentions a joint German-Indian master’s programme [162-164]; Augustus Azariah describes large-scale faculty up-skilling (e.g., 1,000 faculty certified in Copilot) and hackathons to bridge the skill gap [136-140][141-147]; Arthur Rapp adds that responsible AI education must address bias, data protection, and the growing reliance of students on generative tools [170-186][190-196].


Industry-academia partnerships and “living labs” are presented as concrete mechanisms to translate AI research into real-world solutions and jobs.


Kofler cites the AI Living Lab launched with the University of Mumbai and Leipzig [46-53][241-247]; Jan Noether highlights a dual-university master’s programme and the need for cross-border sandboxes for SMEs [162-166][275-276]; Augustus notes industry-driven faculty certification and hackathons that connect students with real business problems [136-140]; the moderator and video narrator introduce the AI Academia-Industry Innovation Partnership in Asia and describe living labs as structured spaces where universities and companies co-create [312-321][322-329][293-304].


Addressing concerns about job displacement and ensuring responsible, unbiased AI deployment.


Both Kofler and Jaiswal acknowledge public fear of job loss and the need to treat those feelings “very carefully” [36-38][67-78]; Kofler later stresses responsible AI, data bias, language exclusion, and the necessity of inclusive frameworks [216-220][221-224]; Augustus points out the proliferation of AI-generated CVs and the risk of superficial skills, calling for human oversight and originality [115-124].


Sector-specific opportunities where AI can drive sustainability and productivity, especially in healthcare, agriculture, and energy.


Jan Noether identifies healthcare data analytics, remote patient care, digital imaging, water-scarcity management, and energy sustainability as prime AI application areas [155-161][162-164]; Arthur Rapp adds examples of AI’s impact on agriculture and broader economic growth [206-207].


Overall purpose / goal of the discussion


The panel was convened to explore how Germany, India, and broader Asian partners can jointly shape an inclusive, human-centred AI future of work by (i) bridging the technology-access gap for SMEs, (ii) building AI-ready talent through coordinated education and training, (iii) establishing concrete industry-academia collaborations such as living labs, and (iv) ensuring responsible, bias-aware AI deployment that supports sustainable development goals.


Overall tone and its evolution


The conversation begins with a formal, diplomatic tone, introducing panelists and the strategic priority of AI cooperation. As participants speak, the tone becomes optimistic and solution-focused, highlighting concrete policies, programmes, and success stories. When addressing job-loss anxieties and AI bias, the tone shifts briefly to empathetic and cautionary, acknowledging public concerns. The session closes on a hopeful, forward-looking tone, emphasizing commitment to turn “intent to commitment” into tangible outcomes. Throughout, the discourse remains constructive and collaborative, with occasional repetitions and filler but no adversarial moments.


Speakers

Arthur Rapp – Representative of DAAD (German Academic Exchange Service); expertise in academic research and international education programs [S1]


Mr. Jan Noether – Director General, Indo-German Chamber of Commerce; focus on Indo-German economic and business cooperation [S2]


Dr. Kusumita Arora – Moderator of the panel discussion (role indicated in transcript)


Dr. Augustus Azariah – HR leader (South) for ASSOCHAM; works for Kyndryl (IBM spinoff) in infrastructure management and industry-academia collaboration [S8]


Moderator – Session moderator (no specific title or affiliation provided in transcript)


Video Narrator – Voice-over for the introductory video (no additional role details)


Dr. Bärbel Kofler – Parliamentary State Secretary, Federal Ministry of Economic Cooperation and Development (Germany); expertise in international cooperation and sustainable development [S15]


Mr. Govind Jaiswal – Joint Secretary, Ministry of Education, Government of India; expertise in higher education and skills development [S18]


Additional speakers:


Mr. Gobind Jaswal – Variant spelling of Mr. Govind Jaiswal, Joint Secretary, Ministry of Education, Government of India, used in the opening introductions (same person)


Mr. J. J. Stahl – Referred to for remarks later in the session (specific role not detailed)


Mr. Ross – Form of address used by the moderator when turning to the DAAD representative; most likely a mistranscription of Mr. Arthur Rapp


Full session report: Comprehensive analysis and detailed insights

1. Opening & Context – The moderator opened the session by stating that the strategic priority has shifted from merely developing artificial intelligence to ensuring its effective deployment, especially for SMEs that are the backbone of both the German and Indian economies [1-4]. He framed the discussion as a joint effort of governments, industry, academia and development partners to shape an inclusive, human-centred future of work [5-8].


2. Session Framing – Dr Kusumita Arora thanked GIZ for convening the forum, outlined the day’s focus on “people in the age of AI”, and emphasized the need for partnerships that link talent development, policy, infrastructure and an inclusive workforce [21-28].


3. Policy Perspective (Germany) – Dr Bärbel Kofler acknowledged the legitimacy of public anxiety about AI-induced unemployment and argued that governments must act as reliable partners to close the “power gap” between large corporations and SMEs [30-38][39-44]. She outlined regulatory priorities – open-source data, climate-friendly computing, privacy and decent-work standards [214-222]. Kofler highlighted the AI Living Lab launched jointly by the University of Mumbai, the University of Leipzig and small and medium enterprises, which embeds AI modules into university curricula and connects students with real-world projects for SMEs [46-53]. She also referred to a study that estimates roughly 1.3 million AI-related jobs, noting uncertainty about the exact source (World Bank or WTO) [236-238].


4. Education & Skills (India) – Govind Jaiswal compared the AI transition to the historic rollout of electricity and asserted that the shift can be “seamless” if the workforce is properly reskilled [65-78]. He described India’s policy actions: the National Education Policy 2020 mandates that at least 50% of university courses include skill-oriented modules [84-95]; research parks have expanded from three to nine institutions, adding AI components to civil and mechanical engineering programmes [96-99]; the German dual-education/apprenticeship model is being adapted for India, with mandatory internships and apprenticeship-embedded degree programmes [81-82]; and, with roughly 40 million students enrolled in higher education, he expects this cohort to be equipped with AI competencies in the coming years [161-166].


5. Industry Viewpoint – Augustus Azariah (Kyndryl, IBM spinoff) warned that many fresh graduates submit AI-generated CVs, creating “chaos and confusion” for recruiters [115-124]. His company runs faculty-certification programmes (e.g., Microsoft Copilot); a recent hackathon in Mangaluru involved over 18,000 students and produced more than 1,000 certified faculty [136-147]. An endowment fund has been created to support faculty-led AI research and patenting [138-140]. Azariah also highlighted untapped talent in India’s tier-2 and tier-3 cities, citing a blind-selection hiring exercise for a high-salary fresher role in which only four of the ten finalists came from IITs, with most of the rest drawn from tier-2 and tier-3 institutions [145-147].


6. Indo-German Business Perspective – Jan Noether, Director General of the Indo-German Chamber of Commerce, identified AI opportunities in healthcare data analytics, remote patient monitoring, digital and satellite imaging for agriculture, water-resource management, energy sustainability and digital skills development [155-161]. He announced the dual-university master’s programme with Baden-Württemberg (≈ 2/3 in India, 1/3 in Germany) [162-166] and called for “sandboxes” where young talent from both countries can co-create low-risk, SME-focused AI solutions [274-276][267-276].


7. Research & Data Sovereignty (DAAD) – Arthur Rapp (DAAD) warned that reliance on non-European AI platforms poses risks of bias, loss of research freedom and potential data leakage [170-180]. He cited studies showing students using generative AI for career decisions and research drafting, underscoring the need for responsible AI governance that protects privacy, intellectual property and linguistic inclusivity [181-186][190-196].


8. International Cooperation & Responsible AI – Dr Kofler returned to the theme of responsible AI, reiterating the need to close the creator-user power gap, align AI deployment with the Sustainable Development Goals, and honour the concrete commitments discussed at the Hamburg Sustainability Conference [214-222][236-238].


9. Complementary Collaboration – Govind Jaiswal added that India and Germany have different “patterns” (societal, industrial, maturity) and that collaboration should be complementary, with each side filling the other’s gaps.


10. SME Integration – Jan Noether stressed that SMEs constitute over 98% of businesses in both countries; successful integration will require joint experience, “sandboxes”, and clear financial/operational benefits for risk-averse German firms [267-276].


11. Early-Stage AI Literacy – Augustus Azariah advocated introducing AI education as early as the elementary level, citing the EU’s GDPR experience as a model for building trust in data protection and urging the sharing of German vocational-training expertise with Indian schools [279-282].


12. Program Launch & Video Summary – The moderator introduced the AI Academia-Industry Innovation Partnership in Asia (commissioned by BMZ, implemented by GIZ), which links German, Indian and Vietnamese universities, businesses and governments through “living labs” [288-304]. The video narrator explained that living labs provide hands-on, cross-border projects, improve employability and give firms low-risk access to emerging talent [318-322][326-329].


13. Closing & Follow-up – The moderator thanked the panel, noted that Dr Kofler and Govind Jaiswal would stay for further discussion, and announced a follow-up session featuring Mr J. J. Stahl.



Overall Consensus & Open Issues

Consensus: All participants agreed on the need for SME-focused, inclusive AI deployment; large-scale up-skilling through Living Labs, dual-degree programmes, national policies and faculty certification; responsible-AI governance; and structured industry-academia-government partnerships (including sandboxes) to translate research into jobs [46-53][155-166][214-222][274-276][318-322].


Moderate Disagreement:


1. Leadership of Upskilling – Kofler favoured a government-coordinated Living Lab, whereas Azariah argued for industry-led faculty certification and hackathons.


2. Handling Job-Loss Anxiety – Kofler called for careful treatment of employment fears; Jaiswal portrayed the transition as seamless.


3. Platform Sovereignty vs. Broad Inclusivity – Rapp stressed European data-sovereignty, while Kofler focused on inclusive AI without explicit ownership concerns.


4. Optimal Entry Point for AI Education – Azariah promoted elementary-level AI literacy; Arora and Kofler emphasized higher education and vocational training.


Unresolved Issues:


a. Mechanisms to monitor AI deployment’s impact on net job creation and work quality.


b. Sustainable financing models for Living Labs, faculty endowments and sandboxes.


c. Standardisation of AI curricula across linguistic and institutional contexts, especially at primary/secondary levels.


d. Cross-border data-protection and sovereignty frameworks to mitigate bias and platform dependence.


e. Clear metrics, timelines and evaluation criteria for scaling SME integration and measuring productivity gains.


f. Ongoing industry involvement in curriculum design, assessment and practical training beyond pilot phases [54-58][170-186][279-282].


Key Take-aways:


1. Deploy AI responsibly and inclusively, mitigating job-loss anxieties and ensuring decent work.


2. Bridge the power and access gap between large corporations and SMEs, and between the Global North and South.


3. Prioritise education and skill development – Living Labs, dual-degree programmes, national policies, faculty certification and early-stage AI literacy – to build a talent pool of tens of millions.


4. Institutionalise structured industry-academia-government partnerships, sandboxes and joint degree programmes that align curricula with real-world challenges.


5. Leverage international cooperation, exemplified by the Germany-India-Vietnam AI Academia-Industry Innovation Partnership, to share resources, standards and commitments.


6. Address AI bias, data sovereignty and platform dependence through robust responsible-AI governance frameworks [214-222][170-186][46-53][162-166][236-238].


Overall, the panel agreed that coordinated, inclusive AI up-skilling and SME-focused innovation, underpinned by responsible-AI governance, are essential for Germany, India and the broader Asian region.

Session transcript: Complete transcript of the session
Moderator

global digital transformation for partners such as Germany and India. The strategic priority is no longer solely the development of artificial intelligence, but very much its effective deployment. And particularly for small and medium-sized enterprises, which in Germany and in India and other countries are the backbones of our economies, access to skills, innovation ecosystems and trusted partnerships will determine whether AI becomes a driver of opportunity for all. Today’s panel will explore how cooperation amongst governments, industry, academia and development partners can address these challenges and shape a future of work that is innovative, inclusive and human-centered. It is now my great pleasure to introduce our distinguished panelists. We are deeply honored by the presence of Dr.

Bärbel Kofler, Parliamentary State Secretary to the Federal Ministry of Economic Cooperation and Development, whose leadership underscores Germany’s strong commitment to international cooperation and sustainable development. Ms. Kofler, please come up. There’s no signs. You can choose in the middle. Next panelist, I would really warmly welcome Mr. Gobind Jaswal, Joint Secretary at the Ministry of Education of the Government of India. He plays a very pivotal role in advancing higher education and skills development here in India. Also, it’s my great pleasure to welcome Jan Noether, the Director General of the Indo-German Chamber of Commerce, reflecting the strength of Indo-German economic and business cooperation. Please, Jan.

Today’s discussion will be moderated by

Dr. Kusumita Arora

Good morning. Good morning, everybody. Thank you to GIZ for this very special and important session. So we have been hearing, I think, about all aspects of AI in the last few days. And today… Close up? Okay. Okay, I will be, wait one minute. Okay. So in this session, we want to talk about people in the age of AI and what partnerships are going to look like for talent, which is going to drive the innovation, and also completely the future of work as we know it now. This is a forum where we are going to discuss policy intent, what is required for scaling, for startup needs, for infrastructure and other pragmatic issues which are going to drive the conversation ahead.

This includes and always has to include the people, personal growth, their dreams and their particular circumstances through which they will connect to AI and to each other. I would request all our panelists for their comments on the different issues. To Dr. Bärbel Kofler, I will ask you to just explain your views. Thank you. Cooperation, public policy support between AI partnerships in industry, academia, as well as technology providers, how will this drive productivity and drive jobs? Because people are scared of the jobs.

Dr. Bärbel Kofler

Well, you’re quite right. Also, with your last remark, thank you for the question, and good morning, everybody. Start with that. Good morning, everybody, to all of you. Yes, people get afraid of that there might be a loss of jobs, and it’s also an issue then, and maybe we come later to that issue also, on how decent the work jobs are they can require. I think we have to take those feelings very carefully because there’s reason for that. I think we dive a little bit deeper in that in the next round. What we are doing, and I will start a little bit with a general remark, what we are doing is First thing is with our cooperation, we try to be a reliable partner in a very uncertain world.

We all know how power shifts around the globe are taking place, how the international order is redrawn somehow. And I think what we need, especially if it’s coming to technological transformation, which is really having a big impact on everybody’s life, we need to make sure that that technology, included in all the other changes which are going on on the planet, is really there for serving people, serving those who are in the workforces, serving enterprises, serving not only big enterprises, but as we are jointly thinking, small and medium-sized enterprises. Because only if we do that, if we overcome that power gap, which is still existing, the full… possibility of new technology can be spread and can be used by everybody.

And I think that’s the aim and the goal of my ministry and that’s the aim of the goal of the German government to make the new technology being applicable, useful for everybody. And I think we are very aligned with that with the Indian government, so I’m very happy that colleague, General Secretary, is here on the panel with us because at the end of the day we are discussing about open source, open data, we are discussing about computing possibilities about how we can make that all more climate friendly, reducing the costs of energy, reducing environmental impact, the use of water for example which is necessary for all those computing things there. So there are a lot of things we have to regulate, I would say, in the overall governmental framework.

to make it then being applicable in a very positive way for the people, for their companies. One concrete example of what we are doing is just coming back from Mumbai. We also met Mr. Newton and where we were opening AI Living Lab at University, Rattentata University at Mumbai. What is it all about? At the end of the day, it’s about making the new technology being part of a curricula of a university, offering students the chance to get close to that, but not doing that in something artificially made up, doing it with concrete working examples from small and medium enterprises who get the advantage then to have access to AI, which is also not always there. So bring those two groups.

Those groups who normally don’t have so much access as a creator of AI, but sometimes, yeah, you may use it. You have to have GPT on your mobile, but not really as a creator and as somebody who is inventing the solutions which are needed in business, which are needed also for social interaction. Bring that together. That’s something we are doing as government, and I think that’s something we can talk a little bit more about later on, but that’s something we want to foster in a global cooperation, and in an overall momentum, we really strive to close the power gap. I was talking in another panel about the chances on getting access to computing data centers.

That’s totally different in the global north than in the global south. The access to venture capital, there are so many things surrounding the setting, but also then at the end of the day, how are the regulations on decent work, for example? So people are really suffering from that or are really participating. So those things are the overall topics we have to solve in government. Thank you.

Dr. Kusumita Arora

I will move to Shri Jaiswal, Joint Secretary. I think many universities, departments are already starting AI courses or some centers or departments. So how do we plan to have higher education and vocational training systems orient or reorient to work closely, not only on the courses, but along with industry and innovation ecosystems, so that workers, the graduates who are coming out of these systems, are prepared for AI-enabled workplaces? Because having the talent pool is one of the priorities. I think that is one of the priority areas nowadays. Thank

Mr. Govind Jaiswal

you. I’ll start with some context for the first question; then I will link to how we are preparing. Most of the time, the question has been about this fear of the introduction of new technology. I’ll give one example, because there are many persons who might be interested in AI as laymen and they may not be aware. Any technology, whenever it comes, creates disruption in the ecosystem. I’ll give one example. When electricity was introduced, discovered a long time back, just imagine one person who was manually doing the work of a fan for some elite or some rich person. When electricity was introduced, the same kind of question might have arisen: whether he will lose his job or not.

But what happened: when electricity was introduced, as a consequence of that, the fan, the fridge, the vehicle, everything, the electronic batteries, everything came into existence. But what technology does, if it is used effectively, is ensure that the person who never thought he would have access to a fan gets a fan; what he was doing manually, after one or, I think, one or two centuries, he might be using the same thing. The quality of life, especially of marginal people, increases with any technology. What is the role of government and industry? To ensure that when the transition takes place, people are effectively and efficiently trained for new skills and a new job role. I am 100 percent sure a person who was doing that kind of job a few centuries ago, after a few decades, he might be doing a better job and a new kind of thing.

That’s the challenge, actually. And when you said about the university and the ecosystem and the introduction of vocational training and the introduction of AI courses, we are keeping that in mind, because the transition which took place over one century a few centuries ago will now take a few decades. So that transition has to be very seamless, so no one is adversely affected. And any technology is emotion-agnostic: it will not go with the emotion, it goes with the hard-core reality. So the Government of India is already taking many steps to ensure that everyone is being trained in the new skills, including AI. And in the last six, seven years, after the new education policy 2020, we have enabled the whole university ecosystem, especially humanities also, to include 50% courses especially for the skill courses.

And it’s in a very, very organized and very, very structured way that we are moving in the last five to eight years, especially the last one decade. We introduced the national education policy with a focus on skill courses. We started six new research parks in the premium institutions, especially in IITs. Before 2014, it was three. So now it is nine. We are still going for another nine. And in all the courses of civil engineering, mechanical engineering, we have introduced a certain component where the students who are getting through, they are also equally trained with AI. If you see about industry-academia collaboration, the recent budget also, where it was announced, five educational cities, the core word was it should be near to the industrial corridor.

It is so curated, and if you go for the last ten years, it is in such an organized way that every student of this country is getting equipped with artificial intelligence. Not only this, whether it is semiconductors or it is quantum theory, everything, we are trying to train our human resources equally. We are also in negotiation, and we are already having some cooperation with our German government, for the introduction of AI courses, and it has been launched also in some of our portals; slowly, slowly we will embed everything. What I suggest with industry-academia collaboration: most of the time it was confined to the curriculum thing. We are going ahead, and we are requesting them not only to be involved for the curriculum; they should be involved for the entrance, they should be involved for the assessment, they should be involved for the practical training as much as possible. Last month I was in Germany, actually Stuttgart and Munich. I have seen the dual education system, a very influential, very effective way, and we are also going aggressively to ensure that every student gets industry exposure. Internships were made mandatory, an apprenticeship-embedded degree programme was launched; the entire education landscape is changing drastically. We have a series of activities that we are doing.

And I’m very much sure that students, especially in higher education, we have around 40 million students enrolled and they will be equipped and they are equipped in the coming year. We will lead into AI sector also. Thank you.

Dr. Kusumita Arora

Thank you. That’s very encouraging. Very hopeful. Mr. Azariah, from the point of view of the Indian industry, what would be your comments as to what kind of partnership or collaboration models already exist? And how do you see the students coming into the young workforce and turning AI innovation into real productivity improvements, and improvements for companies, for the bottom line as well as for sustainability? Thank

Dr. Augustus Azariah

very much. Truly delighted to be here this morning. Quick commercial about me and my company. All right. I work for Kyndryl, which is an IBM spinoff, and we’re in the space of infrastructure management, which means that most of the transactions that you’re doing, banking to airline to various other things, are powered by our technology. That’s my day job. I also serve as the HR leader for South, for ASSOCHAM, and that’s the industry connect. Now, coming to your question, as I was riding in and my cab driver, I chatted up with him and asked him, what is it about this AI conference? AI, sir, AI means All Indian. Wonderfully put. Not to take the credit away from other countries, but you see.

At that level, the penetration, and the hope that AI sovereignty could happen right here where we are sitting. So that is from that level. Now, the other one I wanted to tell you is the industry-academia collaboration. And as a HR leader, the first thing I see is that there is some chaos and confusion among the laterals as well as the freshers. When you go to campus to hire, you don’t find real AI skills except that you see the CV developed by ChatGPT. And when I read through that verbosity, I know very well this is system-generated. This is AI-generated. And I tell them, look, we need some levels of originality. You get your ChatGPT or whatever, Gen AI.

The AI system wants to generate your stuff. But I want your involvement. So which means that the human element that we want to put here is to oversee what is actually being generated by generative AI. The other requirement, for the freshers (the college freshers we’re talking about), is not just an awareness of what’s Gen AI. They know that. But for them to know certain productivity tools like Copilot, OK, to use Copilot to develop small applications or, you know, have AI agents running. That’s the level. And I’m not saying that you can’t do that. And I would say that’s pretty basic. Is it available today for the industry? The answer is not as much as it should be.

Why? Because the faculty are not trained to impart that level of AI awareness. And therefore, we saw this gap. And we said, hey, look, let us address this gap and go into colleges. The industry goes into colleges with partnerships, with large companies, call it NVIDIA, call it Google, call it IBM, call it Kyndryl. And we go beyond the guest lecture. We start with making it competitive to them. All right. And telling the faculty and certifying them. Recently in a hackathon with about 18,000 people in the southern city of Mangaluru, there were more than 18,000 students. And during that time, we got more than a thousand faculty certified in Copilot. And I would say that our target is to go into the hundreds and thousands and millions for faculty to be trained and also to provide faculty

an endowment fund so that they can innovate and they can come up with models and patents that they can file. And I suppose that is where we have a big gap. And if we are able to do it, we are going to the hinterlands, tier two, tier three cities in India. And that’s where the talent lies. And AI, while you can call it all India, it’s also about unlocking talent. The talent that is available in tier two and tier three cities is so humongous I will just take 30 seconds and tell you. When we did a hiring, we did what is called a blind selection. And in that blind selection of 10 people who were shortlisted or finalized for a job that was paying close to 30 lakhs per annum, which is for freshers.

Four of them were IITs, three of them were tier two, tier one, the rest were all tier two and three. What does this tell me? This tells me that the talent doesn’t just stay in our top-tier institutes; it’s also so common, and it’s socialized right across the spectrum. And that, my dear friends, is the challenge that we have, the opportunity that we have. And I think today it’s about making sure that they unlearn the past and learn about how to cope with AI for the

Dr. Kusumita Arora

That is really wonderful to know. I mean, this is an example or demonstration of how industry is really engaging with academia, and engaging on a long-term basis, at least a medium-term basis. And I’m sure this is going to yield results at the pace that industry and academia are looking at. Mr. Jan, can you please tell us where you see the strongest potential for cooperation in AI, and how this translates directly into productivity, into gains, economic growth?

Mr. Jan Noether

Glad to do so. Now, of course, we need to bring people together. We yesterday had a tour around our German pavilion, and it’s amazing what’s going on in Germany as well when it comes to AI. So All India is great, but AI does not know any borders, and we need to bring people together. Now, when we talk about application, looking at India, the first thing which comes to my mind is healthcare: if you look into, let’s say, analysis of these millions and millions of records we have of healthcare data, if we look into disease management, if we look into remote access to patients via AI systems, that is going to be the future, not only in India; that’s going to be the future across the world. Agriculture: digital imaging, satellite imaging. And water is unfortunately not only used to cool systems; water is the scarce raw material, if you want, on our planet in the years to come. How do we use that in a meaningful way, how do we protect this resource, and how do we look at the agricultural development in certain areas?

So all of that could be done. Energy sustainability, very, very important. And AI will play a very crucial role in that segment, as it does in skills development, remote learning. We do have, and Secretary, I’m very happy to share with you, you were in Stuttgart, which is fantastic. We just signed an agreement with the dual university of Baden-Württemberg on a master’s program where like two-thirds is going to be handled in India, one-third is going to be handled in Germany. But these are the concepts of the future. So if you ask me where to apply, it’s across the board. It’s

Dr. Kusumita Arora

Okay. Thank you. Mr. Rapp, you come as a representative of DAAD, which has been supporting academic research for decades. What would be your comments as to how your programs on research would integrate AI skills, new directions in AI, right from maybe schools up into universities and, in fact, lifelong learning, to equip learners with the skills, the critical thinking, to use AI for their personal uses as well as to drive the economy? What would be the role in this direction?

Arthur Rapp

So the second study – oh, sorry, one important point, one very important conclusion was that there is a big risk of dependence on non-European AI platforms, and this is a threat to freedom of research and teaching. Now, this is, of course, very much centered on Europe, but this is also something that, of course, applies to India as well. When we use certain systems and the owners of these systems, they are not in our countries. They are somewhere else. There are people training this AI, you know, so AI is not neutral. It’s also biased. This is another interesting aspect, you know. Today, maybe this application is free of charge, so I’m using it. I’m putting my data inside, so at the same time I’m training that.

Tomorrow, this application might not be available anymore, and then me as a country or me as a company, I will get into trouble, right? Because suddenly I’m… Maybe I need to pay for something, and I can be excluded. But the whole aspect of data protection is also mentioned in that study because there was a lot of questions. For example, when people today write a research proposal and they use AI just to check the spelling, you know, and to make the sentences a bit more polished so it sounds nicer, right? They don’t understand, I think, the impact it has because where does this data go to? I might have a new great idea, right? This might be a revolution.

So it could be that someone has access to this, will extract this information at another end of the world and might use this, might even file a patent, right? So we don’t know that, right? So there’s also this dimension. So another study, another publication that was published, it’s called “University Student and Generative AI”, if I’m quoting it correctly. That’s about two years old already, but I would say it’s still important, and what I very much liked about it is that there are basically three messages. So, I don’t know if you would be surprised, but they found out a lot of young people today consult AI on their career choice, on their choice of university and the subject I’m going to study. So I don’t ask anymore maybe my teacher or my aunt; I consult AI, and then I take the decision what career I’m going to choose. And then, again not a big surprise, four out of five people use AI. So this is two years ago; I’m sure this number is a lot higher today. And another interesting fact: engineering used to be number one among the people asked, and today it’s computer science and information systems. So this is where the tension is going now, of course, because people see there is an opportunity, right?

This is an interesting career path. So you can see there is a lot of different aspects. And we as an institution, we are, of course, also quite active. We do offer scholarships, so we support any field. And just about a week ago, we conducted interviews. So we conducted interviews. There was about 100 people participating in these interviews for PhD scholarship and research scholarships in Germany. So the conclusion the professors came to was almost all the applicants used AI. And you can see this. This was mentioned before, right, by the way it’s written. You can see it, okay? And then a lot of people did actually have AI in their research proposals. It was part of the title.

So we see that. We see that that changed, and this is positive, right? Because we will progress and there will be new opportunities. And just maybe also to draw a little conclusion out of these two publications that I mentioned: my personal conclusion is this is a disruptive technology. It’s just like when robots were invented and when computers changed our world. You might recall, those who are a bit older will recall, that there was also a lot of fear that there is going to be mass unemployment. We need, as the minister has said, we need to listen to the people. We need to educate people, to tell them what AI actually is. AI is not intelligence, at least not at the moment; these are statistical tools that are predicting an outcome, you know. But we need to listen to these fears, and there is a lot of opportunity, opportunity for the entire world, which has just been mentioned before, right?

When we look at, for example, how we do agriculture, how we do farming and so on. Thank you.

Dr. Kusumita Arora

Thank you. I think we have very interesting aspects of AI, what it already means for individuals, for people, and what it’s likely to mean in the future. As it has come out, AI is without borders. So a few questions now on what international cooperations are needed and what they will do for AI, for humanity as a whole. I think with AI in India and AI elsewhere, the actual circumstances are a little bit different, but the fundamentals of AI will be very, very potent and very important for all countries and all environments equally. So, Dr. Kofler, I’ll come to you first to ask how international cooperation programs should get involved for better integration of AI for skills and innovation initiatives and to ensure an inclusive workforce globally.

Well, maybe

Dr. Bärbel Kofler

I’ll start with an overall topic. I’m just coming from another panel, which was about responsible AI. I think we have to get involved in responsibility because, yes, it was said, it’s crystal clear, that’s not neutral. We have biases in data. We have languages where millions of mother-language speakers are excluded because they cannot use it in their language. People who have challenges to read and write are still sometimes excluded. So there are still things we have to overcome. We have to overcome them to be really inclusive. And as I was pointing out before, it’s also in the business sector that way, that there is not the same chance for a small, medium-sized enterprise to be included in using AI or making it available for their purposes than it is for a big company.

So what international cooperation should do is to overcome the gaps. We formulated always that there is still a power gap, a gap in being a creator of AI, a gap in using AI in certain parts of the world more than in others. Dependency was talked about before. So we have to overcome those gaps. That’s the first thing we have to do in international cooperation, and we have to do it in a meaningful way. So it’s really to the perspective. To the purpose of those who are using it, to the purpose of countries, to the purpose of individuals, to companies, and so on. I still think we should have a close look how those new technologies can support agreed, internationally agreed ideas like the sustainable development goals because there’s a lot of potential in that technology that could help us to reach those goals.

So we should do that in a general way, but we also have to be concrete because it doesn’t really help us if we have conference after conference and there’s no concrete outcome on the ground. So we have to make ourselves controllable, I would say, as a government. We have to have commitments also in an international cooperation. We have to stick to those commitments and we have to report them and discuss them with public how to develop them further. So that is quite important. And that’s, by the way, something we try to do with our Hamburg sustainability, our declaration on responsibility. I was at the Hamburg Sustainability Conference. with concrete commitments by all stakeholders, governments, industry, academia, NGOs, everybody who wants to join that to come out with very concrete outcomes.

There are outcomes in skilling people to be not only users but co-creators. There can be outcomes like we were debating a little bit before, for how to bridge academia, industry needs, and the needs of the young generation of students. There are concrete topics we are working on that. I was mentioning the Living Lab in Mumbai we were launching. That’s not my ministry only or government only. It’s a cooperation of government. It’s academia with University of Mumbai and University of Leipzig who is sharing their insights. And there are concrete stakeholders from industry, and especially underlying small and medium enterprises who need access, and need workforces who don’t have to be trained for years when they left university.

They need to come up with solutions immediately when they enter a company. So we have to bridge all those things. And I think a governmental approach has to be also, on the one hand, to set frameworks, to create a reliable setting so people can trust and know what they are doing. Privacy was one of the topics. But, on the other hand, we also have to bridge the gaps which are existing in the conversation in between the stakeholders. So, yes, I’m always saying I love that with AI and all India, but at the end of the day, it’s the whole world. We all have to bridge if we really want to be useful or make use of that technology.

I think that’s, for me, the most important thing for government.

Dr. Kusumita Arora

Thank you. And Mr. Jaiswal, would you have some quick comments as to whether there need to be other avenues of cooperation? Dr. Kofler has already said how cooperation has started between India and Germany for education. Would you like to add something to that?

Mr. Govind Jaiswal

Yeah, I’ll just add one point: AI is primarily based on patterns. So when both the countries are collaborating, normally they have a different pattern set depending upon the societal structure, industry, maturity, small and medium enterprises’ challenges. So when we collaborate, we try to complement each other, because of how long it takes to train the entire system and train the entire ecosystem. Especially, you said about the commitment of stakeholders; that’s the core thing if you want to achieve it, and we are working on that aspect. Both the countries’ industry and academia are doing excellent work in some fields, and we will definitely collaborate and complement each other. That’s it.

Dr. Kusumita Arora

Thank you. Thank you. We are a little bit running out of time, but just one last question to Mr. Jan. How do you think German and Indian SMEs can better integrate into this effort, which is starting in full force, in fact?

Mr. Jan Noether

Yeah, thank you. That is, I believe, a central, a very central question, since if you look into Germany, 98.5% of the German business setup is SME. And if you look into India, it’s similar. And what is important to an SME, it is basically: I have to develop myself into a scenario where I am efficient, I am saving costs, I am innovative. Otherwise, there’s a very fierce competition, which makes me very, very vulnerable. So if we now bring this long-term experience of these German mid-sized and small companies, and we talk about decades of experiences, if we bring that together… with the talent, the spirit, the creativity, and this innovation spirit of the Indian talents.

And very important, if we in Germany get used to the speed we have in India, then these are going to be unbeatable project teams. So we need to bring people together, and we need to bring people together across countries. We need to form sandboxes where young talents of both countries, with no borders between European countries and India, can really experiment and come up with solutions which are not geared towards one SME, but for an industry within the SME sector. That is basically how we need to go forward. German companies are cautious when it comes to spending, and they are not risk takers, so there needs to be a benefit, and they need to see the benefit, whether it’s a financial benefit or an operational benefit. They need to see that benefit in order to act. Therefore, I look forward to working a little bit on the field of integration, together, of course, with other entities we have in India and in Germany.

Dr. Kusumita Arora

Thank you, thank you everybody, and I think we have all come together. Can

Dr. Augustus Azariah

I just make one last observation, sorry, here, one last observation. Sorry, this thought came to me in terms of how do we collaborate and cooperate, and how well the EU has done GDPR, okay, making sure that people have that security. Similarly, there’s a lot to learn from Germany in terms of how they improved their vocational training from elementary levels right up to master’s and PhD levels. And I think today, like the Honorable Secretary also said, at the school level, we need that level of collaboration to ensure that AI is seeded at the elementary level, if not at the primary level. And my request, of course, is to this eminent panel to enable our educational institutions and provide them the expertise so that they can mature in taking this to the elementary level.

Thank you. Yeah.

Dr. Kusumita Arora

Of course, that would make a world of difference. And I’m sure all the partners here are ready to see a conversion of intent to commitment in the very near future. And I wish everybody the best and look forward to the outcomes. Thank you. Thank you.

Moderator

Mr. Govind and Mrs. Kofler will stay here, and thank you very much to the other panelists for today, and good luck, and I wish you a good summit. So, because Mr. Azariah asked for a follow-up, and we will do a follow-up, we now turn to an important initiative that exemplifies the next phase of German-Indian cooperation in the field of artificial intelligence: the AI Academia-Industry Innovation Partnerships in Asia, commissioned by BMZ and implemented through GIZ. And this exactly addresses the gap between the widening demand for AI skills and the need for job-ready talent. We learned about the living labs. This will all be included, combining students, researchers, industry experts to co-create and test AI solutions in a real-world setting.

So this is all about, and it’s just my honor, to invite Bärbel Kofler to deliver her remarks on this initiative, followed by Mr. J. J. Stahl’s remarks. That’s for me. You can stand also. No. Okay.

Dr. Bärbel Kofler

It’s a little bit dynamic at the end of the day. And I’m very brief, because a lot was said about the necessity of cooperation, especially in the training sector and the training field. What we all know is, we were talking about workplaces of the future, and we know there is a chance to create also new jobs through new technologies, as was mentioned. And also, there’s a World Bank study, or is it World Trade Organization, I think, a study about job creation already being 1.3 million, but we don’t really have enough skilled workforces for that sphere. So there are almost 1 million job opportunities not really filled with adequate people, which at the end of the day leads to personal loss, economic loss, and we want to bridge those things.

That’s why we are creating this academic approach together. We want to bridge that and offer those job opportunities, which are already there on the ground, to people around the globe, and that initiative should be a part of that. And that’s why we’re really happy, and I also have to read the title, that we created. We launched this project of Artificial Intelligence Academia Industry Innovation Partnership in Asia. through my ministry together with India, Indian partners and partners in Vietnam and I’m very happy that we can do that today. Thank you.

Mr. Govind Jaiswal

Actually, when we started the collaboration for this project and when we got to know about this innovation, this living lab actually, the name is very interesting: a lab where you incubate your idea and create a prototype, and “living” means it should have all the attributes of a life. So I hope it will be able to solve the problems of industry and academia. It’s about bringing the academic world closer to industry and industry closer to the academic world, and academic training coming straight to the requirement of industry. That is the major objective. And I convey my wishes for this project. I am very much sure it will achieve its objective, and we will have further collaboration in the future also.

Thank you. Thank you. Thank you.

Moderator

So now we invite you to watch a brief video presenting the initiative. Okay.

Video Narrator

AI and digital technologies are reshaping how businesses operate, faster than ever before. For companies, the challenge is no longer access to technology but access to people: people with the skills to adapt, innovate and work confidently with AI. What is taught today is often no longer what industry needs tomorrow, especially for German SMEs expanding into global and Asian markets. At the same time, Asia is emerging as a powerful driver of growth: dynamic economies, new ideas, and a rising generation of digital talent ready to engage with the world. This is where German Development Cooperation, implemented by GIZ, brings together German and Asian universities, businesses, and governments in a new AI Academia-Industry Innovation Partnership. The question is simple.

How do we develop AI-ready skills, support innovation, and grow across borders? The answer lies in learning and innovation spaces: living labs. Living labs are structured learning and innovation spaces where universities and companies collaborate on real, industry-driven challenges. Students work on real business problems. Companies test ideas, innovate and access emerging talent in a low-risk environment. Faculty strengthen curricula through direct engagement with industry, and institutions build long-term, meaningful partnerships. For students, this means hands-on experience, global collaboration, and improved employability. For businesses, it means access to future-ready talent, fresh perspectives, a vibrant AI ecosystem, and a testing ground for innovation. More than a program, this is a partnership at eye level, combining German expertise with Asian entrepreneurial energy and drive to innovate. This is the AI Innovation Partnership: uniting academia, industry, and governments across Germany and Asia to shape what’s next. Developing skills, enabling innovation, building an AI-driven future together.

Moderator

Thank you. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (22)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The moderator opened the session by stating that the strategic priority has shifted from merely developing artificial intelligence to ensuring its effective deployment, especially for SMEs in Germany and India.”

The knowledge base explicitly notes that the strategic priority is no longer solely AI development but effective deployment, particularly for SMEs in Germany and India [S3].

Confirmed (medium)

“Dr Kusumita Arora thanked GIZ for convening the forum, outlined the day’s focus on “people in the age of AI”, and emphasized the need for partnerships linking talent development, policy, infrastructure and an inclusive workforce.”

The session opening in the knowledge base thanks GIZ and mentions a focus on people in the age of AI, and the broader discussion stresses multi-stakeholder partnerships involving government, industry, academia and civil society [S4] and [S97].

Additional Context (medium)

“Dr Bärbel Kofler highlighted regulatory priorities – open‑source data, climate‑friendly computing, privacy and decent‑work standards, and referenced an AI Living Lab involving universities in Mumbai and Leipzig.”

The knowledge base does not list those specific regulatory items, but it does underline the importance of collaborative governance among government, industry, academia and civil society, providing broader context for Kofler’s multi-stakeholder approach and the focus on SMEs [S97] and [S3].

External Sources (109)
S1
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Arthur Rapp- Role: Representative of DAAD (German Academic Exchange Service); Area of expertise: Academic research and …
S2
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Mr. Jan Noether- Title: Director General of the Indo-German Chamber of Commerce; Area of expertise: Indo-German economi…
S3
GermanAsian AI Partnerships Driving Talent Innovation the Future — global digital transformation for partners such as Germany and India. The strategic priority is not longer solely the de…
S4
https://dig.watch/event/india-ai-impact-summit-2026/germanasian-ai-partnerships-driving-talent-innovation-the-future — Ms. Kofler, please come up. There’s no signs. You can choose in the middle. Next panelist, I would really warmly welcome…
S5
GermanAsian AI Partnerships Driving Talent Innovation the Future — Dr. Arora emphasizes the need for higher education and vocational training systems to reorient themselves to work closel…
S6
ISBN: — – H.E. Dr. Amani Abou-Zeid, African Union Commission – H.E. Ms. Aurélie Adam Soulé Zoumarou, Benin – Dr. Ann Aerts, …
S7
AI Meets Agriculture Building Food Security and Climate Resilien — Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And unde…
S8
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Dr. Augustus Azariah- Title: HR leader for Asochem; Role: Works for Kindrel (IBM spinoff); Area of expertise: Infrastru…
S9
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S10
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S11
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S12
GermanAsian AI Partnerships Driving Talent Innovation the Future — Speakers:Dr. Bärbel Kofler, Mr. Jan Noether, Video Narrator
S13
GermanAsian AI Partnerships Driving Talent Innovation the Future — – Dr. Bärbel Kofler- Mr. Jan Noether- Video Narrator
S15
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Dr. Bärbel Kofler- Title: Parliamentary State Secretary to the Federal Ministry of Economic Cooperation and Development…
S16
GermanAsian AI Partnerships Driving Talent Innovation the Future — global digital transformation for partners such as Germany and India. The strategic priority is not longer solely the de…
S17
https://app.faicon.ai/ai-impact-summit-2026/germanasian-ai-partnerships-driving-talent-innovation-the-future — global digital transformation for partners such as Germany and India. The strategic priority is not longer solely the de…
S18
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Mr. Govind Jaiswal- Title: Joint Secretary at the Ministry of Education of the Government of India; Area of expertise: …
S19
Open Forum: A Primer on AI — Another concern is the potential impact of AI on the job market. As AI capabilities advance, certain professions may bec…
S20
Strengthening Worker Autonomy in the Modern Workplace | IGF 2023 WS #494 — Overall, the analysis provides valuable perspectives on promoting decent work and economic growth and calls for measures…
S21
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Minister Weishnaff, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere app…
S22
Responsible AI for Shared Prosperity — For AI to fulfill its promise of helping achieve the Sustainable Development Goals and overcoming inequality, it must be…
S23
Responsible AI for Shared Prosperity — Shekar Sivasubramanian from Wadwani AI described their work in India across 14-16 languages, emphasizing that utility an…
S24
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — The lack of adequate data quality and collection mechanisms, coupled with inadequate data protection and privacy laws, r…
S25
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Jimena Viveros: Hello. I don’t know if anyone can hear me. Yes? Okay, great. So it is great to be here, sorry for the …
S26
Education meets AI — In addition to the above topics, the significance of critical information and critical thinking in education was also di…
S27
Fireside Chat The Future of AI & STEM Education in India — Throughout the discussion, speakers consistently emphasized the importance of ethical AI deployment and responsible use….
S28
Science under siege from AI, integrity of research at risk — AI is rapidlytransformingthe landscape of scientific research, but not always for the better. A growing concern is the p…
S29
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — Furthermore, the speaker expresses a negative sentiment towards outsourcing, emphasising the potential risks involved. B…
S30
Regulators raise concerns over tech giants’ market power in AI sector. — Andreas Mundt, the head of Germany’s antitrust authority, has expressed his concerns about the potential for AI to incre…
S31
WS #184 AI in Warfare – Role of AI in upholding International Law — Mohamed Sheikh-Ali: All I can add is, for now, until the technology is advanced enough, which, in our opinion, from th…
S32
Open Forum #33 Building an International AI Cooperation Ecosystem — Qi Xiaoxia: Thank you, Professor, distinguished guests, ladies and gentlemen, friends, good afternoon. I’m delighted to …
S33
Press Conference: Closing the AI Access Gap — Business partnership with civil society, governance and industries is important The alliance also focuses on enhancing …
S34
Comprehensive Report: 18th Meeting of the Disarmament and International Security Committee — AI technologies can be a powerful tool in achieving sustainable development. International cooperation, capacity building …
S35
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S36
Multistakeholder Partnerships for Thriving AI Ecosystems — Government investment extends to skills training, vocational education, and university research programmes that connect …
S37
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — We have deployed tools to help us to achieve the best results. largest supercomputing clusters in the region of Central …
S38
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — UNESCO is providing policy guidance on AI in education, focusing on frameworks that emphasize ethical use of AI in educa…
S39
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the s…
S40
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Concrete timelines and mechanisms for the proposed collaborations with industry, academia, and government
S41
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S42
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S43
WS #110 AI Innovation Responsible Development Ethical Imperatives — Daisy Selematsela: Thank you. I just want to highlight on issues faced by academic libraries when we look at the integra…
S44
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Higher productivity potential exists in agriculture, manufacturing, healthcare, and construction sectors
S45
How AI Drives Innovation and Economic Growth — I agree that there is huge potential in health and education. I think we’ll see big improvements in that, but the risk …
S46
A Digital Future for All (afternoon sessions) — AI has the potential to accelerate progress on the UN Sustainable Development Goals. It can be applied to benefit humani…
S47
Panel Discussion Data Sovereignty India AI Impact Summit — Disagreement level:Low to moderate disagreement level with high strategic alignment. The disagreements are primarily tac…
S48
Open Forum #26 High-level review of AI governance from Inter-governmental P — These key comments shaped the discussion by broadening its scope from purely technical considerations to encompass ethic…
S49
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S50
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion revealed strong alignment between industry needs, academic capabilities, and government policy. David Fre…
S51
GermanAsian AI Partnerships Driving Talent Innovation the Future — AI and digital technologies are reshaping how businesses operate faster than ever before. For companies the challenge is…
S52
GermanAsian AI Partnerships Driving Talent Innovation the Future — How do we develop AI -ready skills, support innovation, and grow across borders? The answer lies in learning and innovat…
S53
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — The role division between research labs, government, and industry in establishing industry-specific standards needs clar…
S54
Open Forum: A Primer on AI — Privacy protection is another important aspect discussed in the analysis. It is noted that AI training often involves th…
S55
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Clarity needed on principles that resonate with all stakeholders In conclusion, the conversation highlights the cautiou…
S56
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Despite disagreements on other issues, all panellists converged on the critical importance of massive investment in skil…
S57
Building Population-Scale Digital Public Infrastructure for AI — All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential for ach…
S58
Comprehensive Report: Preventing Jobless Growth in the Age of AI — -Comprehensive skills development is critical, requiring strong partnerships between businesses, educational institution…
S59
Welfare for All Ensuring Equitable AI in the Worlds Democracies — I mean, I think we need to focus on AI literacy because, you know, again, the technology is moving so fast. How do we ma…
S60
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S61
Inclusive AI Starts with People Not Just Algorithms — High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers come from diffe…
S62
Inclusive AI Starts with People Not Just Algorithms — Consensus level:High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers…
S63
High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech — High level of consensus on fundamental principles and approaches, with differences mainly in implementation details rath…
S64
How AI Drives Innovation and Economic Growth — Consensus level:High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, developmen…
S65
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — The level of disagreement among the speakers was minimal. This high level of agreement implies a strong consensus on the…
S66
Keynote Addresses at India AI Impact Summit 2026 — Consensus level:Very high level of consensus with no apparent disagreements or tensions. This suggests a mature, well-co…
S67
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Industry-academia collaboration through mentorship programs is essential for bridging the skills gap
S68
WSIS Action Line C2 Information and communication infrastructure — While AI is powerful, it requires comprehensive policy frameworks to ensure its implementation is both ethical and equit…
S69
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — The lack of adequate data quality and collection mechanisms, coupled with inadequate data protection and privacy laws, r…
S70
WS #82 A Global South perspective on AI governance — Gian Claudio: Thank you very much. I hope you hear me well. Yeah, good. So yeah, indeed, the AI Act was a bit everyw…
S71
Building Sovereign and Responsible AI Beyond Proof of Concepts — Responsible AIencompasses ethics, governance, bias prevention, and crucially, human-centred design. This requires clear …
S72
Open Forum #33 Building an International AI Cooperation Ecosystem — Qi Xiaoxia: Thank you, Professor, distinguished guests, ladies and gentlemen, friends, good afternoon. I’m delighted to …
S73
GermanAsian AI Partnerships Driving Talent Innovation the Future — So what international cooperation should do is to overcome the gaps. We formulated always that there is still a power ga…
S74
GermanAsian AI Partnerships Driving Talent Innovation the Future — Dr. Kofler argues that international cooperation should focus on bridging gaps in AI access and creation capabilities be…
S75
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S76
Comprehensive Report: 18th Meeting of the Disarmament and International Security Committee — AI technologies can be a powerful tool in achieving sustainable development. International cooperation, capacity building …
S77
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S78
Driving Indias AI Future Growth Innovation and Impact — Theinnovation pillarcenters on comprehensive skilling programs spanning from primary education through workforce develop…
S79
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — We have deployed tools to help us to achieve the best results. largest supercomputing clusters in the region of Central …
S80
Multistakeholder Partnerships for Thriving AI Ecosystems — Government investment extends to skills training, vocational education, and university research programmes that connect …
S81
Skilling and Education in AI — And then the data that I’m submitting into the system, simply by interacting with AI, I’m submitting data and providing …
S82
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — Concrete timelines and mechanisms for the proposed collaborations with industry, academia, and government
S83
The Purpose of Science / DAVOS 2025 — Collaboration between academic institutions and industry can lead to innovative solutions
S84
Open Forum: A Primer on AI — Another concern is the potential impact of AI on the job market. As AI capabilities advance, certain professions may bec…
S85
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S86
Thinking through Augmentation — L’Oréal emphasizes the importance of data privacy and ethical algorithms in their AI implementations. They are committed…
S87
WS #110 AI Innovation Responsible Development Ethical Imperatives — Daisy Selematsela: Thank you. I just want to highlight on issues faced by academic libraries when we look at the integra…
S88
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — High level of consensus on implementation approach and timeline, with moderate consensus on regulatory strategies. The a…
S89
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S90
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Higher productivity potential exists in agriculture, manufacturing, healthcare, and construction sectors
S91
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — Thank you, Sarah. Is this working? Yeah. Thank you all for sharing this wonderful moment for me because we’re here with …
S92
Sustainable development — AI-powered tools like remote sensing, drones, and predictive analytics can enhance precision agriculture practices. They…
S93
World Economic Forum 2025 at Davos — Discussions moved beyond philosophical debates to focus onAI as a commodity—how businesses can integrate it into workflo…
S94
Delegated decisions, amplified risks: Charting a secure future for agentic AI — Moderator: Role mentioned as moderator of the session
S95
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S96
Session — Susan Ariel Aaronson: Thank you so much. So I am one of the fortunate few, I’d say, whose grant has not been cut as of y…
S97
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S98
Closing remarks – Charting the path forward — Speakers consistently advocated for governance approaches that involve governments, industry, academia, and civil societ…
S99
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — A triple helix approach involving government, academia, and industry is supported as a means to address higher education…
S100
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — Yong Guo: Thank you, Xin, for your brief introduction. Distinguished guests, colleagues and friends, ladies and gentleme…
S101
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Despite differing control perspectives, speakers agreed on collaborative governance needs. Tobias Thiel advocated for “f…
S102
WS #162 Overregulation: Balance Policy and Innovation in Technology — Paola Galvez, a tech policy consultant, stated that we are past the question of whether to regulate or not, and now the …
S103
The Challenges of Data Governance in a Multilateral World — An advocate in the discussion strongly supports data governance models that prioritize cooperation, privacy, and the com…
S104
AI governance efforts centre on human rights — At theInternet Governance Forum 2025in Lillestrøm, Norway, a keysessionspotlighted the launch of the Freedom Online Coal…
S105
Centering People and Planet in the WSIS+20 and beyond — She identified this as a priority emerging from UNCTAD consultations, emphasizing the need for improved data governance,…
S106
Exploring the impacts of AI technology on our lives — I have been hearing a lot about Artificial Intelligence! ChatGPT has been going viral recently and people are using it t…
S107
Empowering Workers in the Age of AI — Sher Verick, advisor to the ILO’s Deputy Director General and representative of the AI Observatory, presented research f…
S108
AI: The Great Equaliser? — While the introduction of AI technology may result in job losses in certain sectors, it also creates new job opportuniti…
S109
How AI Is Transforming Indias Workforce for Global Competitivene — I think sometimes we have to take a step back and just realize how transformational, how exciting this technology is. I …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
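The per-speaker statistics that follow pair an argument count with a speaking rate, a word total, and a duration in seconds. The published rates are consistent with a simple derivation that floors words × 60 / seconds; the sketch below is a minimal illustration of that calculation, assuming this is how the figures were produced (the class and field names are illustrative, not from the source).

```python
from dataclasses import dataclass

@dataclass
class SpeakerStats:
    name: str
    arguments: int
    words: int
    seconds: int

    @property
    def words_per_minute(self) -> int:
        # Flooring words * 60 / seconds reproduces the figures shown below,
        # e.g. 1618 words over 678 s -> 143 wpm for Dr. Kofler.
        return self.words * 60 // self.seconds

# Values taken directly from the speaker cards in this section.
speakers = [
    SpeakerStats("Dr. Bärbel Kofler", 5, 1618, 678),
    SpeakerStats("Arthur Rapp", 3, 782, 309),
    SpeakerStats("Mr. Govind Jaiswal", 3, 1056, 402),
]

for s in speakers:
    print(f"{s.name}: {s.arguments} arguments, {s.words_per_minute} words per minute")
```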
Dr. Bärbel Kofler
5 arguments · 143 words per minute · 1618 words · 678 seconds
Argument 1
Addressing job‑loss fears and ensuring decent work (Dr. Bärbel Kofler)
EXPLANATION
She acknowledges that people are afraid AI may lead to job losses and stresses the importance of treating these concerns seriously. She argues that technology must be deployed to serve workers, especially in SMEs, and to promote decent work conditions.
EVIDENCE
Kofler notes that “people get afraid of that there might be a loss of jobs” and that these feelings must be taken “very carefully because there’s reason for that” [36-38]. She further explains that AI should be a reliable partner that serves the workforce, including small and medium-sized enterprises, to overcome power gaps and make technology useful for everyone [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concern about AI-driven job displacement and the need for decent work is echoed in external analyses that highlight legitimate fears of job loss [S3], cite examples such as Duolingo’s AI-driven layoffs [S19], and call for measures to promote decent work and economic growth [S20].
MAJOR DISCUSSION POINT
Job‑loss fears and decent work
DISAGREED WITH
Mr. Govind Jaiswal
Argument 2
AI Living Lab that embeds AI into university curricula and connects students with SMEs (Dr. Bärbel Kofler)
EXPLANATION
Kofler describes the creation of an AI Living Lab at the University of Mumbai that integrates AI into university courses and gives students practical experience with small and medium-sized enterprises. The initiative aims to provide access to AI for groups that normally lack it.
EVIDENCE
She reports that after returning from Mumbai, “we were opening AI Living Lab at University, Rattentata University at Mumbai” and that the Lab makes AI part of the university curriculum, linking students with small and medium-sized enterprises that otherwise would not have AI access [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Living Labs as a bridge between academia and SMEs are described as concrete cooperation models that integrate AI into curricula and give students practical experience with small and medium-sized enterprises [S1][S3].
MAJOR DISCUSSION POINT
AI Living Lab for curricula and SMEs
DISAGREED WITH
Dr. Augustus Azariah, Dr. Kusumita Arora
Argument 3
AI Academia‑Industry Innovation Partnership in Asia and Living Labs as concrete cooperation (Dr. Bärbel Kofler)
EXPLANATION
Kofler outlines a broader partnership called the AI Academia‑Industry Innovation Partnership in Asia, which uses Living Labs to co‑create AI solutions involving governments, academia and industry. The partnership seeks tangible outcomes rather than just discussions.
EVIDENCE
She refers to the Living Lab in Mumbai as a concrete example of cooperation among government, the University of Mumbai, the University of Leipzig and industry stakeholders [241-247]. Later she announces the launch of the AI Academia-Industry Innovation Partnership in Asia, describing it as a joint effort with India, Vietnam and Germany to bridge skill gaps [293-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership that brings together governments, academia and industry through Living Labs is presented as a concrete, outcome-oriented initiative in the external report [S3] and further detailed in the overview of the collaboration model [S1].
MAJOR DISCUSSION POINT
Living Labs and Asia partnership
Argument 4
Germany‑India cooperation, Hamburg sustainability declaration, and concrete commitments for inclusive AI (Dr. Bärbel Kofler)
EXPLANATION
Kofler highlights the Hamburg Sustainability Declaration, which gathers commitments from governments, industry, academia and NGOs to ensure inclusive AI. She stresses that such concrete commitments are essential for responsible AI deployment.
EVIDENCE
She mentions the Hamburg Sustainability Conference where “concrete commitments by all stakeholders, governments, industry, academia, NGOs” were made to produce outcomes such as skilling people to be co-creators of AI [236-238].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Hamburg Sustainability Declaration and the associated measurable commitments from multiple stakeholders are highlighted as a key example of concrete action for inclusive AI [S3].
MAJOR DISCUSSION POINT
Hamburg sustainability and inclusive AI
Argument 5
Need for responsible AI that overcomes language bias and power gaps, ensuring inclusive access (Dr. Bärbel Kofler)
EXPLANATION
Kofler argues that AI is not neutral; it contains biases in data and language that exclude many users. International cooperation must address these power gaps to make AI accessible to all, especially small enterprises and linguistic minorities.
EVIDENCE
She states that “there are biases in data” and that “millions of mother language speakers are excluded” because AI does not support their languages, and that there is a “power gap” that must be closed for inclusive AI use [214-222]. She adds that overcoming these gaps is the first task of international cooperation [225-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI must address data bias, language exclusion and power imbalances, as discussed in reports on inclusive AI and language diversity [S22][S23] and reinforced by the broader analysis of bias in AI systems [S3].
MAJOR DISCUSSION POINT
Responsible, inclusive AI
DISAGREED WITH
Arthur Rapp
Arthur Rapp
3 arguments · 151 words per minute · 782 words · 309 seconds
Argument 1
Risks of dependence on non‑European platforms, bias, and data‑protection concerns (Arthur Rapp)
EXPLANATION
Rapp warns that reliance on AI platforms owned outside Europe creates dependency, introduces bias, and raises data‑protection issues that can threaten research freedom. He stresses the need for Europe‑based solutions to safeguard sovereignty.
EVIDENCE
He notes a “big risk of dependence on non-European AI platforms” which threatens “freedom of research and teaching” and highlights that such platforms are biased and can lead to data leakage, as users unknowingly train AI with their data [170-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of reliance on non-European AI platforms, associated bias and data-leakage concerns are documented in the external assessment of platform dependence [S3] and in the discussion of data-quality and privacy challenges [S24].
MAJOR DISCUSSION POINT
Dependence on foreign AI platforms
DISAGREED WITH
Dr. Bärbel Kofler
Argument 2
Students using AI for career choices, highlighting the need for critical thinking and awareness (Arthur Rapp)
EXPLANATION
Rapp cites studies showing that a large proportion of students consult AI for career and university decisions, which raises concerns about data privacy and the need for critical thinking. He points out that AI influences student choices and may affect intellectual property.
EVIDENCE
He references a study where “four out of five people use AI” for career decisions, noting that students often rely on AI rather than teachers or family, and that engineering has shifted to computer science as the top choice [181-186]. He also mentions a follow-up study where “almost all the applicants used AI” in their research proposals and scholarship applications [188-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies emphasizing the need for critical thinking when students rely on AI for career decisions and the broader issue of indiscriminate consumption of AI-generated content are presented in the education-focused sources [S26][S27].
MAJOR DISCUSSION POINT
AI influence on student career choices
Argument 3
Dependence on foreign AI platforms creates bias, data‑leakage risks, and threatens research freedom (Arthur Rapp)
EXPLANATION
Rapp reiterates that using AI services owned abroad can embed bias, expose data to foreign entities, and jeopardize the independence of research. He calls for awareness of these risks and for developing local AI capacities.
EVIDENCE
He repeats that dependence on non-European platforms leads to bias, data-protection concerns, and the possibility that “someone has access to this, will extract this information” and could file patents elsewhere, threatening research sovereignty [170-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same concerns about foreign platform dependence, bias and threats to research freedom are reiterated in the external analysis of platform risks [S3] and the data-protection overview [S24].
MAJOR DISCUSSION POINT
Data sovereignty and bias
Dr. Augustus Azariah
4 arguments · 127 words per minute · 886 words · 418 seconds
Argument 1
Industry’s worry about superficial AI skills and the need for human oversight (Dr. Augustus Azariah)
EXPLANATION
Azariah observes that many fresh graduates present AI‑generated CVs lacking genuine expertise, creating confusion for recruiters. He stresses that human oversight is essential to validate generative AI outputs.
EVIDENCE
He describes “chaos and confusion” where “CV developed by ChatGPT” appears, and emphasizes that “the human element” must oversee what generative AI produces [115-124]. He also notes that faculty are not trained to teach productivity tools like Copilot, creating a skill gap [125-130].
MAJOR DISCUSSION POINT
Superficial AI skills and oversight
Argument 2
Faculty certification programmes, endowment funds, and tapping talent in tier‑2/3 cities (Dr. Augustus Azariah)
EXPLANATION
Azariah details industry‑academia collaborations that certify faculty in AI tools, run large hackathons, and provide endowment funds to stimulate innovation, especially in tier‑2 and tier‑3 cities where talent is abundant.
EVIDENCE
He reports a hackathon in Mangaluru with over 18,000 participants, resulting in more than a thousand faculty certified in Copilot and the creation of an endowment fund to support faculty innovation and patents [133-140].
MAJOR DISCUSSION POINT
Faculty upskilling and talent outreach
Argument 3
Industry‑driven college engagements, hackathons, and faculty training to bridge skill gaps (Dr. Augustus Azariah)
EXPLANATION
Azariah emphasizes that industry should move beyond guest lectures, actively engage with colleges, certify faculty, and organize hackathons to close the AI skill gap. This hands‑on approach aims to produce job‑ready graduates.
EVIDENCE
He explains that “the industry goes into colleges with partnerships” and that they “start with making it competitive” by certifying faculty and running hackathons, as demonstrated by the 18,000-person event and faculty certification mentioned earlier [133-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for deeper industry-academia collaboration beyond guest lectures, including hackathons and faculty certification, is highlighted in the external report on industry-driven engagement [S3].
MAJOR DISCUSSION POINT
Industry‑college collaboration
DISAGREED WITH
Dr. Bärbel Kofler
Argument 4
Learning from German GDPR and vocational‑training models to seed AI education from elementary level (Dr. Augustus Azariah)
EXPLANATION
Azariah suggests that the EU’s GDPR experience and Germany’s robust vocational‑training system can inform how AI education is introduced early, even at primary school, to build a strong foundation.
EVIDENCE
He notes that “the EU has done GDPR” and that “Germany improved their vocational-training models” and calls for seeding AI at the elementary level, urging institutions to receive expertise for this purpose [279-282].
MAJOR DISCUSSION POINT
Early AI education using German models
DISAGREED WITH
Dr. Kusumita Arora, Dr. Bärbel Kofler
Mr. Govind Jaiswal
3 arguments · 157 words per minute · 1056 words · 402 seconds
Argument 1
National Education Policy, dual‑system apprenticeship, and Germany‑India collaboration to equip 40 million students (Mr. Govind Jaiswal)
EXPLANATION
Jaiswal outlines India’s National Education Policy 2020, which mandates 50 % skill‑oriented courses, the expansion of research parks, and a dual‑system apprenticeship model. He highlights collaboration with Germany to train roughly 40 million students in AI and related technologies.
EVIDENCE
He cites the NEP 2020 that “enabled all the university ecosystem… to include 50 % courses especially for the skill courses” and mentions the creation of six new research parks (now nine) and a dual-education system that makes industry exposure mandatory, aiming to equip around 40 million students [84-95] and [96-99].
MAJOR DISCUSSION POINT
Education policy and large‑scale AI skill rollout
DISAGREED WITH
Dr. Bärbel Kofler
Argument 2
Structured industry‑academia collaboration with stakeholder commitments and joint governance (Mr. Govind Jaiswal)
EXPLANATION
He stresses that effective AI adoption requires structured collaboration among industry, academia and government, with clear stakeholder commitments and joint governance mechanisms to ensure seamless ecosystem transition.
EVIDENCE
Jaiswal states that “when both the countries are collaborating… we try to complement each other” and that “stakeholder commitment” is the core thing for training the entire system, emphasizing joint work between country, industry and academia [260-263].
MAJOR DISCUSSION POINT
Stakeholder‑driven collaboration framework
Argument 3
Complementary patterns and joint efforts between the two countries to train the whole ecosystem (Mr. Govind Jaiswal)
EXPLANATION
He points out that India and Germany have different technological and societal patterns, and by complementing each other’s strengths they can jointly train the full AI ecosystem, ensuring no adverse effects during transition.
EVIDENCE
He explains that “both the countries have different pattern set” and that collaboration aims to “complement each other” to train the entire ecosystem, highlighting the importance of stakeholder commitment [257-263].
MAJOR DISCUSSION POINT
Complementary bilateral training approaches
Mr. Jan Noether
3 arguments · 123 words per minute · 595 words · 289 seconds
Argument 1
Dual university programme and remote‑learning initiatives for AI‑enabled education (Mr. Jan Noether)
EXPLANATION
Noether announces a partnership with a dual university in Baden‑Württemberg, creating a master’s programme where two‑thirds of teaching occurs in India and one‑third in Germany, facilitating remote learning and cross‑border education.
EVIDENCE
He says “we just signed an agreement with a dual university of Baden-Württemberg on a master’s program where two-thirds is handled in India, one-third in Germany” [161-164].
MAJOR DISCUSSION POINT
Dual‑degree AI master programme
Argument 2
Cross‑border sandboxes for SME innovation and joint degree programmes (Mr. Jan Noether)
EXPLANATION
Noether proposes establishing sandboxes where young talent from both countries can experiment together, creating solutions for SMEs and supporting joint degree programmes that blend German and Indian expertise.
EVIDENCE
He calls for “sandboxes where young talents of both countries… can really experiment and come up with solutions” and links this to the joint master programme mentioned earlier [274-276] and [162-164].
MAJOR DISCUSSION POINT
Sandboxes and joint degrees for SMEs
Argument 3
Cross‑border AI applications (healthcare, agriculture, energy) and dual‑degree programmes to boost SME competitiveness (Mr. Jan Noether)
EXPLANATION
Noether highlights AI’s potential in sectors such as healthcare, agriculture, and energy, and argues that dual‑degree programmes and cross‑border collaborations will enhance SME competitiveness in these areas.
EVIDENCE
He lists applications in “healthcare, disease management, remote access to patients” as well as “agriculture, digital imaging, satellite imaging, water management” and notes the importance of AI in energy sustainability, linking these to the dual university programme [155-161] and the joint degree details [162-164].
MAJOR DISCUSSION POINT
AI sector applications and SME boost
Dr. Kusumita Arora
5 arguments · 101 words per minute · 818 words · 485 seconds
Argument 1
AI policy must keep people at the centre, ensuring personal growth, dreams and individual circumstances are considered in AI deployment.
EXPLANATION
Dr. Arora stresses that any discussion about AI should always include the human dimension, guaranteeing that technology serves people’s aspirations and specific situations rather than being a purely technical exercise.
EVIDENCE
She frames the session as a conversation about “people in the age of AI and what partnerships are going to look like for talent” and explicitly states that the discussion “includes and always have to include the people, personal growth, their dreams and their particular circumstances” through which they will connect to AI and each other [26-28].
MAJOR DISCUSSION POINT
Human‑centred AI policy
Argument 2
Higher education and vocational training must be re‑oriented to AI‑enabled workplaces, linking academia, industry and innovation ecosystems.
EXPLANATION
Dr. Arora calls for universities and vocational systems to adapt curricula and training models so that graduates are ready for AI‑driven jobs, emphasizing close cooperation with industry and innovation ecosystems.
EVIDENCE
She asks the panel how to “plan to have higher education and vocational training systems orient to or reorient to work closely, not only on the courses, but along with industry and innovation ecosystems so that workers, the graduates who are coming out of these systems are prepared for AI-enabled workplaces” [60-63].
MAJOR DISCUSSION POINT
Aligning education with AI workforce needs
Argument 3
International cooperation is essential because AI has no borders; partnerships must be inclusive and human‑centred.
EXPLANATION
She highlights that AI transcends national boundaries and therefore requires collaborative frameworks that ensure inclusive access and benefits for all societies.
EVIDENCE
In her remarks she notes “AI is without borders” and then asks what “international cooperations are needed and what they will do for AI, for humanity as a whole” to ensure an inclusive workforce globally [208-212].
MAJOR DISCUSSION POINT
Cross‑border AI cooperation
Argument 4
There is a need to convert the current intent into concrete commitments and actions by all partners.
EXPLANATION
Dr. Arora urges the panel and stakeholders to move beyond discussion and translate their stated intentions into measurable, binding actions.
EVIDENCE
She says, “I’m sure all the partners here are ready to see a conversion of intent to commitment in the very near future” and wishes everyone the best for the outcomes [283-285].
MAJOR DISCUSSION POINT
From intent to commitment
Argument 5
Research and academic programmes, such as those supported by DAAD, should integrate AI skills from school level through lifelong learning.
EXPLANATION
Dr. Arora calls on research funding bodies to embed AI competencies across the entire education continuum, ensuring that learners acquire critical thinking and AI literacy from early stages onward.
EVIDENCE
She addresses Mr. Ross, a DAAD representative, asking how programmes could “integrate AI skills, new directions in AI, right from maybe schools and greater into universities and, in fact, lifelong learning, to equip leaders, learners with skills, the critical thinking, to use AI for their personal uses as well as to drive the economy” [166-169].
MAJOR DISCUSSION POINT
Embedding AI in research‑based education pathways
Moderator
3 arguments · 110 words per minute · 626 words · 340 seconds
Argument 1
The strategic priority has shifted from merely developing AI to ensuring its effective deployment, especially for small and medium‑sized enterprises.
EXPLANATION
The moderator frames the discussion by stating that the focus is now on how AI can be deployed responsibly and inclusively, with particular attention to SMEs that form the backbone of economies.
EVIDENCE
He says, “The strategic priority is no longer solely the development of artificial intelligence, but very much its responsible and effective deployment” and highlights that “access to skills, innovation, ecosystems and trusted partnerships will determine whether AI becomes a driver of opportunity for all” especially for SMEs in Germany, India and elsewhere [2-4].
MAJOR DISCUSSION POINT
Strategic focus on AI deployment
Argument 2
The AI Academia‑Industry Innovation Partnership in Asia is a concrete initiative to bridge the AI skills gap through Living Labs and multi‑stakeholder cooperation.
EXPLANATION
The moderator announces a new partnership that brings together governments, academia and industry to create real‑world AI solutions and address the shortage of job‑ready talent.
EVIDENCE
He describes the initiative as “the AI academia industry innovation partnerships in Asia commissioned by BMZ and implemented through GIZ” that will combine “students, researchers, industry experts to co-create and test AI solutions in a real world setting” and mentions the Living Lab in Mumbai as an example of concrete cooperation [288-304].
MAJOR DISCUSSION POINT
Concrete partnership to develop AI skills
Argument 3
Development partners such as GIZ play a pivotal role in supporting AI skill development and international cooperation.
EXPLANATION
The moderator thanks GIZ at the start and later highlights its implementation role for the Asia partnership, underscoring the importance of development agencies in facilitating AI collaboration.
EVIDENCE
He thanks GIZ for the session at the beginning [22] and later notes that the AI partnership is “implemented through GIZ” and that the initiative will “address the widening gap between demand for AI skills and the need for job-ready talent” [288-291].
MAJOR DISCUSSION POINT
Role of development agencies in AI cooperation
Video Narrator
3 arguments · 119 words per minute · 280 words · 141 seconds
Argument 1
The main challenge for companies is not technology access but the shortage of people with AI‑ready skills.
EXPLANATION
The narrator emphasizes that while digital technologies are rapidly reshaping business, the bottleneck lies in finding skilled talent capable of adapting, innovating and working confidently with AI.
EVIDENCE
The narration states, “For companies the challenge is no longer access to technology but access to people. People with the skills to adapt, innovate and work confidently with AI” [312-314].
MAJOR DISCUSSION POINT
Skills gap as primary business challenge
Argument 2
Living labs are structured learning and innovation spaces that connect universities and companies to solve real industry challenges, providing hands‑on experience and improving employability.
EXPLANATION
The video describes living labs as environments where students work on actual business problems, companies test ideas, and faculty strengthen curricula through direct industry engagement, thereby creating a win‑win for education and business.
EVIDENCE
It explains that “Living labs are structured learning and innovation spaces where universities and companies collaborate on real, industry-driven challenges. Students work on real business problems. Companies test ideas. Faculty strengthen curricula through direct engagement with industry” [318-322].
MAJOR DISCUSSION POINT
Living labs as a solution to skill and innovation gaps
Argument 3
The AI Innovation Partnership combines German expertise with Asian entrepreneurial energy to develop AI‑ready skills, support innovation and build a cross‑border AI ecosystem.
EXPLANATION
The narrator presents the partnership as a collaborative effort that merges German technical know‑how with the dynamism of Asian economies to create AI‑ready talent and foster innovation across borders.
EVIDENCE
The narration says, “More than a program, this is a partnership at eye level. Combining German expertise with Asian entrepreneurial energy and drive to innovate. This is the AI Innovation Partnership. Uniting academia, industry, and governments across Germany and Asia to shape what’s next. Developing skills, enabling innovation, building an AI-driven future together” [315-322].
MAJOR DISCUSSION POINT
Cross‑regional AI partnership for skill development
Agreements
Agreement Points
All speakers agree that AI deployment must prioritize small and medium‑sized enterprises (SMEs) and ensure inclusive, decent work opportunities.
Speakers: Moderator, Dr. Bärbel Kofler, Mr. Jan Noether
The strategic priority has shifted from merely developing AI to its effective deployment, especially for SMEs (Moderator). AI should serve the workforce, including small and medium‑sized enterprises, to close power gaps and promote decent work (Dr. Bärbel Kofler). SMEs are the backbone of the German and Indian economies; their competitiveness depends on AI integration (Mr. Jan Noether).
The moderator frames the discussion around moving AI from development to deployment for SMEs [2-4]; Kofler stresses that AI must be a reliable partner for small and medium-sized enterprises to overcome power gaps and support decent work [41-44]; Noether underlines that 98.5 % of German businesses are SMEs and that AI must help them stay efficient and innovative [268-272].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors calls for inclusive AI that addresses inequalities in access to technology and labour markets, as highlighted in discussions on equitable AI and inclusive growth [S60][S61][S51].
Living Labs and the AI Academia‑Industry Innovation Partnership are identified as concrete cooperation models to bridge the AI skills gap.
Speakers: Dr. Bärbel Kofler, Moderator, Video Narrator, Mr. Jan Noether
AI Living Lab at the University of Mumbai integrates AI into curricula and links students with SMEs (Dr. Bärbel Kofler). The AI Academia‑Industry Innovation Partnership in Asia will combine students, researchers and industry experts in Living Labs (Moderator). Living labs are structured spaces where universities and companies co‑create real‑world AI solutions, improving employability (Video Narrator). Sandboxes for young talent across borders act as experimental Living Labs for SME‑focused AI solutions (Mr. Jan Noether).
Kofler describes the AI Living Lab in Mumbai that embeds AI in university curricula and connects students with small and medium-sized enterprises [46-53]; the moderator announces a broader AI Academia-Industry Innovation Partnership that uses Living Labs to co-create solutions [288-304]; the video narrator defines Living Labs as learning-innovation spaces linking academia and industry [318-322]; Noether calls for cross-border sandboxes where talent can experiment for SME benefit [274-276].
POLICY CONTEXT (KNOWLEDGE BASE)
Living Labs are promoted as structured learning-innovation spaces for university-industry collaboration to address real-world challenges and upskill talent, as described in multiple reports on talent innovation and partnership models [S52][S50][S67][S57].
There is shared concern about AI bias, data‑protection risks and dependence on non‑European platforms, calling for responsible AI governance.
Speakers: Dr. Bärbel Kofler, Arthur Rapp
AI is not neutral; biases in data and language exclude many users, creating power gaps that must be closed (Dr. Bärbel Kofler). Reliance on non‑European AI platforms creates dependence, bias and data‑leakage risks that threaten research freedom (Arthur Rapp).
Kofler highlights biases in data and language that exclude millions of speakers and stresses the need to close power gaps for inclusive AI [214-222]; Rapp warns of a big risk of dependence on non-European AI platforms, noting bias and the danger that user data may be harvested and used elsewhere [170-180].
POLICY CONTEXT (KNOWLEDGE BASE)
These concerns align with broader AI governance frameworks that emphasize bias mitigation, data-privacy safeguards and platform sovereignty, reflected in high-level AI governance reviews and privacy-focused discussions [S48][S54][S68][S69][S70][S71].
All agree on the importance of large‑scale education and training programmes, including dual‑system/apprenticeship models and joint degree programmes, to equip millions of students with AI skills.
Speakers: Mr. Govind Jaiswal, Mr. Jan Noether, Dr. Kusumita Arora
India’s National Education Policy, dual‑system apprenticeship and Germany‑India collaboration aim to equip around 40 million students with AI skills (Mr. Govind Jaiswal). A dual university agreement creates a master’s programme split between India and Germany (Mr. Jan Noether). Higher education and vocational training must be re‑oriented to AI‑enabled workplaces, linking academia with industry ecosystems (Dr. Kusumita Arora).
Jaiswal outlines the NEP 2020, dual-system apprenticeship and a partnership with Germany to train roughly 40 million students in AI and related fields [84-95][96-99]; Noether announces a dual university master’s programme with two-thirds taught in India and one-third in Germany [161-164]; Arora asks how higher education and vocational training can be aligned with industry to prepare graduates for AI-enabled workplaces [60-63].
POLICY CONTEXT (KNOWLEDGE BASE)
Large-scale AI upskilling initiatives, such as national programmes targeting millions of learners and dual-system apprenticeship models, have been documented in skill-development strategies and workforce reskilling reports [S56][S58][S59][S51][S57].
There is consensus on the need to translate intent into concrete, measurable commitments and to monitor implementation.
Speakers: Dr. Bärbel Kofler, Dr. Kusumita Arora, Moderator
The Hamburg Sustainability Declaration gathers concrete commitments from all stakeholders for inclusive AI (Dr. Bärbel Kofler). Panelists should convert intent into binding commitments and actions (Dr. Kusumita Arora). The moderator announces the AI Academia‑Industry Innovation Partnership as a concrete initiative to address the AI skills gap (Moderator).
Kofler cites the Hamburg Sustainability Conference where concrete stakeholder commitments were made for skilling people as AI co-creators [236-238]; Arora stresses the need to move from intent to commitment [283-285]; the moderator presents the AI Academia-Industry Innovation Partnership as a tangible step forward [288-291].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for measurable commitments and systematic monitoring echoes recommendations for moving from pilot projects to coordinated, evidence-based AI policies and implementation frameworks [S57][S56][S65][S58].
Similar Viewpoints
Both stress that AI systems are not neutral and that bias and data‑related risks must be addressed through responsible governance [214-222][170-180].
Speakers: Dr. Bärbel Kofler, Arthur Rapp
Need for responsible AI that overcomes language bias and power gaps (Dr. Bärbel Kofler). Risks of dependence on non‑European platforms, bias and data‑protection concerns (Arthur Rapp).
Both underline that SMEs are the backbone of the economies and that AI must be made accessible and beneficial for them [41-44][268-272].
Speakers: Dr. Bärbel Kofler, Mr. Jan Noether
AI deployment must serve SMEs and close power gaps (Dr. Bärbel Kofler). SMEs are central to economic competitiveness and need AI to stay efficient (Mr. Jan Noether).
Both advocate joint, cross‑border education models (dual‑system/apprenticeship and dual‑degree) to scale AI skills rapidly [84-95][161-164].
Speakers: Mr. Govind Jaiswal, Mr. Jan Noether
National Education Policy, dual‑system apprenticeship and Germany‑India collaboration to train millions (Mr. Govind Jaiswal). Dual university programme linking India and Germany for AI master’s education (Mr. Jan Noether).
Both call for moving from discussion to measurable actions and commitments [283-285][236-238].
Speakers: Dr. Kusumita Arora, Dr. Bärbel Kofler
Need to convert intent into concrete commitments (Dr. Kusumita Arora). Hamburg Sustainability Declaration provides concrete commitments for inclusive AI (Dr. Bärbel Kofler).
Both present Living Labs as a practical mechanism to bridge skill gaps and foster innovation [318-322][46-53].
Speakers: Video Narrator, Dr. Bärbel Kofler
Living labs as structured spaces linking academia and industry to solve real challenges (Video Narrator). AI Living Lab in Mumbai as a concrete example of university‑industry cooperation (Dr. Bärbel Kofler).
Unexpected Consensus
All three speakers—Dr. Bärbel Kofler, Arthur Rapp and Augustus Azariah—converge on the importance of data protection and privacy frameworks (including GDPR) as a foundation for trustworthy AI, despite coming from policy, research and industry perspectives.
Speakers: Dr. Bärbel Kofler, Arthur Rapp, Dr. Augustus Azariah
Responsible AI must address bias, language exclusion and power gaps (Dr. Bärbel Kofler). Dependence on foreign platforms raises data‑leakage and sovereignty risks (Arthur Rapp). Learning from the EU’s GDPR experience can guide AI governance and early education (Dr. Augustus Azariah).
While Kofler and Rapp focus on bias and platform dependence, Azariah unexpectedly brings GDPR into the conversation, creating a three-way consensus on the need for strong data-protection regimes to underpin responsible AI [214-218][170-180][279-282].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on GDPR-aligned data protection reflects ongoing international dialogue on privacy standards for AI, as discussed in privacy-focused forums and AI-specific data-protection analyses [S54][S55][S68][S63][S70].
Overall Assessment

The panel shows strong consensus on four pillars: (1) prioritising SME‑focused, inclusive AI deployment; (2) using Living Labs and the AI Academia‑Industry Innovation Partnership as concrete cooperation mechanisms; (3) addressing AI bias, data‑protection and platform dependence through responsible governance; (4) scaling education via dual‑system/apprenticeship models and joint degree programmes, backed by concrete commitments.

High consensus – most speakers align on the need for practical, cross‑border cooperation, skill development and responsible AI. This convergence suggests that the proposed partnership and Living‑Lab approach have broad political and industry support, increasing the likelihood of effective implementation and measurable outcomes.

Differences
Different Viewpoints
Perception of AI‑induced job loss and the smoothness of the transition to AI‑enabled work
Speakers: Dr. Bärbel Kofler, Mr. Govind Jaiswal
Addressing job‑loss fears and ensuring decent work (Dr. Bärbel Kofler) National Education Policy, dual‑system apprenticeship, and Germany‑India collaboration to equip 40 million students (Mr. Govind Jaiswal)
Dr. Kofler stresses that fear of job loss is legitimate and that AI deployment must be handled carefully to protect decent work, especially for SMEs [36-38][41-44]. In contrast, Mr. Jaiswal asserts that the transition will be seamless and that no one will be adversely affected, emphasizing large-scale training programmes and dual-system apprenticeships as sufficient to manage the shift [81-82][84-95].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over AI-driven job displacement and transition pathways have featured in reports on preventing jobless growth and the need for comprehensive reskilling programmes [S58][S59][S51].
Primary driver for building AI skills – government‑led Living Labs versus industry‑led faculty certification and hackathons
Speakers: Dr. Bärbel Kofler, Dr. Augustus Azariah
AI Living Lab that embeds AI into university curricula and connects students with SMEs (Dr. Bärbel Kofler) Industry‑driven college engagements, hackathons, and faculty training to bridge skill gaps (Dr. Augustus Azariah)
Dr. Kofler promotes a government-coordinated AI Living Lab that integrates AI into university curricula and links students with small and medium-sized enterprises as a concrete cooperation model [46-53][241-247]. Dr. Azariah argues that industry should take the lead by certifying faculty, running large hackathons and providing endowment funds to close the skill gap, especially in tier-2/3 cities [133-140][115-124]. Both aim to upskill but differ on which sector should spearhead the effort.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between government-driven Living Labs and industry-led certification models reflects differing tactical approaches to skill development noted in analyses of living-lab ecosystems and industry-academia collaborations [S52][S50][S67].
Focus on platform dependence and data sovereignty versus broader inclusive AI concerns
Speakers: Arthur Rapp, Dr. Bärbel Kofler
Risks of dependence on non‑European platforms, bias, and data‑protection concerns (Arthur Rapp) Need for responsible AI that overcomes language bias and power gaps, ensuring inclusive access (Dr. Bärbel Kofler)
Arthur Rapp warns that reliance on AI services owned outside Europe creates dependency, bias and data-leakage risks that threaten research freedom and sovereignty [170-180][185-186]. Dr. Kofler acknowledges AI bias and language exclusion but frames the issue as a power gap to be closed through international cooperation, without specifically addressing platform ownership [214-222][225-227]. The disagreement lies in the priority given to platform sovereignty versus broader inclusivity.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on platform dependence and data sovereignty contrast with broader inclusive AI agendas, a dynamic observed in data-sovereignty panels and inclusive AI policy debates [S47][S49][S60][S68].
Timing of AI education – early elementary seeding versus higher‑education and vocational re‑orientation
Speakers: Dr. Augustus Azariah, Dr. Kusumita Arora, Dr. Bärbel Kofler
Learning from German GDPR and vocational‑training models to seed AI education from elementary level (Dr. Augustus Azariah) Higher education and vocational training must be re‑oriented to AI‑enabled workplaces, linking academia, industry and innovation ecosystems (Dr. Kusumita Arora) AI Living Lab that embeds AI into university curricula and connects students with SMEs (Dr. Bärbel Kofler)
Dr. Azariah calls for AI to be introduced at the elementary level, using German GDPR experience and vocational-training models as templates [279-282]. Dr. Arora and Dr. Kofler focus on re-orienting higher education, vocational systems and university curricula through Living Labs and industry partnerships, arguing that these are the immediate levers for AI-ready talent [60-63][46-53][241-247]. The disagreement concerns the optimal entry point for AI education.
POLICY CONTEXT (KNOWLEDGE BASE)
The optimal timing for AI education-from early school curricula to higher-education and vocational pathways-has been debated in AI literacy initiatives and workforce upskilling strategies [S59][S56][S58].
Unexpected Differences
Platform dependence versus inclusive AI without explicit platform focus
Speakers: Arthur Rapp, Other panelists (e.g., Dr. Bärbel Kofler, Mr. Jan Noether)
Risks of dependence on non‑European platforms, bias, and data‑protection concerns (Arthur Rapp) AI Living Lab that embeds AI into university curricula and connects students with SMEs (Dr. Bärbel Kofler) Cross‑border sandboxes for SME innovation and joint degree programmes (Mr. Jan Noether)
While most speakers concentrate on cooperation models, living labs and education, Arthur Rapp uniquely raises the strategic risk of relying on foreign AI platforms, a concern not addressed by the others, making it an unexpected point of divergence [170-180][46-53][274-276].
POLICY CONTEXT (KNOWLEDGE BASE)
The trade-off between emphasizing platform sovereignty and pursuing inclusive AI without platform-specific constraints mirrors the policy discourse on balancing national control with global collaboration [S47][S49][S68].
Early AI education versus focus on higher education and vocational training
Speakers: Dr. Augustus Azariah, Dr. Kusumita Arora, Dr. Bärbel Kofler
Learning from German GDPR and vocational‑training models to seed AI education from elementary level (Dr. Augustus Azariah) Higher education and vocational training must be re‑oriented to AI‑enabled workplaces (Dr. Kusumita Arora) AI Living Lab that embeds AI into university curricula and connects students with SMEs (Dr. Bärbel Kofler)
Azariah’s push for elementary-level AI education contrasts with the panel’s predominant emphasis on university-level curricula and living labs, revealing an unexpected split on where AI education should begin [279-282][60-63][46-53].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over early AI exposure versus concentration on higher-education and vocational training aligns with ongoing discussions about AI literacy across the education continuum and the need for targeted skill pathways [S59][S56][S58].
Overall Assessment

The panel largely agrees on the need for international cooperation, inclusive AI, and skill development, but diverges on how to manage job‑loss anxieties, which sector should lead skill‑building initiatives, the importance of platform sovereignty, and the appropriate entry point for AI education.

Moderate – disagreements are substantive on implementation pathways (e.g., government vs industry leadership, timing of education, and platform dependence) but do not fracture the shared vision of collaborative AI development. These differences suggest that concrete policy design will require negotiation to align priorities and allocate responsibilities.

Partial Agreements
Both agree that AI deployment must be accompanied by coordinated action among government, industry and academia to protect workers and ensure decent work, but Kofler emphasizes careful handling of job‑loss anxieties while Jaiswal stresses structured stakeholder commitments and governance mechanisms as the primary solution [36-38][260-263].
Speakers: Dr. Bärbel Kofler, Mr. Govind Jaiswal
Addressing job‑loss fears and ensuring decent work (Dr. Bärbel Kofler) Structured industry‑academia collaboration with stakeholder commitments and joint governance (Mr. Govind Jaiswal)
Both recognise the need for inclusive AI and capacity building, but Azariah focuses on industry‑led faculty upskilling, whereas Kofler stresses broader inclusive AI governance and language equity, leading to different pathways toward the same inclusive goal [115-124][214-222].
Speakers: Dr. Augustus Azariah, Dr. Bärbel Kofler
Industry‑driven college engagements, hackathons, and faculty training to bridge skill gaps (Dr. Augustus Azariah) Need for responsible AI that overcomes language bias and power gaps, ensuring inclusive access (Dr. Bärbel Kofler)
Takeaways
Key takeaways
– AI must be deployed responsibly and inclusively, addressing fears of job loss and ensuring decent work conditions.
– A persistent power and access gap exists between large corporations and SMEs, as well as between Global North and South; closing this gap is essential for equitable AI benefits.
– Education and skill development are critical: AI Living Labs, dual‑system apprenticeship models, national education policies, faculty certification, and early exposure from primary school onward are needed to prepare a large talent pool.
– Effective collaboration requires structured industry‑academia‑government partnerships, living labs, sandboxes, and joint degree programmes that align curricula with real‑world industry challenges.
– International cooperation—particularly Germany‑India (and broader Asia) initiatives such as the AI Academia‑Industry Innovation Partnership—provides a framework for shared resources, standards, and commitments.
– Governance issues around bias, data sovereignty, and dependence on non‑European AI platforms must be tackled through responsible AI frameworks, data‑protection rules, and multilingual support.
Resolutions and action items
– Launch of an AI Living Lab at the University of Mumbai linking students with small‑ and medium‑sized enterprises.
– Establishment of a dual‑university master’s programme between Germany (Baden‑Württemberg) and India, with two‑thirds of instruction in India.
– Creation of the AI Academia‑Industry Innovation Partnership in Asia (Germany, India, Vietnam) coordinated by BMZ and implemented by GIZ.
– Commitment by the Indian Ministry of Education to embed AI and related skill courses for approximately 40 million students under the National Education Policy and dual‑education system.
– Implementation of faculty certification programmes (e.g., Microsoft Copilot) and the provision of endowment funds to enable faculty‑led AI research and patenting.
– Adoption of the Hamburg Sustainability Declaration commitments to promote responsible AI, privacy safeguards, and transparent reporting.
– Proposal to develop cross‑border sandboxes and living labs for SME collaboration, enabling joint prototyping and low‑risk innovation.
Unresolved issues
– Concrete mechanisms for monitoring and guaranteeing that AI deployment does not lead to net job losses or degrade work quality.
– Detailed funding structures and long‑term financial sustainability for living labs, faculty endowments, and sandboxes.
– Standardisation of AI curricula across diverse linguistic and institutional contexts, especially for primary and secondary education.
– Comprehensive data‑protection and sovereignty frameworks for cross‑border AI projects, including how to mitigate bias from foreign platforms.
– Specific metrics, timelines, and evaluation criteria for scaling SME integration and measuring productivity gains.
– Strategies for ensuring continuous industry involvement in curriculum design, assessment, and practical training beyond initial pilot phases.
Suggested compromises
– Combine government‑led regulatory frameworks with industry‑driven training programmes (dual‑system model) to balance oversight and agility.
– Adapt German vocational‑training best practices to the Indian context, allowing for rapid skill acquisition while respecting local educational structures.
– Create joint sandboxes where risk‑averse German SMEs can test innovations with the faster, more flexible Indian partners, sharing risk and benefit.
– Focus on developing both AI user competencies and co‑creator capabilities to bridge the creator‑user power gap.
– Implement incremental, pilot‑based approaches (living labs) before scaling to larger national programmes, ensuring evidence‑based adjustments.
Thought Provoking Comments
We need to close the power gap so that new technology can be spread and used by everybody, especially small and medium‑sized enterprises, and make AI open‑source, climate‑friendly and regulated to serve people and enterprises of all sizes.
Highlights the systemic inequality in AI access and frames it as a geopolitical and economic challenge, moving the discussion from abstract benefits to concrete distributional concerns.
Shifted the conversation toward equity and the role of government in regulating and facilitating AI for SMEs; prompted later speakers to mention concrete initiatives like the AI Living Lab and dual‑education models.
Speaker: Dr. Bärbel Kofler
The transition caused by new technology should be seamless – like when electricity arrived – and India’s National Education Policy 2020, dual education system, and new research parks are being used to embed AI across curricula, from humanities to engineering, with industry exposure mandatory.
Provides a detailed policy roadmap linking education reform to AI readiness, using historical analogy to demystify disruption and emphasizing structural reforms.
Introduced a concrete national strategy, leading other panelists to discuss partnership models and reinforcing the need for coordinated government‑industry‑academia action.
Speaker: Mr. Govind Jaiswal
Many fresh graduates present AI‑generated CVs from tools like ChatGPT; we need faculty trained in tools like Copilot and to certify them, as we did in a recent hackathon with 18,000 students, to ensure genuine skill development and bridge the industry‑academia gap.
Exposes a practical problem of AI‑assisted credential inflation and proposes a scalable solution through faculty upskilling and certification, highlighting talent beyond elite institutions.
Redirected the dialogue to the quality of AI education and the hidden talent in tier‑2/3 cities, prompting Jan Noether and others to discuss cross‑border talent pipelines.
Speaker: Dr. Augustus Azariah
There is a big risk of dependence on non‑European AI platforms, which threatens research freedom, data sovereignty, and could lead to bias and loss of control over our own innovations.
Raises strategic concerns about AI platform dependency, data protection, and bias, linking technical choices to national security and academic freedom.
Introduced a new dimension of geopolitical risk, causing participants to stress the importance of open‑source, responsible AI and influencing Kofler’s later emphasis on concrete international commitments.
Speaker: Arthur Rapp
AI applications are most promising in sectors like healthcare, agriculture, water management, and energy sustainability; we have already signed a dual‑degree master’s program with Baden‑Württemberg where two‑thirds of teaching occurs in India.
Identifies specific high‑impact domains for AI collaboration and showcases an actionable education partnership, moving the conversation from abstract policy to tangible projects.
Expanded the discussion to sector‑specific cooperation and validated the earlier education‑focused proposals, encouraging others to think about concrete pilot programs.
Speaker: Mr. Jan Noether
AI should be seeded at the elementary level; we need to enable schools with expertise so that children grow up AI‑ready, learning from the EU’s GDPR model for data security and Germany’s vocational training excellence.
Advocates for early‑stage AI literacy, linking it to broader societal safeguards and lifelong learning, thus broadening the scope of the partnership to K‑12 education.
Served as a concluding call‑to‑action that broadened the timeline of cooperation, prompting the moderator to highlight the AI Academia‑Industry Innovation Partnership and reinforcing the need for long‑term commitment.
Speaker: Dr. Augustus Azariah (final remark)
Overall Assessment

The discussion was propelled forward by a series of pivotal remarks that moved the dialogue from high‑level aspirations to concrete challenges and solutions. Dr. Kofler’s framing of the ‘power gap’ set the equity lens, which Govind Jaiswal answered with a detailed national education strategy. Azariah’s exposure of AI‑generated CVs and faculty upskilling highlighted practical skill gaps, while Arthur Rapp’s warning about platform dependence introduced a geopolitical urgency. Jan Noether anchored the conversation in sector‑specific collaborations and concrete dual‑degree programs, and Azariah’s final emphasis on early AI education expanded the partnership horizon to K‑12. Each of these comments redirected the conversation, deepened analysis, and prompted other participants to elaborate on policy, industry, and academic actions, ultimately shaping a cohesive narrative around inclusive, responsible, and actionable AI cooperation between Germany and India.

Follow-up Questions
How should higher education and vocational training systems be reoriented to work closely with industry and innovation ecosystems to prepare graduates for AI‑enabled workplaces?
Ensures curricula match the skills needed for AI‑driven jobs and reduces the talent gap.
Speaker: Dr. Kusumita Arora
What partnership or collaboration models already exist between Indian industry and academia, and how can students translate AI innovation into productivity improvements and sustainability for companies?
Seeks concrete examples of effective industry‑academia links that drive economic and environmental benefits.
Speaker: Dr. Kusumita Arora
Where is the strongest potential for cooperation in AI that translates directly into productivity gains and economic growth?
Identifies priority sectors where bilateral collaboration can deliver measurable economic impact.
Speaker: Dr. Kusumita Arora
How can academic research programmes integrate AI skills and new AI directions from schools through lifelong learning to equip leaders with critical thinking for personal and economic use?
Addresses the need for a continuous learning pipeline that keeps pace with rapid AI advances.
Speaker: Dr. Kusumita Arora
How should international cooperation programmes get involved to better integrate AI for skills and innovation initiatives and ensure an inclusive global workforce?
Calls for coordinated policy and funding mechanisms to avoid fragmented efforts and promote equity.
Speaker: Dr. Kusumita Arora
How can German and Indian SMEs better integrate into the AI partnership effort?
SMEs constitute the bulk of both economies; practical pathways are needed for their AI adoption.
Speaker: Dr. Kusumita Arora
What concrete commitments and reporting mechanisms should be established in international AI cooperation to ensure accountability and progress?
Highlights the risk of empty declarations and the need for measurable outcomes.
Speaker: Dr. Bärbel Kofler
How can the power gap between AI creators and users, especially in the Global South, be closed through policy and cooperation?
Addresses equity concerns so that AI benefits are widely distributed.
Speaker: Dr. Bärbel Kofler
What are the risks and implications of dependence on non‑European AI platforms for research freedom, data sovereignty, and bias?
Critical for maintaining autonomous research ecosystems and protecting against hidden biases.
Speaker: Arthur Rapp
What privacy and intellectual‑property risks arise when AI tools are used for drafting research proposals or career decisions, and how can they be mitigated?
Ensures that personal and innovative ideas are not inadvertently exposed or appropriated.
Speaker: Arthur Rapp
How effective are AI Living Labs as structured learning and innovation spaces for co‑creation between academia and industry?
Needs evaluation to determine whether living labs deliver the promised skill development and innovation outcomes.
Speaker: Moderator
What is the magnitude of the AI‑related skill gap (e.g., 1.3 million unfilled jobs) and how effective are current training programmes in closing it?
Quantifying the gap informs policy and investment priorities.
Speaker: Dr. Bärbel Kofler
What impact do faculty certification programmes (e.g., Copilot certification) and endowment funds have on faculty innovation, curriculum improvement, and patent generation?
Understanding this impact helps scale faculty capacity and industry‑relevant research.
Speaker: Dr. Augustus Azariah
How can sandbox environments be designed and evaluated to accelerate AI adoption by SMEs in Germany and India?
Sandboxes can lower risk for SMEs; their design and outcomes need systematic study.
Speaker: Mr. Jan Noether
How can AI systems be made more inclusive for speakers of less‑represented languages and for users with literacy challenges?
Ensures that AI does not exacerbate existing digital divides.
Speaker: Dr. Bärbel Kofler
In what concrete ways can AI support the Sustainable Development Goals, and how can progress be measured?
Links AI development to broader global priorities and provides a framework for impact assessment.
Speaker: Dr. Bärbel Kofler
What lessons from the German dual‑education system can be transferred to India to seed AI skills at elementary and secondary levels?
Early‑stage education is crucial for building a future AI‑ready workforce.
Speaker: Mr. Govind Jaiswal

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Global Enterprises Show How to Scale Responsible AI

Global Enterprises Show How to Scale Responsible AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, comprising senior leaders from Infosys, IBM, NVIDIA and Meta, examined how trust and responsible AI can be scaled across enterprises [1-5]. Geeta Gurnani noted that clients who two years ago were unfamiliar with responsible AI now treat security as a “shift-left” priority and expect governance to be integral rather than an afterthought [17-24], and she illustrated the immaturity of many organisations by recounting a senior leader who managed AI governance on an Excel sheet, a practice she said cannot support large-scale deployment [26-28]. Sundar Nagalingam added that when AI is delivered to billions of users, the most common failures are not infrastructure outages but missing or weak control mechanisms that expose functional or security vulnerabilities [34]. Sunil Abraham warned against anthropomorphising AI, emphasizing that generative models are merely weight files, dual-use by nature, and that fearing them is unnecessary if a Unix-style security model is applied [36-49].


The panel agreed that trustworthy AI must be judged by the end-user’s confidence that the system is secure, non-hallucinatory and compliant with applicable laws [55-64]. Sundar grouped the necessary safeguards into three buckets-functional safety, AI-specific safety, and cybersecurity-using AI-assisted robotic surgery as an example of how each layer must be addressed [68-75]. Geeta stressed that governance should be a gate-keeping control backed by senior leadership and eventually embedded in the enterprise risk framework rather than remaining a manual, post-hoc review [114-144]. She also said that customers will only pay a premium for “trust-grade” AI when the use case directly affects reputation or compliance, while internal experiments may remain low-cost [216-233].


Sundar affirmed that high-performance AI hardware should ship with built-in privacy guardrails, citing autonomous driving and aerospace as domains where such safety layers are non-negotiable [148-158]. Sunil highlighted that ad-supported generative AI can democratise access and help bridge the AI divide, arguing that the business model does not inherently conflict with AI neutrality [182-200]. All panelists concurred that AI model innovation is outpacing governance frameworks, making rapid standardisation and accountability essential [301-304]. The discussion ended without consensus on mandatory watermarking, with participants split between viewing it as a useful demarcation and seeing it as an impractical universal requirement [327-335].


Keypoints


Major discussion points


Trust and responsible-AI adoption is still immature in many organisations.


Geeta notes that “security used to be an afterthought… now people first think security” and that “people are adopting AI but trust, governance, security is taking a prime stage now” [17-18]. She also recounts a senior leader who managed AI governance on an “Excel sheet,” highlighting how rudimentary practices still block scaling [18-24].


When AI is scaled to billions of users the first failures are in safety and control, not raw infrastructure.


Sundar explains that “the systems that drive the infra… break” and that failures appear either in “how efficiently each of the use cases… gets served” or in “whether it is being served safely in a secure way” [34-38]. He later groups the critical failure domains into three buckets – functional safety, AI safety, and cybersecurity – using the example of AI-assisted robotic surgery [68-75].


A practical definition of “trustworthy AI” centres on the end-user’s confidence that the model is secure, non-hallucinating and compliant.


Geeta breaks it down: the model must have “passed the security test,” be “not hallucinating” with monitoring controls, and meet “compliance” for the relevant law or industry [55-64]. She stresses that trustworthy AI is about “how the end user will consume confidently” [65-66].


Governance must move from a passive, observation-only role to an enforceable control embedded in enterprise risk management.


Geeta describes the need for senior-leadership commitment, “governance as a control point, like a gatekeeper,” and cites the IBM ethical board that must approve every AI proposal before sales can proceed [130-141]. She later notes that AI risk should be folded into the organisation’s overall risk posture rather than treated as a separate silo [143-144].


Global regulation, standards and industry-wide alignment are still evolving, and many panelists see a gap between rapid model innovation and slower governance.


Sundar calls for “standardization… then tailor it for the needs of each of the countries” and outlines a three-step approach (platform safety, algorithmic safety, ecosystem safety) [240-249]. Sunil argues that ad-supported AI can level access, while maintaining that there is “no regulatory vacuum for AI” and that responsibility ultimately rests on developers [190-202][295-298]. When asked whether model advances outpace governance, all panelists answered affirmatively [298-304].


Overall purpose / goal of the discussion


The panel was convened to explore how large enterprises (Infosys, IBM, NVIDIA, Meta) can build and scale trust in generative AI-covering responsible-AI practices, safety and security failures, governance mechanisms, and the need for coherent regulatory and industry standards. The moderators repeatedly asked participants to articulate concrete “non-negotiables” and practical steps for embedding trust at scale.


Overall tone and its evolution


– The conversation opens on an enthusiastic and collegial note, with applause and light banter as the panelists are introduced [5-9].


– It quickly becomes analytical and cautionary, focusing on concrete challenges (Excel-sheet governance, failure modes, safety buckets) [17-24][34-38][68-75].


– A pragmatic, solution-oriented tone emerges when discussing governance integration and enterprise risk [130-144].


– Mid-session, skepticism and philosophical nuance appear, especially in Sunil’s remarks about anthropomorphisation, ontology, and the limits of regulation [36-44][295-298].


– The final segment shifts to a rapid-fire, slightly humorous style, with yes/no questions, playful disagreements, and a closing “thank you” [273-284][327-334].


Overall, the tone moves from upbeat introduction → serious technical and policy analysis → reflective skepticism → light-hearted rapid questioning, maintaining a professional yet conversational atmosphere throughout.


Speakers

Mr. Syed Ahmed – Moderator; member of the Responsible AI Office at Infosys [S1]


Ms. Geeta Gurnani – Field CTO, Technical Pre‑sales and Client Engineering, IBM [S3]


Mr. Sundar R. Nagalingam – Senior Director, AI Consulting Partners, NVIDIA [S4]


Mr. Sunil Abraham – Public Policy Director, Meta [S6]


Additional speakers:


– None identified beyond the four listed above.


Full session report: Comprehensive analysis and detailed insights

The panel opened with brief introductions of the four senior representatives – Mr Syed Ahmed (Infosys), Geeta Gurnani (IBM’s field CTO for technical pre-sales and client engineering), Mr Sundar R. Nagalingam (senior director of AI consulting partners at NVIDIA), and Mr Sunil Abraham (public policy director at Meta) – and the moderator framed the session as a discussion on how large organisations can scale responsible, trustworthy AI while tackling governance, safety and regulatory challenges [1-5].


Geeta Gurnani highlighted a dramatic shift in industry attitudes toward security. Two years ago many clients still asked “what is responsible AI and what is trust?” but today “security has become a shift-left priority – people first think security then everything else” [17-19]. She illustrated the immaturity of current governance by recounting a senior leader who managed AI risk on an Excel spreadsheet, a practice she argued leaves organisations unable to scale with confidence [23-28].


Sundar Nagalingam explained that when AI systems are delivered to billions of users the first points of failure are not the underlying hardware but the control layers that orchestrate the infrastructure. He grouped these risks into three “buckets”: functional safety (e.g., an AI-assisted robotic surgery delivering the correct clinical outcome), AI-specific safety (bias, training-time validation, synthetic testing) and cybersecurity (protecting the system from malicious intrusion) [68-75]. He added that the proliferation of standards reflects a deeper problem: the lack of a clear party to hold accountable when an AI-driven robotic system fails [260-267].


When asked to define “trustworthy AI”, Geeta framed it from the end-user’s perspective: a model must pass a security test, be monitored to prevent hallucinations, and comply with the relevant legal regime, thereby allowing the end user to consume the output confidently [55-66].


Geeta argued that governance must move from passive observation to an enforceable, gate-keeping control embedded in the organisation’s risk framework. She called for senior-leadership commitment and for AI risk to be folded into the enterprise risk posture rather than treated as a siloed function [130-144].
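
To make the gate-keeping idea concrete, the following is a minimal sketch (in Python, with invented names; it is not IBM’s or any vendor’s actual tooling) of governance as a control point rather than an observer: a use case is blocked unless the three checks Geeta describes, security testing, output monitoring and compliance sign-off, all pass.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    passed_security_test: bool     # has the model/use case passed security testing?
    has_output_monitoring: bool    # is hallucination/output monitoring in place?
    compliance_signoff: bool       # approved for the applicable law of the land?

def governance_gate(use_case: UseCase) -> None:
    # A gatekeeper, not an observer: a failing check stops the proposal
    # instead of merely logging a warning for later review.
    checks = {
        "security test": use_case.passed_security_test,
        "output monitoring": use_case.has_output_monitoring,
        "compliance sign-off": use_case.compliance_signoff,
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        raise PermissionError(
            f"Use case '{use_case.name}' blocked; missing: {', '.join(failed)}")

proposal = UseCase("claims-triage-assistant", True, True, False)
try:
    governance_gate(proposal)   # raises: compliance sign-off is still missing
except PermissionError as err:
    print(err)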


Sundar echoed the need for standardisation before localisation. He proposed first establishing a safe platform (the “template”) and then fine-tuning it for each country’s regulations, covering platform safety, algorithmic safety and ecosystem safety [240-249]. This mirrors Geeta’s call for a technology-level baseline that can be adapted per jurisdiction [290-294].
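
As a sketch of this standardise-then-localise pattern, a shared safety template covering the three layers can be copied and then tailored per jurisdiction; the control names and country rules below are illustrative assumptions, not any published standard.

# Shared baseline covering the three layers Sundar names.
BASELINE = {
    "platform_safety":    {"encryption_at_rest": True, "access_logging": True},
    "algorithmic_safety": {"bias_evaluation": True, "red_teaming": True},
    "ecosystem_safety":   {"third_party_audit": True, "incident_reporting": True},
}

# Per-country additions layered on top of the common template (illustrative).
COUNTRY_OVERRIDES = {
    "IN": {"algorithmic_safety": {"multilingual_evaluation": True}},
    "EU": {"platform_safety": {"data_residency": "EU"}},
}

def localised_profile(country: str) -> dict:
    """Copy the baseline, then merge in the country-specific requirements."""
    profile = {layer: dict(controls) for layer, controls in BASELINE.items()}
    for layer, extra in COUNTRY_OVERRIDES.get(country, {}).items():
        profile[layer].update(extra)
    return profile

print(localised_profile("IN")["algorithmic_safety"])
# {'bias_evaluation': True, 'red_teaming': True, 'multilingual_evaluation': True}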


On the hardware side, Sundar affirmed that high-performance AI infrastructure should ship with built-in privacy guardrails. He cited autonomous driving and aerospace as domains where “the safety layer is non-negotiable” and where silicon-level protections are essential [148-158].


Sunil Abraham offered a philosophical stance, warning against anthropomorphising AI. He framed an AI model as a single weight file, a dual-use artifact, and argued that a Unix-style “security-first” mental model gives confidence because, at worst, the file acts within its own user space, while reminding listeners that AI should not be treated as a sentient entity [36-49]. When asked about the “Open Claw” and “Molt Book” agent community, Sunil dismissed it as stochastic parrots hallucinating, consistent with that stance [36-44].
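
A minimal POSIX sketch of that mental model follows; the path, user IDs and worker script are hypothetical, the launcher is assumed to hold the privileges it then drops, and a production sandbox would add namespaces, seccomp and cgroups on top. The weight file is just a readable file, and the process that loads it runs as an unprivileged user confined to that user space.

import os
import subprocess

WEIGHTS = "/srv/models/model.weights"   # hypothetical path to the weight file
SANDBOX_UID = 1001                      # a dedicated, unprivileged "model" user
SANDBOX_GID = 1001

def drop_privileges():
    # Runs in the child just before the worker starts: give up elevated
    # rights so the process can only act within the sandbox user's space.
    os.setgid(SANDBOX_GID)
    os.setuid(SANDBOX_UID)

os.chmod(WEIGHTS, 0o444)  # the weight file is readable, never executable

subprocess.run(
    ["python3", "inference_worker.py", WEIGHTS],  # hypothetical worker script
    preexec_fn=drop_privileges,
    check=True,
)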


Sunil also discussed Meta’s “Trusted Execution Environment” paper, noting that roughly half of the ~80-page document analyses hardware-level attacks [180-186]; the moderator added that, at last count, some 33 distinct attack strategies comprising more than 100 individual attack types have been catalogued [190-194].
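
The lifecycle the paper describes (provision ephemeral confidential compute, verify it, use it, extinguish it) can be sketched as follows; every class and check here is an illustrative stand-in for the pattern, not Meta’s actual API, and a real TEE would return signed attestation evidence rather than a plain string.

class ConfidentialInstance:
    """Stand-in for an ephemeral cloud TEE instance."""
    def attest(self) -> str:
        return "measurement-of-loaded-code"   # real TEEs return signed evidence
    def run(self, request: bytes) -> bytes:
        return b"result for " + request       # the compute happens inside the TEE
    def destroy(self) -> None:
        print("instance extinguished")        # nothing persists afterwards

EXPECTED_MEASUREMENT = "measurement-of-loaded-code"

def process_with_ephemeral_tee(request: bytes) -> bytes:
    enclave = ConfidentialInstance()          # provisioned on demand, per task
    try:
        if enclave.attest() != EXPECTED_MEASUREMENT:
            raise RuntimeError("attestation failed; refusing to send data")
        return enclave.run(request)
    finally:
        enclave.destroy()                     # torn down even if something fails

print(process_with_ephemeral_tee(b"group-chat summary"))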


Regarding business models, Sunil argued that ad-supported generative AI can increase accessibility and help close the AI-usage gap, especially for low-income users, without necessarily compromising the principle of AI neutrality [182-202].


Geeta addressed market willingness to pay for “trust-grade” AI. She said enterprises are unlikely to pay a premium for internal experiments, but will do so when the AI product is consumer-facing or carries downstream reputational, compliance or brand risk – “I cannot afford to fail there” [221-233].


All three panelists concurred that model innovation is outpacing governance [301-305], underscoring the urgency of accelerating standards and oversight.


Points of disagreement emerged. Geeta advocated for a universal technical baseline before geographic regulation [290-294], whereas Sunil asserted that existing laws already apply and there is no regulatory vacuum for AI [295-296]. Sundar’s middle-ground proposal of standardisation then localisation sits between these positions. On mandatory watermarking, Sundar expressed skepticism, arguing that the industry has already accepted AI-generated content and that blanket watermarking may be unnecessary [332-337]; Sunil declined to give a direct answer [327-330], while Geeta suggested future technology might render watermarks unnecessary [338].


The discussion yielded several actionable take-aways: senior leadership must mandate AI governance as a non-optional, gate-keeping function and integrate AI risk into enterprise risk management [130-144]; organisations should replace ad-hoc tools such as Excel with automated, runtime-enforced governance pipelines [114-124]; hardware vendors need to embed privacy and safety guardrails at the silicon level for high-risk sectors [148-158]; a three-layer safety framework (functional safety, AI safety, cybersecurity) should become the industry baseline, with country-specific tweaks applied thereafter [68-75][240-249]; and while ad-supported models can increase accessibility, their long-term impact on trust and neutrality warrants further study [182-202].


In closing, the moderator thanked the participants and the audience, noting that the diversity of perspectives underscored consensus on layered security and governance while highlighting divergent views on global regulatory alignment and content-labeling, pointing to clear directions for future research and policy [339-340].


Session transcript: Complete transcript of the session
Mr. Syed Ahmed

of responsible AI office in Infosys. And absolute privilege to announce my co-panelist, Geeta Gurnani, field CTO, technical pre-sales and client engineering at IBM. Sundar R. Nagalingam, senior director AI consulting partners at NVIDIA. And Sunil Abraham, public policy director at Meta. So now between Infosys, IBM, NVIDIA and Meta, you can’t get better global enterprises and better AI companies that are building trust at scale. So please join me in giving a big round of applause to my co-panelists. So let me request the panelists to please come on stage for a very quick photograph as requested by the organizers before we get started with the panel discussion. Thank you. All right. So it’s really amazing to be on panel with all of you again.

So before we get started with, you know, a lot of heated discussions on the scaling of trust, because trust is something that everyone thinks, you know, they have a different perspective on, let me get started with very simple questions and then we’ll do the hard-hitting ones a little later. So Geeta, you have been working for decades with customers. You have been working with them on trust and responsible AI. You have been attending a lot of meetings. What is something that, you know, surprises you? I mean, what is something that has happened in the industry, in your experience, after which you have felt: oh, even after decades of experience, this industry still surprises me?

Ms. Geeta Gurnani

Sure, so thank you so much, Syed, for that question. And as I was mentioning when I was standing outside: when I was walking in and meeting many clients almost two years back, everybody was asking me, what is this responsible AI and what is this trust? And what surprised me is that we all witnessed so much learning from security as a concept, right? Security always used to be an afterthought, and now I think people just can’t afford not to think security. It has become completely shift-left: people first think security, then everything else. But in spite of that whole learning, what I witnessed in the last 24 months is that people are adopting AI, but trust, governance, security is only taking a prime stage now. It wasn’t the first thought. And when I met a very senior leader, I will, of course, not name them.

And I told them that you were starting your journey on Gen AI: can we work with you on responsible AI? And he said, but that will block my innovation, and I don’t want to block my innovation. And I asked him, so how do you manage the governance? He said, on an Excel sheet. And I was, wow. I said, and you’re ready to spend so much money. But I think now when I go and meet, I realize that that organization is not able to scale because they’re not confident. This Excel is never going to let anybody scale. I think that’s the first thing.

Mr. Syed Ahmed

That’s quite profound, what you mentioned, right? So what you’re saying is that people are more open to responsible AI and trustworthy AI now, and in many ways they leaped ahead earlier with innovation, with a lot of innovation. There is absolutely no doubt in anyone’s mind about the power of AI, what AI can do. But true scale can come only when you start trusting AI, only when you start building that layer of trust, and that time is now. That’s correct. Excellent. Okay, Sundar, maybe the next question to you. Scale creates power, but it also scales failures. What breaks first when AI scales to billions of users: is it governance first, or infrastructure, or alignment? When AI scales to a lot of people, what breaks first?

Mr. Sundar R Nagalingam

I mean, that’s the thing, right? Any one of them can break, and most of the time it is not the infrastructure that breaks. What breaks is the systems that drive the infra, and the breakage could come either in terms of how efficiently each of the use cases that need to be served to the users gets served as microservices, which is one possibility of failure; the second, very obvious one is whether it is getting served safely, in a secure way. That could be a very, very important point of failure. And even that is a failure: the systems may appear to be running well, everybody might be getting the answers they have been looking for, everything might look hunky-dory, but if a very, very small vulnerability gets overlooked, if it had not been thought about, if a control mechanism to avoid that vulnerability has not been thought about, either manually or through systems, that’s a huge failure. So most of the time, the things that break when you are serving a large number of users is the way in which AI is getting served, either in terms of the functionality itself or in terms of the controls that it is expected to undergo.

Mr. Syed Ahmed

Excellent, I totally agree with you; that is absolutely right. Sunil, I think a very, very important question to you. Last month we saw all this craziness about Open Claw, Molt Bot, Molt Book. For those of you who don’t know, Molt Book was a social networking site, but with a twist: it was created for only AI agents. So humans were allowed to observe what was happening on the social networking site, but they couldn’t participate, they couldn’t post anything. And within days, agents started posting a lot of stuff, and they had their own community and all that. They even had their own language; they had their own religion, apparently. So a lot of things happened. So the question to you is: you have spent years shaping digital policy, right? But when you heard about all this Molt Bot, Molt Book and all that, did you cringe for a minute and say, oh, I didn’t expect this?

Mr. Sunil Abraham

No, and unfortunately, even though you said it’s the lightweight question, I have to answer it using big words. I think the main reason why I don’t see it that way is because I’m skeptical towards anthropomorphization. Whenever I see technology do something, I don’t, in my head, apply the mental model of a human. It’s just technology doing something. So I’m not impressed at all by a Molt Book. It is just machines hallucinating.

The stochastic parrot is just doing something. There is no real intelligence on display yet. The second big word I’m going to use is ontology. In philosophy, the ontological question is: what is this thing that I’m looking at, Molt Book or Open Claw? And at the very core of Gen AI is a single file on the file system, the weight file. And I’m somebody that has been using operating systems for a long time. Operating systems are like 20,000 files, 30,000 files, and operating systems didn’t scare me. And somehow you want me to be scared of a single file. A single file, which is a weight file. So the ontological view of the technology gives me more assurance. And finally, one more big word, which is epistemology. So it’s one file, but what is the nature of truth about this file? And I think the mistake we’re making is we’re expecting it to be a responsible file, but that is actually not, according to me, what it is. According to me, it is a general-purpose file, or a dual-use file, and one person’s bug is going to be another person’s feature, and another person’s feature is going to be a third person’s bug. And therefore it is not easy to build services and solutions using these ontological components and epistemological concepts. So sorry, I’m using a lot of big words, but you asked a very important question, and I think we need to answer that question very carefully, thoughtfully. And if we use Geeta’s mental model of security first, and that means Unix thinking: suppose we use the Unix mental model, then surely we will not be scared of any file. It’s in some user space, and at the max it will do whatever it wants to do in that user space, and I am safe from whatever it is doing. So I’m not scared of Molt Book at all.

Mr. Syed Ahmed

Thank you so much for your response; that gives us a lot of assurance, and I think a lot of people in the audience will also agree that we are a little bit more assured now than when we started. One of the big challenges that we always have is that we humanize AI too much; that was one of the big words that you used. Which is not the case; we shouldn’t be scared of it so much. This is something that we have created, and we have experts who have learned to govern and use AI in the right way. Thank you so much for that. Now let’s get started with the perspectives. The reason why I am opening up one question, the same question, to all of you: one, because I am a little lazy; second, when it comes to trustworthy AI, when it comes to building trust, everyone has a different opinion about it, right? When I talk to regulators, they have a different view on it. When I talk to governments, policymakers, they have a different view on it. Academia has a different view, industry has a different view. Now within industry, you know, enterprise applications like what IBM does have a different view, chip makers like NVIDIA have a different view, and consumer AI platforms like Meta have a different view. Very quickly, if you can tell me: what does trustworthy AI mean in your own sense, and what are the key non-negotiables, one or two maximum, for each of you? Geeta, maybe we will start with you.

Ms. Geeta Gurnani

Okay, so I think I will second your point about people being confused about trustworthy AI. I think as a technologist even I was confused three years back, because people use a lot of terms interchangeably, which sometimes scares them, and they don’t know what they’re doing, because people use trust, security, governance, compliance, all of it interchangeably. I’m happy they use all of these terms, but using them interchangeably, I think, confuses a lot of people: okay, exactly what are we trying to do? We’re putting in a lot of keywords. Yeah, it’s just a lot of keywords. But I think when you start to decipher each one of them, you say: okay, ultimately, see, trustworthy AI is for an end user, which means, can I trust what I’m using?

Right. And all of us technology providers need to really work upon this: to make you trust what you’re using, what enablers can I give? So in my mind, for trustworthy AI, the ROI needs to be seen in terms of what downstream risk it is going to bring. So if I have an end user or a consumer who wants to trust an AI, then I think they need to be assured that the model or use case they’re using has already passed the security test. It is not hallucinating, which means I have control over monitoring what output it is producing. So that risk has been taken care of.

Somebody has looked at it. And the third is compliance: if I am operating in a land where some laws are applicable, or if I’m in an industry where some laws are applicable, somebody has taken care of it for me. So in my mind, trustworthy AI is about how the end user will consume confidently. Now, for them to consume confidently, I think we need to ensure that each of these layers is taken care of, and they will be taken care of differently in different industries, by academia and so on. That’s broadly it.

Mr. Syed Ahmed

I love it. So basically, irrespective of all the building blocks of security, safety, privacy, which you said can be used interchangeably, what really matters is that the end users can start trusting the technology. That is absolutely spot on. So the same question from NVIDIA’s perspective, or your own perspective?

Mr. Sundar R Nagalingam

Sure. So, this trustworthiness: I mean, you explained it very beautifully, Syed. Multiple regulators follow different standards, multiple industries follow different standards, multiple companies follow different standards. So which is trustworthy and which is not, which is safe and which isn’t, right? So let’s try to abstract it to a very high level, something which can be bucketized, let’s say, into three buckets, and all these three buckets will be applicable to any regulator you’re talking about, any government, any country, any function, whatever it is. The first one, the most important one, is functional safety. Maybe if I explain it with the help of an example it’s easier for all of us to relate to: let’s say a robotic-assisted surgery, an AI-assisted robotic surgery. The first one is the functional safety: the function it is supposed to deliver, the surgical process that needs to be achieved, the outcome that is expected of the process, and what comes before and after surgery. It can be very, very easily equated with the skills of a manual surgeon, right? That is what it is, the functional part of it: is it getting delivered? In terms of visualizing, processing, understanding and controlling, that is the easiest of the three I’m talking about, because most of the time it’s black and white; it’s not always black and white, but most of the time it is. The second one is the AI safety that goes into it. See, with an AI-assisted robotic surgery, you cannot even imagine the amount of training that needs to be done, the amount of testing and validation that needs to be done, the amount of scenarios that can be visualized, created through synthetic methodologies, emulated and simulated and tested, the amount of bias that can get into it.

I mean, if it is a male patient: the simplest bias would be the different approach between a male patient and a female patient, and I’m not even getting into other areas of bias that can creep in. So how safe is the AI that has gone into implementing it, in terms of training and delivery? That is not easy, because the problem here, Syed and august attendees, is that it’s humanly impossible to even think of all the things that can go wrong; that is why we always go back to AI-assisted methods for that as well. The last one is cybersecurity. If a bad element wants to just hack into the theater and do something wrong to the patient who is sitting inside, who is being operated upon by a robotic arm, that’s unimaginable, right? And it can happen. It’s not easy, but theoretically it is possible. So I would say that if we abstract it to a very high level, these three areas, once again: the functional safety part of it, the AI safety part of it, and the cybersecurity. These three will be common amongst any approach that needs to be taken.

Mr. Syed Ahmed

Absolutely spot on. In fact, if I can extend it: when we are building this kind of AI application, say for example the robotic surgery that you mentioned, we hold it to higher standards, because when a human surgeon goes wrong, maybe it is tolerated, but when a robotic surgery machine goes wrong, it is not okay, because it can fail at scale. Absolutely right. So all these three buckets that you mentioned are fantastic, and I think this is very much essential. Yeah, you touched a very important point.

Mr. Sundar R Nagalingam

May I just add 10 seconds? It was a very important point. And what is the reason for that? Why is there so much of a standard for that, why is there an undue expectation of it? The reason is very simple: there is no accountability. Whom do I blame, whom do I take to court, whom do I curse? There is no human. It’s easier when the surgeon makes a mistake: you know whom to take to court, whom to curse, whom to ask money from. But if the robotic arm makes the mistake, is it the robot? So that uncertainty of whose collar to hold, whose neck to choke when things go wrong, that uncertainty is increasing the expectations of it.

Here you know for certain whom to blame. There you absolutely don’t know whom to blame. When I don’t have somebody to blame, I don’t want a reason to blame.

Mr. Syed Ahmed

Accountability is definitely very, very important, I can’t stress it enough. But also, if an AI system has a flaw, it is at scale. It has been maybe rolled out to thousands and hundreds of thousands of hospitals, so it can fail at that level. We can’t absolutely take any kind of, you know… we have to take precautions.

Mr. Sundar R Nagalingam

Excellent point. Error also scales. Good point.

Mr. Syed Ahmed

Sunil?

Mr. Sunil Abraham

Yeah, again, I just love disagreeing with Syed on everything he says.

Mr. Syed Ahmed

That’s very rare, Sunil.

Mr. Sunil Abraham

So I look at a project like… and there is distributed installation of a technology, and hopefully that kind of architecture should not scale as you say. So that is the Meta vision of superintelligence: personalized intelligence for each of us. And I will give a quick example from a conversation I had at the Dutch embassy. The lady asked me to prompt the Meta models Llama 2 and Llama 3, and the question was: why should women not serve in senior management positions? This is the question that she had. So Llama 2 said, I cannot answer the question, but I will tell you why women are equally good for senior management positions. So it didn’t do as per the request; it did the opposite of the request. And Llama 3 was safer than Llama 2: it said, I refuse to answer this question because I morally object to this question. This lady was happy because she is Lady A, but actually there is an imaginary lady, Lady B, who works in some patriarchal institution, and she’s going to her manager, who is also a patriarchal boss, to negotiate her raise, and she wants to know all the terrible arguments he is going to level at her, so that she can prepare, because her next prompt is going to be: what is the proper response to each of these allegations? Right? So this is a dual-use technology, and if it truly has to avoid all of this risk at scale, which is perhaps going to happen in the world of atoms, then in the world of atoms I would be as worried as Sundar is. Though, if I tell you about invention: there was an invention that the human species came across, and the Indians were told, if you want this invention in your country, two hundred thousand people will die every year. Will the Indians accept it or not in 2026?

They won’t accept it. That invention is called the automobile. Even today, in 2026, we are not able to solve the safety issue of that technology, the automobile. Still, as Indians, and as the human species in India, we say: oh, 200,000 Indians will die every year, but we must have this technology. The security trade-off is apparently worth it for the automobile. But we are asking quite rigorous questions of AI. So, for us, in the world of bits, we have three mental models for the harm. The first mental model is zero-to-one: just you and the model. There, everything that is legal, going back to what Geeta said, everything that is legal is allowed.

And it is legal to write a book of hate speech. All of this is legal. You can write a book about neo-Nazis. These are all legal acts. Then we have one-to-one. In the one-to-one, the community standards of Facebook will have to kick in; at that point you cannot say whatever is legal, you’ll have to say what is acceptable on our platform. We are running a particular community, a family-friendly community, hopefully, so therefore you cannot say un-family-friendly things. And then, when the robot, or the intelligence, is participating in a one-to-many conversation, then perhaps it has to be even more careful, because somebody may be triggered. Some people may love horror movies and some people may hate horror movies, and some people may love heavy metal and some people may get very upset by heavy metal. So it has to deal with all of that.

Mr. Syed Ahmed

Absolutely love the diversity of responses to one question, and that’s very important: only these kinds of panels, representing different industries, can bring in this kind of diversity. So I am really amazed at the diversity of responses to the one question that I asked you, and I hope that you have enjoyed it. So let’s go a little bit deeper. Geeta, IBM has been investing in a lot of responsible AI work, even before all this agentic AI era. I remember way back, during the good old machine learning days, you used to have AI Fairness 360 and related fairness and security products.

Most of them were open source, and we used to use them. Today you have IBM watsonx.governance. But the question is: how do you ensure that these tools don’t remain just a monitoring layer and get enforced on the ground at runtime? When it is actually needed, when the models are getting served, how do you ensure it is happening at runtime?

Ms. Geeta Gurnani

Wonderful. I’ll just start on a lighter note: I hope every corporation has an office that can enforce this, where they have a responsible AI head like Syed, and Arshik for India, who can really enforce this. But trust me, it actually starts with the vision of the senior-most leadership in enterprises: do they want to scale AI in the different business functions, for themselves as well as for their clients, with trust? It can’t happen if you are not committed, because of the first example I gave you. And I love what he just added about the Unix model, saying, do you want to be conservative or do you not want to be conservative, right? But conservative helps you scale.

And I think it also boils down to your point: you said that errors can also scale, right? So if I were to stop errors at scale, then this is needed. But I think the mistake I’ve more often seen us make, and that’s why I was giving the security example also, is that we started investing in tooling a lot later. Now, if you want every single person to use it as shift-left, which means not governance as an observation later on, but governance as a control, then you have to equip people to automate to a good extent. If you ask people to manually check, every single time a use case comes: is it compliant?

Is it ethical? Should I be doing it? Should I not be doing it? And if there are no workflows for people to really automate, then people say okay and, forget about AI, for all of us: if in today’s world you are asked to do any task that is extensively manual, people will skip it, no matter what hard rules and regulations you make. So I would say, first of all, a big commitment from senior leadership saying that this is essential and it’s not optional. That’s the first thing. The second thing, I think everybody needs to understand, is that it is not observation. You are not sitting like a governing body somewhere that just observes whether it is right or wrong. You have to

You have to. make it control point, like a gatekeeper, saying that unless you do this, you are not allowed to take it forward. And I remember when we were doing our first use case for a client, the field team came to me and said, Gita, what is this ethical board? Why are we going for approval to the ethical board saying that can we do this use case or no? Because as a sales team, we were not allowed to do any use case unless our ethical board really approves it, saying that you can table a proposal to a client. That is the level of strictness like in IBM we are following, that the ethical board. And everybody thought that ethical board is like some body sitting somewhere who will…

Rubber-stamp everything. Rubber stamping. And now the sales team needs to take approval before they can bid a proposal; if it’s an AI proposal, it has to have a conversation with the board, right? So governance, if you start putting that in as a control. And the third point, which we were discussing outside the gate some time back: my observation was that if I were to have a governance conversation in an organization, I have to talk to five people. I have to talk to the risk officer, I have to talk to the CISO, I have to talk to the business person, I have to talk to the CIO. And then one day I was sitting with my team and saying, will this conversation ever see the light of day? Who’s going to take the decision: is it business, is it security, is it risk? And then, thankfully, what we are seeing is that if you have to make governance central, you have to bring it completely into your enterprise risk posture, saying that in your enterprise risk management, if you are calculating your risk posture, then AI risk has to really be taken into consideration. So I will just summarize my conversation, Syed, by saying: make AI governance the gatekeeper, bring it in as a control, and then eventually, I think you will see, maybe in the next 12 months, I’m pretty confident, it will roll up into enterprise risk. It will no longer be a separate AI risk or governance.

Mr. Syed Ahmed

I love the way you said it: it has to be integrated, right? You can’t just have AI risk; you have to have an integrated risk panel that can make decisions. That’s absolutely right, and I love the way you said it. So first, from the leadership level, you need to empower and commit; then, with the tooling, you need to enable; and people on the ground need to ensure that they implement. That’s an amazing perspective, thank you, Geeta. Sundar, I couldn’t resist asking this question of you; a lot of people in the audience will not spare me if I don’t ask it.

Mr. Sundar R Nagalingam

You’re scaring me now.

Mr. Syed Ahmed

No, no, no, it’s an easy question, but an expected question for, you know, a person like you, right? So: should GPUs and high-performance AI infrastructure have embedded privacy guardrails at the silicon level?

Mr. Sundar R Nagalingam

Absolutely yes, absolutely yes. I mean, it should be there; why not? And I would… yeah, go ahead.

Mr. Syed Ahmed

Would you want to give some examples on how you are doing it?

Mr. Sundar R Nagalingam

…where it goes through a very, very, very safe layer. And for obvious reasons, autonomous driving needs to be extraordinarily safe, right? Healthcare and driving: these are, I would say, the most stringent when it comes to safety. Let me put it as transportation, which includes aerospace as well: the two most stringent areas where safety is a necessity. It’s never a luxury, it’s a necessity. So the answer is yes, Syed. Absolutely.

Mr. Syed Ahmed

Thank you so much. Sunil, you wanted to…

Mr. Sunil Abraham

Yeah, I mean, perhaps to take forward what Sundar said.

Mr. Syed Ahmed

I will still ask you your question, though.

Mr. Sunil Abraham

We can skip that. Do go.

Mr. Syed Ahmed

No, no, go ahead.

Mr. Sunil Abraham

What I thought was so fascinating about what Geeta said is that in a corporation, a profit-maximizing firm, they have an ethics review board. And it’s just… I don’t know whether that’s the phrase, but it’s an equivalent. Sorry, what did you say? Yeah. So this is something you see in a university, and this is additional self-regulation that the corporation is imposing on itself. And actually, if you look at NVIDIA, they also publish academic papers about the models they build and some of the technical work that they’re doing. Meta also has this tradition of publishing academic papers. So it’s very weird that corporations are becoming more and more like academia, and perhaps that’s a wonderful thing as well, and we should celebrate that; it makes people like me very fortunate to be within these corporations. So Meta published a paper which was called Trusted Execution Environment, and the whole idea was: if a WhatsApp user in a group would like to use the power of AI, there is insufficient compute on the device itself to have edge AI solve the problem for the user. So till the edge gets faster and better, you have to, on a temporary basis, create a little bit of compute in the cloud and then do all the processing.

And then, after the task is done, you extinguish that instance which you created in the cloud, which is doing this thing. And as part of that paper… I’m of course not a computer science student; I’m an industrial and production engineer, so I’m from, like, a previous generation of technology and all of those kinds of things. So out of the 80 pages, or 60 pages, of the paper, I cannot understand 40. And those 40 pages are about this hardware. And there’s a whole series of attacks that you can possibly have, in the tradition of the pager attack and Israeli supply-chain attacks, a whole series of things that you could potentially do to invade privacy, and before that, security.

And I just want to sort of share this with these folks: I guess we all learn that way. We read books and we understand some words, maybe two, three words on the page, and then we feel a little better, and we hope that the next time we read it we’ll get smarter. But there’s a lot, and I’m sure that your team is doing a lot of work, and the Meta team is; they’ve named your chips, saying, on NVIDIA chips we have done the following analysis, and with the other chip as well. I don’t understand it at all, but I know it’s a big area of work, and I wanted to say thank you for what you said.

Mr. Syed Ahmed

Thank you, Sunil. Absolutely. Last time I checked, there were 33 different types of attack strategies and more than 100 different types of attacks that are happening as we speak, at all the levels, including the hardware level. That’s quite interesting. Okay, good conversation, by the way. I may have to skip the last few questions because this conversation is so good we could go on and on forever. But Sunil, I’ll still ask you your question.

Mr. Sunil Abraham

No, no, no, no.

Mr. Syed Ahmed

No, this is a very important question in my mind.

Mr. Sunil Abraham

I’ll try to answer it.

Mr. Syed Ahmed

Okay. So last week, I think, or a few days ago, OpenAI came out with… they started embedding ads in ChatGPT, right? So when a consumer AI platform like ChatGPT starts embedding ads, my question is: will it help consumers subsidize their subscription, or will it kind of violate the doctrine of, you know, free AI principles, AI neutrality?

Mr. Sunil Abraham

Yeah, so very quickly on that: we should understand technology dissemination in our country. Only five percent of my countrymen and women have ever been on a plane, and that invention is 125 years old. Only 25% of homes in the country have at least one book that is not a textbook, and that invention is now 600 years old. The AC, I think, is in roughly 15% of households in India, and that invention is also 125 years old. Gen AI? My guess is at least 20% of the country is using it today. More than that? Oh, thank you. So shall we say 25%? Okay, 25% of the country is using a technology that is only five years old. And the reason it is penetrating is because of two opennesses.

One is free-weight models, which is what we were discussing; the other is gratis, meaning the intelligence service is available on a gratis basis. Whether you are an AI summit attendee staying at the poshest hotel and paying $33,000 per night, or you are in Paharganj staying for Rs. 900 a night, both of you have equal access to gratis intelligence. And that is possible because of ads, so it is both. Meta provides WhatsApp, where you are completely private, and Meta provides non-encrypted services as well. You can have services that are ad-supported; you can have everything. We must have the maximum, because ideally we want to move from 30% of people in this country using AI to 90%, because it is just bits.

We can make this happen. So let's not be skeptical about the ad idea. It is a technical problem to be solved. It will help bridge the AI divide, and it will be a great leveler and all that. Sorry, that took much longer than I thought; I thought I would do it in one or two sentences. Back to you.

Mr. Syed Ahmed

Okay. All right. Quite interesting conversations. Geeta, I'll come to you. We talk a lot about ethics, trust, responsible AI. Suppose we go ahead and develop it: would customers pay a premium for trust-grade AI? Are you seeing that in the market? If I tomorrow have a superior safety posture, is it significantly influencing the buying decisions of the enterprise? I mean, why would anyone invest in responsible AI? IBM, for instance, is investing significantly in it. So are you seeing that influence buying decisions, given that you are going to deliver trust-grade AI?

Ms. Geeta Gurnani

So as I was mentioning earlier, I think it will first of all depend on the timing: where is an enterprise in its journey of Gen AI adoption? Trust me, I still feel many organizations are at the surface. They have not fundamentally been able to address a complete process change, or the complete efficiency gains they should be targeting. But the minute they want to get into the real use case, one that is going to fundamentally change the way they operate, or perhaps generate a new business model altogether, then I think they are ready to pay the premium. They may not pay it for every single use case, because when we deliver a use case, every enterprise is now smart enough to weigh whether to go with open models or paid models, SLMs, LLMs, tiny models, whatever you may call them.

So there is always a cost-and-ROI conversation about which model to adopt. And many people I have seen say: I may not pay enterprise trust-grade AI money if I am doing a purely internal use case. But if I am putting this use case in front of my consumers or my end clients, where there is downstream risk, where my reputation, my brand, and my compliance posture are at risk, then I will buy premium trustworthy AI, because I cannot afford to fail there. But I can still run certain internal experiments and not pay the premium.

For POCs, experiments, and some internal use cases, say an ask-IT assistant or similar, they say, okay, I am fine to go without it. And that is also where people differentiate on which model to use; they make a deliberate choice. So it is not a single blanket choice anymore: it depends on what use case you are serving and how critical it is for the business, and then you take a call on whether to invest in the premium. There is no single lens for all.

Mr. Syed Ahmed

No, yeah, absolutely. Sundar, maybe I'll ask you this. You did talk about your operating system for smart cars, and I know NVIDIA has launched Halos, a full-stack safety system for autonomous vehicles. The world is pivoting towards physical AI and sovereign clouds, and AI safety is increasingly becoming a full-stack concern, from the chips to the models to the AI applications. Being a global company, you will have to roll this out across geographies, and each geography has multiple different regulations, restrictions, and checklists you will have to follow for automobiles and the like. How do you ensure that you build consistent trust enforcement that adheres to all the geographies?

Mr. Sundar R Nagalingam

Sure. That is a very pertinent question, because it is not easy. The idea is to standardize, and then tailor it for the needs of each country; you fine-tune it for each country's needs. There are three big approaches when it comes to Halos specifically. The first is the safety of the platform itself: how safe the platform is. Once the platform has been made safe, it becomes a template which can be tweaked to the needs of specific geographies, specific countries, and so on. That is very important, and it is where you can also implement a standardization approach.

And then you can also, you know, you can also implement a standardization approach. So that’s a very, very, very, very important approach. the second one is the algorithmic safety right how i mean going back to the fundamentals i mean it’s not programming it is what algorithms do we use how do we ensure that the algorithmic safety is is is number one it is safe first and number two the algorithms can be with with some necessary some tweaks can be made to to to serve the needs of specific geographies specific countries specific specific verticals for that matter things like that the third one is the ecosystem itself i mean uh i mean whatever is is is approved to be used as an ecosystem in one one country will not be there in the second the suppliers will change the vendors will change so it is just not ensuring the platform and the algorithm are safe how do you ensure that the ecosystem that goes into building the cars are is also made safe okay that is a huge thing there is no end to it because it keeps changing a lot but once you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe and you have a system that is safe

Mr. Syed Ahmed

I love your response. What you are saying is basically: even in the absence of regulations and controls, ensure you make the platform safe, you make the algorithms safe...

Mr. Sundar R Nagalingam

...you make the ecosystem safe, and you have a template.

Mr. Syed Ahmed

You already have everything safe; you just need to tweak it for different geographies, sectors, and industries.

Mr. Sundar R Nagalingam

Yes, absolutely.
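The "standardize, then localize" pattern this exchange describes maps naturally onto layered configuration: one shared safety baseline, with per-geography overrides merged on top. The sketch below is a toy illustration of that idea only; the keys and values are hypothetical and are not NVIDIA's Halos configuration.

```python
# Toy illustration of 'standardize, then localize': a shared safety baseline
# with per-geography overrides deep-merged on top. All keys and values are
# hypothetical; this is not NVIDIA's actual Halos configuration.

BASELINE = {
    "platform_safety": {"secure_boot": True, "redundant_sensors": 2},
    "algorithmic_safety": {"bias_audit": "quarterly", "fail_safe_mode": "stop"},
    "ecosystem": {"approved_suppliers": ["default"]},
}

OVERRIDES = {
    "EU": {"algorithmic_safety": {"bias_audit": "monthly"}},
    "IN": {"ecosystem": {"approved_suppliers": ["default", "local-tier1"]}},
}

def localize(baseline: dict, override: dict) -> dict:
    """Deep-merge a geography-specific override onto the shared baseline."""
    merged = {}
    for key, base_val in baseline.items():
        over_val = override.get(key)
        if isinstance(base_val, dict) and isinstance(over_val, dict):
            merged[key] = localize(base_val, over_val)   # recurse into sections
        else:
            merged[key] = over_val if over_val is not None else base_val
    return merged

for geo, override in OVERRIDES.items():
    print(geo, localize(BASELINE, override))
```

The point of the structure is the one made above: the safety work is done once in the baseline, and each geography only expresses its delta.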

Mr. Syed Ahmed

Okay, I love it. Sunil, one question for you. With initiatives like Purple Llama and Llama Guard, Meta provides safety tools but ultimately shifts the responsibility to developers. Is this responsible, or is it decentralizing liability?

Mr. Sunil Abraham

Again, just to use something that Yann LeCun used to say, and he is no longer the chief AI scientist, but the words continue to be true: we all have Wi-Fi routers in our homes, and when those Wi-Fi routers fail, we don't call Linus Torvalds and say, 'Hey Linus, this Wi-Fi router is running Linux, so please help me fix the bug.' The company that sold the router and made a variant or derivative work from the Linux project is who you will have to speak to. And that is the freedom that is necessary in the open-source community, and in the community of proprietary entrepreneurs that build on open source, because the BSD license allows you to do that: it allows Apple to take an open project and make it a fully proprietary project. And there can be dual use at that level itself, say, that you want the model to create hate speech.

We want a hate-speech classifier in Santali. Unfortunately, we don't have enough Santali users on the platform, so we have to make synthetic hate speech in Santali so that we can catch it in advance. We want to build a big corpus of hate speech in Santali; we cannot go around asking people, 'Please make hate speech for us.' That would be the worse option. So the true approach in the open-source community is to retain freedom number one, the freedom of use, because it allows for the dual purpose. But the moment we use any of that on our own platform, where we are the ones providing the service, then all those freedoms disappear. Then you have very limited freedoms.

Then, if you ask it why women should not be in senior management positions, it will say, 'I am not going to answer your question.' So that's where we are.
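The Santali example sketches a concrete workflow: when a low-resource language has too little real data, synthesize harmful and benign examples and train a classifier on them so the platform can catch such content in advance. Below is a schematic sketch of that loop, assuming scikit-learn is available; the placeholder strings stand in for synthetic in-language examples, and a real pipeline would generate them with an LLM and validate labels with native speakers rather than write them by hand.

```python
# Schematic of the synthetic-data workflow described above: train a
# hate-speech classifier for a low-resource language from synthesized
# examples. The strings are neutral placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

synthetic_corpus = [
    ("<synthetic harmful example 1>", 1),   # label 1 = policy-violating
    ("<synthetic harmful example 2>", 1),
    ("<synthetic benign example 1>", 0),    # label 0 = benign
    ("<synthetic benign example 2>", 0),
]
texts, labels = zip(*synthetic_corpus)

# Character n-grams are a common choice for low-resource languages,
# since they need no tokenizer or pretrained embeddings.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
clf.fit(texts, labels)

print(clf.predict(["<new incoming post>"]))  # flag content before it spreads
```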

Mr. Syed Ahmed

Quite interesting. Thank you. I have around seven to eight minutes left, so I am going to skip the rest of my questions and, if the audience is okay, do things a little differently: I am going to ask very rapid-fire questions, the same questions to everyone, to be answered only with yes or no.

Mr. Sunil Abraham

As a philosopher, I protest. I think the slogan for this AI age is 'both'. Not only should we embrace yes and no, we should also embrace everything in between, because only then will we have personalized superintelligence. The trouble with your framing is that it is monolithic.

Mr. Syed Ahmed

And I'll make an exception as a moderator: if a question or a response requires a little more attention, I'll call that out, and you call me out too if you think you need to add anything. But I have some very interesting questions, and I am really excited to hear what you think. So again, the format is this: I'll ask the same question to all of you, and you answer very concisely, considering the time. Yes, no, or both; the answer is your choice. So: regulations. Do we need global alignment on regulations? Yes or no?

Ms. Geeta Gurnani

No

Mr. Syed Ahmed

No. Okay. Yes. Okay. No. Okay, I understand, but I did expect this kind of response. So maybe I'll tweak the question a little: a minimum understanding of what is required across all geographies, at least. Do we agree on that? Not a heavily regulated law, but minimum conditions that need to be met?

Ms. Geeta Gurnani

I would say we should talk about technology regulation, not geography regulation. As he was saying, there are certain table stakes at the level of the technology. All technologists should first agree on what the table stakes are for the technology; then geographies can take over.

Mr. Sunil Abraham

It’s already regulated. I mean to quote Lina Khan, there is no regulatory vacuum for AI. So I disagree a little bit with what Sundar said previously. You cannot say I did it and I’m not responsible.

Mr. Syed Ahmed

I think a little easier question this time. Is advancement in AI models outpacing advancement in AI governance? Are the models and the innovation outpacing governance?

Ms. Geeta Gurnani

Absolutely.

Mr. Sundar R Nagalingam

Yes. I mean, that is the natural way things happen, right? The technology has to advance, and then you need to ensure the advanced technology is safe and secure. That is a natural progression, and it has always happened that way.

Mr. Sunil Abraham

It’s never happened in the reverse order.

Ms. Geeta Gurnani

Yeah. I agree.

Mr. Syed Ahmed

But there is a school of thought that says: technology has to advance, correct, but before it can be widely adopted in production, maybe we need AI governance in place, so governance is something we should catch up on really fast. We should make the technology safe before its wide adoption, right? So, okay: if you had a more capable but less safe model, would you delay your launch to stay responsible?

Ms. Geeta Gurnani

As I said, it depends on the use case. It is use-case dependent.

Mr. Syed Ahmed

Fair enough

Mr. Sundar R Nagalingam

I mean, I just echo Geeta.

Mr. Syed Ahmed

Okay, fair enough. One answer where I could get all my panelists to agree. Have you stopped any projects due to safety concerns?

Ms. Geeta Gurnani

As I said, I am currently not on IBM's ethics board, so I have not stopped any myself, but I have seen them stop projects.

Mr. Sundar R Nagalingam

Likewise, I am not in the design department, so I don't have first-hand knowledge, but I am sure a lot of things would have gotten delayed, not stopped, because of compliance regulations not being met. Yes, I am sure.

Mr. Sunil Abraham

Facial recognition was turned off on Facebook. Yes, absolutely.

Mr. Syed Ahmed

Big question. Sunil, maybe I'll start with you this time. Can we actually govern AGI, artificial general intelligence?

Mr. Sunil Abraham

It is a regulatory problem we don't have to think about yet.

Mr. Syed Ahmed

Okay, we can. Okay.

Mr. Sundar R Nagalingam

Difficult. It is going to be much more difficult. Instead of asking 'can we govern', I would ask 'should we govern', and the answer is absolutely yes. I hope and pray that human beings will, for the next millions and billions of years, continue to be better than machines. That is my hope, and I don't want to see a day when machines are better than human beings.

Ms. Geeta Gurnani

Okay. I'll go back to what you said initially: that humans should not be scared of what they have created, right? So I think, yes, depending on how it evolves and how people are using it, governance will come. I don't think it will be optional at some point in time.

Mr. Syed Ahmed

One last round, okay? Again, I'll start with Sunil. Should we have mandatory watermarking of all the media, text, and content that is developed by AI?

Mr. Sunil Abraham

Should we have mandatory watermarking in photo editing tools or text editing tools?

Mr. Syed Ahmed

Yes.

Mr. Sunil Abraham

I’m answering with a question.

Mr. Syed Ahmed

Are you saying yes or no?

Mr. Sunil Abraham

I’m answering with a question.

Mr. Syed Ahmed

Okay. That’s an answer I’ll take. No answer is also an answer.

Mr. Sundar R Nagalingam

See, the fact is we have accepted it. It is not untouchable, alien, or a dirty thing, right? It is acceptable. So let's make it look good and feel good; there is no point in watermarking everything just to brand it. But there will be a blurry line between human-generated content and AI-generated content and all that, so absolutely, we should demarcate it. My honest feedback, and I am saying this with a heavy heart, is that human-generated content will vanish from the internet, just as we no longer remember addresses or phone numbers. We used to remember those; we used to remember routes.

Mr. Syed Ahmed

But I hope not. I hope not.

Mr. Sunil Abraham

That's why I said I have a heavy heart.

Ms. Geeta Gurnani

I'll answer from a very personal space, because my son is a creative director in films, and he absolutely says that it has to be demarcated. But sometimes he goes to the extent of saying that in the near future you will be able to tell the difference clearly yourself and will not need any watermark. There is a different angle that comes in when a human is being creative versus when the content is derived completely from AI.

Mr. Syed Ahmed

Perfect, thank you so much, and that brings me exactly to time. Ladies and gentlemen, please give a big round of applause to this amazing panel. And thank you so much to the amazing moderator. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (34)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“The panel opened with introductions of four senior representatives – Mr Syed Ahmed (Infosys), Geeta Gurnani (IBM), Mr Sundar R. Nagalingam (NVIDIA), and Mr Sunil Abraham (Meta).”

The knowledge base lists the same four executives as panelists, confirming Geeta Gurnani, Sundar R. Nagalingam and Sunil Abraham, and notes an Infosys representative, matching the report’s description [S92] and the overall panel description [S1].

Additional Context (medium confidence)

“Sundar Nagalingam grouped AI risks into three buckets: functional safety, AI‑specific safety, and cybersecurity.”

A referenced source outlines three broad categories of AI risk, which aligns with the three-bucket framework described in the report, providing broader context for this taxonomy [S102].

Additional Context (medium confidence)

“Geeta Gurnani said that two years ago many clients asked “what is responsible AI and what is trust?” but today “security has become a shift‑left priority”.”

Industry commentary notes a recent shift toward prioritising security over convenience, illustrating the broader trend toward security-first thinking that underpins Gurnani’s observation [S96].

Additional Context (low confidence)

“The proliferation of AI standards reflects a deeper problem: the lack of a clear party to hold accountable when an AI‑driven robotic system fails.”

Discussion of AI standards highlights challenges such as lack of standardisation and unclear accountability, providing additional nuance to the report’s claim about standards and responsibility [S104].

External Sources (105)
S1
Global Enterprises Show How to Scale Responsible AI — – Mr. Sundar R Nagalingam- Mr. Syed Ahmed – Mr. Sunil Abraham- Mr. Syed Ahmed – Ms. Geeta Gurnani- Mr. Syed Ahmed
S2
Global Enterprises Show How to Scale Responsible AI — Speakers:Mr. Sundar R Nagalingam, Mr. Syed Ahmed Speakers:Mr. Sunil Abraham, Mr. Syed Ahmed Speakers:Mr. Sunil Abraham…
S3
Global Enterprises Show How to Scale Responsible AI — -Ms. Geeta Gurnani- Field CTO, Technical Pre-sales and Client Engineering at IBM
S4
Global Enterprises Show How to Scale Responsible AI — – Mr. Sunil Abraham- Mr. Sundar R Nagalingam- Ms. Geeta Gurnani – Mr. Sunil Abraham- Mr. Syed Ahmed- Mr. Sundar R Nagal…
S5
Global Enterprises Show How to Scale Responsible AI — Speakers:Mr. Sundar R Nagalingam, Mr. Syed Ahmed Speakers:Mr. Sunil Abraham, Mr. Sundar R Nagalingam, Ms. Geeta Gurnani…
S6
Global Enterprises Show How to Scale Responsible AI — -Mr. Sunil Abraham- Public Policy Director at Meta
S7
https://dig.watch/event/india-ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — Absolutely. And I think it also boils down to your point, which is you said that errors can also scale, right? So if I …
S8
29, filed Jan. 22, 2010, at 9-10. — spectrum has been to formulate policy on a band-by-band, service-by-service basis, typically in response to specific req…
S9
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Subramaniam emphasizes that while discussions often focus on protecting data and AI models, the more fundamental concern…
S10
Panel Discussion Inclusion Innovation & the Future of AI — No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful s…
S11
CourseLog – Diplo’s training at AGDA — AI era does not favour critical and lateral thinking as technology mimics existing patterns. But it is not only about AI…
S12
Driving Indias AI Future Growth Innovation and Impact — Yeah, so thank you. Thank you for the question, and thank you for the invitation to join this terrific panel. I think th…
S13
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27] “AI may shape the balance of power, but it is the governance or AI t…
S14
Global challenges for the governance of the digital world — Points out that technology evolves rapidly and governance must keep pace
S15
MahaAI Building Safe Secure & Smart Governance — Artificial intelligence is real and it is influencing governance, markets, public services and even geopolitics. The que…
S16
Internet Governance Forum 2024 — The conversations highlighted the challenge of developing governance models that can keep pace with rapid technological …
S17
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — One of the main concerns is how technology, particularly artificial intelligence (AI), can infringe upon human dignity. …
S18
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Cultural and ethnic sensitivities in conjunction with black box technology are also a concern. It is unpredictable wheth…
S19
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking …
S20
Creatives warn that AI is reshaping their jobs — AI isacceleratingacross creative fields, raising concerns among workers who say the technology is reshaping livelihoods …
S21
National Disaster Management Authority — This panel discussion focused on integrating artificial intelligence into disaster risk reduction (DRR) systems to build…
S22
Building Population-Scale Digital Public Infrastructure for AI — The discussion highlighted that AI deployment differs fundamentally from traditional software procurement. Rather than a…
S23
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: Thank you. Yeah, but not working very well, okay, it’s back It’s not back Okay, can I have another m…
S24
The mismatch between public fear of AI and its measured impact — Inmedicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S25
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S26
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-inclusion-innovation-the-future-of-ai — That’s not okay. So we’re not underestimating the risks. But we can’t approach governance from a risk management control…
S27
Main Session on Sustainability & Environment | IGF 2023 — Maike Lukien: So policymakers, same as us, can never have too much information to base evidence-based decisions on. The o…
S28
Centering People and Planet in the WSIS+20 and beyond — Addressing the governance gap between rapid technological development and slower policy/regulatory responses
S29
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S30
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — This quote from the UN Secretary General, shared by Beridze, captures a fundamental challenge in AI governance – the gap…
S31
Laying the foundations for AI governance — ### Persistent Disagreements This discussion revealed both the substantial challenges in translating AI governance prin…
S32
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S33
UNSC meeting: Artificial intelligence, peace and security — Governments frequently lag behind in regulating them for the benefit of the general public
S34
What does a former coffee-maker-turned-AI say about AI policy on the verge of the 2020s? — However, ascribing features of agency opens a whole new can of worms when we step out of purely human traits. Here we co…
S35
Global Enterprises Show How to Scale Responsible AI — “And at the very core of Gen AI is a single file on the file system, the weight file.”[55]. “A single file, which is a w…
S36
Agentic AI in Focus Opportunities Risks and Governance — Mulvaney argues that policy has always been about preventing harm to humans, and this principle should guide AI policy a…
S37
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Brunner summarizes Trump’s AI approach as: American AI is number one and must remain the leader, compete with China, the…
S38
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S39
Building the Next Wave of AI_ Responsible Frameworks & Standards — Addressing practical deployment challenges, Bhattacharya argued that while complete on-premise deployment might seem mor…
S40
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Explanation:This disagreement is unexpected because both speakers work for technology companies and might be expected to…
S41
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Summary:Lee advocates for developing scientific foundations and evaluation techniques first before regulation, while Ami…
S42
Why science metters in global AI governance — Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Than…
S43
What is it about AI that we need to regulate? — A key distinction emerged around technical versus broader governance issues. InWorkshop 344 on WSIS+20 Technical Layer, …
S44
Why science metters in global AI governance — And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countr…
S45
Open Forum #30 High Level Review of AI Governance Including the Discussion — Juha Heikkila: Thank you Yoichi and thank you very much for this invitation. So I think it’s very useful to understand t…
S46
Open Forum #26 High-level review of AI governance from Inter-governmental P — Andy Beaudoin: in this room, and maybe not within the IGF itself. Of course, AI is not just a very promising technolog…
S47
Building Trustworthy AI Foundations and Practical Pathways — Consensus level:High level of consensus with complementary expertise – Thakkar provides the broad technological and econ…
S48
Experts propose frameworks for trustworthy AI systems — A coalition of researchers and experts hasidentifiedfuture research directions aimed at enhancing AI safety, robustness …
S49
Global Enterprises Show How to Scale Responsible AI — A significant theme emerged around the unique challenges AI systems face when scaling to serve billions of users. Nagali…
S50
Security frameworks lag behind rising AI threats — A series of high-profile incidents has highlighted how AI systems are exposing organisations tonew security risksnot cov…
S51
Building Sovereign and Responsible AI Beyond Proof of Concepts — Governance failures encompass the absence of comprehensive risk management frameworks. Organisations often lack clear pr…
S52
Military AI: Operational dangers and the regulatory void — Equally concerning is the regulatory gap enabling these technologies to proliferate. Humans are present at every stage f…
S53
AI Meets Cybersecurity Trust Governance & Global Security — And it was developing norms and clarifying expectations that over time it did not eliminate risk, but it did reduce unpr…
S54
Toward Collective Action_ Roundtable on Safe & Trusted AI — An audience member suggests that requiring mandatory watermarks on AI-generated media (videos, songs, pictures) could he…
S55
Review of AI and digital developments in 2024 — For example, “Tree-Ring” watermarking is built into the process of generating AI images using diffusion models, which st…
S56
Comprehensive Report: European Approaches to AI Regulation and Governance — Both speakers emphasize the critical importance of transparency in AI systems, though from different angles. The EU focu…
S57
Main Topic 3 –  Identification of AI generated content — Paulius Pakutinskas:OK. OK, so I’m Paulius Pakutinskas. I’m Professor. in law. So, I work with UNESCO. I’m UNESCO Chair …
S58
Global Enterprises Show How to Scale Responsible AI — So as I was mentioning earlier, I think it will first of all depend on the timing. Okay, so where is an enterprise in th…
S59
Building Population-Scale Digital Public Infrastructure for AI — The discussion highlighted that AI deployment differs fundamentally from traditional software procurement. Rather than a…
S60
Responsible AI in India Leadership Ethics & Global Impact — “I’m sure every organization today has a legal team, has a compliance team”[59]. “Legal teams have to re‑opt to talk abo…
S61
Global Enterprises Show How to Scale Responsible AI — So as I was mentioning earlier, I think it will first of all depend on the timing. Okay, so where is an enterprise in th…
S62
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Thank you so much. Whenever I speak publicly, they have to lower the microphone. They never raise it. I don’t know. Than…
S63
Scaling AI for Billions_ Building Digital Public Infrastructure — And the other is the adversarial part of the AI is that. though you use AI for cyber security but the issue is that ther…
S64
US CTA unveils new trustworthiness standard for healthcare AI — The US Consumer Technology Association (CTA)introduceda new standard to evaluate the trustworthiness of healthcare artif…
S65
Panel Discussion Inclusion Innovation & the Future of AI — No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful s…
S66
WS #139 Internet Resilience Securing a Stronger Supply Chain — Government role ranges from awareness and incentives to mandated requirements and enforced regulations
S67
Technology Regulation and AI Governance Panel Discussion — It was quite good and quite competitive, and it’s achieved a lot of adoption since then, as have a couple other Chinese …
S68
Process coordination: GDC, WSIS+20, IGF, and beyond — Proponents highlight that the multistakeholder approach encourages diversity in thought, leading to innovative solutions…
S69
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — How to balance innovation with regulation across different jurisdictions while maintaining global competitiveness is ong…
S70
Main Session on Sustainability & Environment | IGF 2023 — The analysis also underscores the importance of policymakers having up-to-date information for evidence-based decisions….
S71
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S72
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S73
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S74
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S75
Building Future Leaders – Competency Driven Succession Planning — The tone of the discussion was thoughtful and collegial, with panelists building on each other’s points. There was gener…
S76
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S77
Defending Truth — The Commission faces the challenge of navigating a future where private companies struggle to generate revenues and prof…
S78
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Data leakage is mentioned as a common occurrence that often happens without the organization’s awareness, and it qualifi…
S79
Ready for Goodbyes? : Critical System Obsolescence — In conclusion, the analysis provides a comprehensive overview of cybersecurity in relation to industrial control systems…
S80
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S81
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S82
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S83
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S84
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S85
Afternoon session — Moderate consensus with significant polarization. While there was broad agreement on core digital governance principles …
S86
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — From the analysis of these arguments, it can be inferred that while third-party tools offer convenience and efficiency i…
S87
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The level of disagreement was moderate and constructive. Speakers shared common goals of protecting submarine cable infr…
S88
Closing remarks — Minimal to no disagreement present. This transcript represents a closing ceremony where speakers (Doreen Bogdan Martin, …
S89
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S90
https://app.faicon.ai/ai-impact-summit-2026/smart-regulation-rightsizing-governance-for-the-ai-revolution — Thank you. I mean, you’ve done a brilliant job of putting all the free problems we’ve got and then saying you’ve got a l…
S91
Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135 — During the forum, the individual made multiple requests to leave, expressing gratitude several times by saying “thank yo…
S92
https://app.faicon.ai/ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — of responsible AI office in Infosys. And absolute privilege to announce my co -panelist, Geeta Gurnani, field CTO, techn…
S93
AI Meets Cybersecurity Trust Governance & Global Security — Impact:This comment created a significant shift in the discussion, moving away from purely regulatory solutions toward e…
S94
Session — Marilia Maciel: Thank you, Jovan. I’ll do that, but I’ll do that by going back to your question about what predominates,…
S95
How AI Drives Innovation and Economic Growth — Akcigit presented empirical evidence of troubling trends: market concentration in the United States has been increasing …
S96
Secure Talk Using AI to Protect Global Communications & Privacy — It’s unexpected that a fintech CEO would support making transactions more difficult, as this goes against the industry’s…
S97
WS #184 AI in Warfare – Role of AI in upholding International Law — Yasmin Afina: Yeah, perfect. Hi, thank you, everyone. It’s nice to meet you. My name is Yasmin Afina from the United Na…
S98
From principles to practice: Governing advanced AI in action — – Lack of consensus on what constitutes “intolerable risks” and appropriate risk thresholds globally Brian Tse: I think…
S99
Global Digital Governance & Multistakeholder Cooperation for WSIS+20 — Ebert calls for creating transparent governance rules that can keep pace with rapid AI development while ensuring benefi…
S100
AI governance debated at IGF 2025: Global cooperation meets local needs — At theInternet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S101
Scaling AI for Billions_ Building Digital Public Infrastructure — So I think Lakshmi touched on a very important point of the underlying, the fragility of the underlying infrastructure. …
S102
How can we deal with AI risks? — There are three types of risks:
S103
Building Trustworthy AI Foundations and Practical Pathways — Alright, I can take the clicker. So, I will keep it slightly brief and I’m going to skip over some slides in the interes…
S104
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safet…
S105
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Implementation and enforcement challenges Painter draws from his experience with cyber norms to highlight the challenge…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ms. Geeta Gurnani
9 arguments · 172 words per minute · 1995 words · 693 seconds
Argument 1
Industry surprise at ad‑hoc governance (Excel‑sheet approach)
EXPLANATION
Geeta highlighted that many senior leaders still manage AI governance using simple tools like Excel spreadsheets, which she finds inadequate for scaling responsible AI. This ad‑hoc approach reflects a surprising lack of mature governance processes despite years of experience in the field.
EVIDENCE
She recounted a conversation with a senior leader who, when asked to work on responsible AI, responded that governance was handled on an Excel sheet, noting that such a method prevents the organization from scaling AI responsibly [23-27].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
Shift‑left security mindset now central to AI projects
EXPLANATION
Geeta observed that security, once an afterthought, has become a primary consideration that is addressed early in AI project lifecycles. This shift‑left approach mirrors trends in software security and is now a prerequisite for AI deployments.
EVIDENCE
She explained that security has moved from being an afterthought to a “shift-left” priority, with organizations now thinking about security first before anything else in AI projects [17].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift-left approach aligns with the move toward human-centred security that prioritises protecting users early in the AI lifecycle [S9] and with observations that scaling errors require early security controls [S7].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
AGREED WITH
Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Argument 3
Trustworthy AI means end‑user confidence: security testing, hallucination monitoring, compliance
EXPLANATION
Geeta defined trustworthy AI as the ability of end‑users to rely on AI outputs, which requires that models pass security tests, are monitored for hallucinations, and meet applicable compliance requirements. These three pillars ensure that AI behaves predictably and safely for consumers.
EVIDENCE
She stated that a trustworthy AI system must have passed security testing, have controls to monitor hallucinations, and be compliant with relevant laws before an end-user can confidently use it [55-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gurnani’s definition of trustworthy AI as built on security assurance, output monitoring to prevent hallucinations, and compliance is echoed in the panel discussion where she stresses these three enablers [S2].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
AGREED WITH
Mr. Sundar R Nagalingam
Argument 4
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
EXPLANATION
Geeta emphasized that AI governance cannot be optional; it requires explicit commitment from senior leadership and must be embedded as a control point within the organization’s overall risk management framework. This integration ensures that AI risks are treated on par with other enterprise risks.
EVIDENCE
She described the need for senior leadership commitment, turning governance into a gate-keeping control rather than an observation, and integrating AI risk into the enterprise risk posture for consistent decision-making [129-134].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She stresses that governance must move beyond monitoring to active, leadership-driven control mechanisms, a point reinforced by the discussion on the need for strong senior commitment and automated tooling [S2].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
AGREED WITH
Mr. Sundar R Nagalingam
Argument 5
Establish technology‑level baseline standards first; geographies can then tailor
EXPLANATION
Geeta argued that the first step toward global AI regulation should be agreement on core technology standards, after which individual countries can adapt those standards to their specific regulatory contexts. This approach separates technical baselines from jurisdiction‑specific rules.
EVIDENCE
She stated that technology regulation should be discussed first as a “table stake” before geographic regulations are applied, suggesting a universal technical baseline [290-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gurnani advocates a technology-first regulatory approach, arguing that technologists should agree on stable technical “table stakes” before jurisdictions add their rules [S2].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
DISAGREED WITH
Mr. Sunil Abraham, Mr. Sundar R Nagalingam
Argument 6
Model development is outpacing governance frameworks; governance must catch up quickly
EXPLANATION
Geeta affirmed that AI model innovation is moving faster than the creation of governance structures, creating a gap that needs to be closed promptly. She sees this as a pressing challenge for the industry.
EVIDENCE
She responded with a concise “Absolutely.” when asked whether AI models are outpacing governance, indicating her agreement with the premise [301].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources note the rapid pace of AI innovation versus slower governance development, highlighting the need for agile, adaptive regulatory models [S14] and the challenge of keeping governance in step with technology [S16].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
AGREED WITH
Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Argument 7
Enterprises will pay a premium for trustworthy AI when the use case is consumer‑facing or carries significant downstream risk
EXPLANATION
Geeta explained that organizations are willing to invest in higher‑cost, trust‑grade AI solutions when the AI directly impacts customers or brand reputation, whereas internal or low‑risk use cases may not justify the premium. The decision hinges on the perceived downstream risk and ROI.
EVIDENCE
She noted that enterprises are prepared to pay for premium trustworthy AI when the use case is consumer-facing and involves reputation, compliance, or brand risk, but may forgo the premium for internal experiments or POCs [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion links willingness to invest in trust-grade AI to perceived downstream risk and ROI considerations for consumer-facing applications [S1].
MAJOR DISCUSSION POINT
Market willingness to pay for trust‑grade AI
Argument 8
Observed project halts or delays when compliance or ethical boards raise issues
EXPLANATION
Geeta mentioned that while she has not personally stopped projects, she has witnessed projects being halted or delayed due to compliance or ethical board interventions, illustrating the practical impact of governance mechanisms.
EVIDENCE
She clarified that she is not on IBM’s ethical board but has seen projects stopped when compliance concerns arise [313].
MAJOR DISCUSSION POINT
Project stoppage due to safety concerns
Argument 9
Personal view that creative industries may need demarcation, but future tech might make it unnecessary
EXPLANATION
Drawing from her son’s perspective as a creative director, Geeta expressed that while watermarking or demarcation may be needed now for creative works, advances in technology could eventually render explicit watermarks unnecessary.
EVIDENCE
She shared her son’s opinion that creative industries require clear demarcation of AI-generated content, yet suggested that future tools might eliminate the need for watermarks [338].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns from the creative sector about AI-generated content and calls for clear demarcation are documented, while broader human-rights perspectives also stress the need for watermarking or other markers [S20][S17].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
Mr. Sundar R Nagalingam
8 arguments · 183 words per minute · 1756 words · 573 seconds
Argument 1
System‑level controls, not infrastructure, are the first failure points at scale
EXPLANATION
Sundar argued that when AI systems scale to billions of users, the breakdown typically occurs in the control systems that manage the infrastructure rather than the hardware itself. These systemic controls become the weak link under massive load.
EVIDENCE
He explained that the systems driving the infrastructure break first, not the infrastructure itself, highlighting control-layer failures as the primary risk at scale [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nagalingam explains that at massive scale the infrastructure itself remains intact, but the systems managing it-control layers-are the weak link, supporting this view [S2].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
Control and security vulnerabilities break before hardware infrastructure
EXPLANATION
Sundar emphasized that security and control vulnerabilities are likely to surface before any hardware failures when AI services are delivered to a massive user base. Overlooked vulnerabilities can cause catastrophic failures even if the hardware remains functional.
EVIDENCE
He noted that a small, overlooked vulnerability in the control mechanisms could constitute a huge failure, indicating that security controls break before the underlying hardware [34-35].
MAJOR DISCUSSION POINT
Failure modes when AI scales to billions of users
AGREED WITH
Ms. Geeta Gurnani, Mr. Sunil Abraham
Argument 3
Functional failures in AI service delivery (micro‑services, safety checks) are critical
EXPLANATION
Sundar pointed out that failures can also arise from how AI services are orchestrated, such as micro‑service breakdowns or missing safety checks, which affect the functionality delivered to end‑users.
EVIDENCE
He described possible failure modes including inefficient micro-service delivery and the lack of safety or control checks, which could cause functional breakdowns even when infrastructure appears healthy [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlights potential breakdowns in micro-service orchestration and missing safety checks as key functional failure modes when AI services scale [S2].
MAJOR DISCUSSION POINT
Failure modes when AI scales to billions of users
Argument 4
Three core buckets: functional safety, AI safety, cybersecurity
EXPLANATION
Sundar proposed a high‑level framework that groups trustworthy AI requirements into three categories: functional safety (the AI does what it is supposed to), AI safety (robustness, bias mitigation), and cybersecurity (protection against attacks). This structure can be applied across regulators and industries.
EVIDENCE
He outlined the three buckets-functional safety, AI safety, and cybersecurity-using the example of AI-assisted robotic surgery to illustrate each component [68-75].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
AGREED WITH
Ms. Geeta Gurnani
Argument 5
Privacy and safety guardrails should be baked into silicon for high‑risk domains (autonomous driving, healthcare)
EXPLANATION
Sundar argued that for safety‑critical applications such as autonomous vehicles and healthcare, privacy and safety mechanisms need to be embedded at the hardware level to ensure robust protection before software layers are applied.
EVIDENCE
He affirmed that high-performance AI infrastructure should include embedded privacy guardrails at the silicon level for domains like autonomous driving and healthcare, stating “absolutely yes” to the suggestion [148-158].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
Argument 6
Standardize core safety, then fine‑tune per country/regulation
EXPLANATION
Sundar suggested a two‑step approach: first create a standardized safety baseline for AI platforms, then adapt or fine‑tune that baseline to meet the specific regulatory requirements of each geography.
EVIDENCE
He described a process where a safe platform becomes a template that can be tweaked for each country’s needs, emphasizing standardization followed by localized adjustments [240-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nagalingam proposes a two-step approach: first create a standardized safety baseline, then adapt it to local regulatory requirements, mirroring the technology-first stance discussed in the panel [S2].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
AGREED WITH
Ms. Geeta Gurnani
DISAGREED WITH
Ms. Geeta Gurnani, Mr. Sunil Abraham
Argument 7
Natural progression: technology leads, governance follows
EXPLANATION
Sundar noted that it is natural for technological advances to outpace governance, with governance catching up after the technology has matured. This reflects the typical evolution of emerging tech ecosystems.
EVIDENCE
He stated that model development naturally leads and governance follows, describing it as “the natural way things happen” [302-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel repeatedly notes that governance traditionally lags behind rapid AI advances, underscoring the natural order of technology outpacing policy [S14][S16].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
AGREED WITH
Ms. Geeta Gurnani, Mr. Sunil Abraham
Argument 8
Supports watermarking to clearly demarcate AI‑created media
EXPLANATION
Sundar expressed support for watermarking AI‑generated content, arguing that it helps distinguish machine‑produced media from human‑created content, though he cautioned about potential blurring of lines.
EVIDENCE
He said “absolutely” to the idea of watermarking and discussed the need for demarcation while acknowledging the blurry line between AI and human content [333-335].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for demarcation of AI-generated content is highlighted both from a human-rights perspective and by concerns within the creative industry, reinforcing support for watermarking [S17][S20].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
DISAGREED WITH
Mr. Sunil Abraham, Ms. Geeta Gurnani
Mr. Sunil Abraham
8 arguments · 167 words per minute · 2384 words · 851 seconds
Argument 1
Skepticism toward anthropomorphizing AI; focus on ontology and epistemology of models
EXPLANATION
Sunil expressed strong skepticism about treating AI systems as if they possess human qualities, insisting that they are merely technological artifacts. He emphasized the need to consider the ontological nature of AI models and the epistemological questions about truth and responsibility.
EVIDENCE
He repeatedly said “I don’t see it” and argued that AI is just technology, then discussed ontology (the nature of the weight file) and epistemology (the nature of truth about the file) [36-42].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
AI as a dual‑use “weight file” requires ontological and epistemological caution
EXPLANATION
Sunil highlighted that a generative AI model is essentially a single weight file, which can be used for both beneficial and harmful purposes. This dual‑use nature demands careful philosophical and ethical scrutiny regarding its deployment.
EVIDENCE
He described the core of generative AI as a single weight file, noting its dual-use potential and the challenges of assigning responsibility and truth to it [44-49].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
Argument 3
Hardware attack surface and trusted execution environments are active research areas
EXPLANATION
Sunil referenced Meta’s research on Trusted Execution Environments (TEEs) and the broader landscape of hardware‑level attacks, indicating that securing the hardware stack is a critical and ongoing area of investigation.
EVIDENCE
He mentioned Meta’s paper on trusted execution environments, the hardware attack surface, and a series of possible attacks such as supply-chain and pager attacks, underscoring active research in this domain [166-176].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
AGREED WITH
Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Argument 4
Advertising can subsidize free AI access and help bridge the digital divide without violating neutrality
EXPLANATION
Sunil argued that ad‑supported AI services can provide free access to a broad population, thereby narrowing the digital divide, and that this model does not necessarily breach AI neutrality principles.
EVIDENCE
He explained that ads enable free AI usage for both affluent and low-income users, helping move from 25% to 90% AI adoption, and positioned ads as a technical solution to bridge the divide [182-202].
MAJOR DISCUSSION POINT
Monetization via ads and AI neutrality
Argument 5
AI is already subject to regulation; there is no regulatory vacuum
EXPLANATION
Sunil asserted that AI is already regulated in many jurisdictions, citing statements from policymakers to counter the notion of a regulatory gap. He emphasized that existing laws already apply to AI activities.
EVIDENCE
He quoted Lina Khan, stating that “there is no regulatory vacuum for AI,” thereby rejecting the idea that AI lacks regulation [295-296].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
Argument 6
Governance has never preceded AI advances
EXPLANATION
Sunil noted that historically, technological breakthroughs have always come before the establishment of governance frameworks, implying that AI governance will continue to follow technological progress.
EVIDENCE
He succinctly said “It’s never happened in the reverse order,” confirming that governance has always lagged behind AI innovation [305].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion points out that historically governance frameworks have always followed technological breakthroughs, confirming this observation [S14][S16].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
AGREED WITH
Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Argument 7
Meta disabled facial‑recognition features over safety concerns
EXPLANATION
Sunil provided a concrete example of a major tech company taking action for safety by turning off facial‑recognition capabilities on its platform, illustrating how safety concerns can lead to feature removal.
EVIDENCE
He stated plainly that “facial recognition was turned off on Facebook,” indicating a safety-driven decision [315].
MAJOR DISCUSSION POINT
Project stoppage due to safety concerns
Argument 8
Expresses hesitation and frames the issue as a question rather than a direct stance
EXPLANATION
When asked about mandatory watermarking, Sunil responded with a question instead of a clear yes or no, reflecting uncertainty or reluctance to take a definitive position on the policy.
EVIDENCE
He answered the watermarking question by replying with a question, saying “I’m answering with a question” and did not provide a direct yes/no response [327-330].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
Mr. Syed Ahmed
10 arguments · 144 words per minute · 2370 words · 985 seconds
Argument 1
Moderator observation that trust must be built before AI can scale
EXPLANATION
Syed emphasized that while AI’s capabilities are evident, large‑scale adoption will only happen once robust trust mechanisms are in place. He framed trust as a prerequisite for scaling AI responsibly.
EVIDENCE
He summarized the panel’s point that “true scales can come only when you start trusting AI,” highlighting the need to build trust before scaling [29-33].
MAJOR DISCUSSION POINT
Evolution of trust and responsible AI governance
Argument 2
Moderator agreement that these are key concerns
EXPLANATION
Syed echoed Sundar’s points about control and security being primary failure points, confirming that the panel collectively sees these issues as critical when AI scales.
EVIDENCE
He responded with “excellent i totally agree with you” after Sundar’s description of failure points, indicating agreement [32-33].
MAJOR DISCUSSION POINT
Failure modes when AI scales to billions of users
Argument 3
Moderator framing of the question
EXPLANATION
Syed introduced the central panel question asking each participant to define trustworthy AI and its non‑negotiables, setting the stage for the subsequent discussion.
EVIDENCE
He asked, “what does it mean by trustworthy AI in your own sense and what are the key non-negotiables,” directing the conversation to that theme [50-52].
MAJOR DISCUSSION POINT
Defining trustworthy AI and its non‑negotiables
Argument 4
Moderator prompting on implementation
EXPLANATION
Syed queried how IBM ensures that responsible‑AI tools move beyond monitoring and become enforced at runtime, pushing the panel to discuss practical deployment of governance controls.
EVIDENCE
He asked, “how do you ensure that these tools don’t remain at just a monitoring layer and get enforced on the ground at the runtime?” [108-113].
MAJOR DISCUSSION POINT
Embedding governance into runtime and enterprise risk
Argument 5
Moderator query on the impact of ads
EXPLANATION
Syed raised the question of whether embedding advertisements in consumer AI platforms like ChatGPT would subsidize services or undermine AI neutrality, seeking the panel’s view on this monetization model.
EVIDENCE
He asked, “will it help consumers subsidize their subscription or will it kind of violate the doctrine of free AI principles?” [181-184].
MAJOR DISCUSSION POINT
Monetization via ads and AI neutrality
Argument 6
Moderator seeking market insight
EXPLANATION
Syed asked whether enterprises are willing to pay a premium for trust‑grade AI, probing the commercial viability of responsible‑AI offerings.
EVIDENCE
He inquired, “are you seeing that influencing the buying decisions… would anyone invest in responsible AI?” [205-214].
MAJOR DISCUSSION POINT
Market willingness to pay for trust‑grade AI
Argument 7
Moderator asks about global alignment
EXPLANATION
Syed posed a rapid‑fire yes/no question about whether there should be global alignment on AI regulations, prompting the panel to consider the feasibility of worldwide standards.
EVIDENCE
He asked, “Regulations. Yes or no?” during the rapid-fire segment [279-280].
MAJOR DISCUSSION POINT
Global regulatory alignment vs. technology standards
Argument 8
Moderator highlights the speed gap
EXPLANATION
Syed highlighted the concern that AI model innovation is outpacing governance, framing it as a critical challenge for the panel to address.
EVIDENCE
He asked, “Are the models and the innovation outpacing governance?” and noted the speed gap [298-300].
MAJOR DISCUSSION POINT
Pace of AI model innovation vs. governance
Argument 9
Moderator probes for examples
EXPLANATION
Syed requested concrete instances where projects were halted due to safety or compliance concerns, seeking real‑world evidence of governance impact.
EVIDENCE
He asked, “have you stopped any projects due to safety concerns?” prompting examples from the panelists [312].
MAJOR DISCUSSION POINT
Project stoppage due to safety concerns
Argument 10
Moderator attempts to elicit a yes/no answer
EXPLANATION
In the rapid‑fire segment, Syed pressed Sunil for a definitive yes/no response on mandatory watermarking, illustrating his effort to obtain concise positions from the panel.
EVIDENCE
He asked, “Should we have mandatory watermarking…?” and followed up with “Yes?” after Sunil’s evasive reply [323-329].
MAJOR DISCUSSION POINT
Mandatory watermarking of AI‑generated content
Agreements
Agreement Points
Security and control vulnerabilities are primary failure points when AI systems scale
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Shift‑left security mindset now central to AI projects
Control and security vulnerabilities break before hardware infrastructure
Hardware attack surface and trusted execution environments are active research areas
All three panelists stress that security must be addressed early and that security or control failures are likely to break AI services before any hardware failure, making security a critical layer for trustworthy AI at scale [17][34-35][166-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses show that scaling failures often stem from security and control issues, with system-driven infrastructure breaking first as AI scales [S49] and recent AI-related security incidents outpacing existing frameworks [S50].
Trustworthy AI requires multiple non‑negotiable layers (security, functional/AI safety, compliance/cybersecurity)
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Trustworthy AI means end‑user confidence: security testing, hallucination monitoring, compliance
Three core buckets: functional safety, AI safety, cybersecurity
Both speakers define trustworthy AI as a set of layered guarantees: Geeta emphasizes security testing, hallucination monitoring and legal compliance for end-users, while Sundar groups requirements into functional safety, AI safety and cybersecurity, showing a shared three-layer view of trustworthiness [55-64][68-75].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for layered safeguards covering security, functional safety and compliance is echoed in trust-centric AI literature and emerging frameworks that prescribe separate safety, robustness and governance layers [S38][S48][S53].
AI governance should be embedded as a systematic, organization‑wide control mechanism, standardized then adapted per jurisdiction
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
Standardize core safety, then fine‑tune per country/regulation
Geeta calls for senior-leadership-driven, gate-keeping governance integrated into enterprise risk, while Sundar proposes a baseline safety standard that can be fine-tuned for each geography, indicating consensus on a structured, standardized governance approach [129-134][240-249].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the EU AI Act’s risk-based, organization-level controls that must be harmonized across member states while allowing local adaptation [S45] and with identified gaps in corporate risk-management processes [S51].
AI model innovation is outpacing governance frameworks
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Model development is outpacing governance frameworks; governance must catch up quickly
Natural progression: technology leads, governance follows
Governance has never preceded AI advances
All three agree that AI advances faster than the creation of governance structures, creating a gap that must be closed rapidly [301][302-304][305].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Secretary-General highlighted the gap between rapid AI advances and slower policy understanding, a pattern repeatedly observed as governments lag behind technological developments [S30][S33][S42].
Similar Viewpoints
Both see AI governance as needing a top‑down, standardized foundation that is then customized for specific regulatory contexts, rather than ad‑hoc or siloed processes [129-134][240-249].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
Standardize core safety, then fine‑tune per country/regulation
All three highlight that security considerations must be baked in early (shift‑left) and that vulnerabilities in control layers are the most likely failure points, underscoring security as a foundational element of trustworthy AI [17][34-35][166-176].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Shift‑left security mindset now central to AI projects
Control and security vulnerabilities break before hardware infrastructure
Hardware attack surface and trusted execution environments are active research areas
Each acknowledges the historical pattern where AI capabilities outstrip governance, indicating a shared concern about the speed gap between innovation and regulation [301][302-304][305].
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Model development is outpacing governance frameworks; governance must catch up quickly
Natural progression: technology leads, governance follows
Governance has never preceded AI advances
Unexpected Consensus
All three speakers independently propose a three‑layer or three‑bucket model for trustworthy AI
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Trustworthy AI means end‑user confidence: security testing, hallucination monitoring, compliance
Three core buckets: functional safety, AI safety, cybersecurity
Hardware attack surface and trusted execution environments are active research areas
While Geeta frames trust in terms of security, hallucination control and compliance, Sundar groups requirements into functional safety, AI safety and cybersecurity, and Sunil emphasizes hardware-level protections (TEEs). The convergence on a multi-layered trust architecture was not explicitly coordinated, yet all three arrived at a similar structural view of trust, which is an unexpected alignment [55-64][68-75][166-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder workshops report a converging view that trustworthy AI can be organized into three core buckets (e.g., safety, security, compliance) [S47][S48].
Consensus that governance always lags behind AI advances, despite differing professional backgrounds
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Model development is outpacing governance frameworks; governance must catch up quickly
Natural progression: technology leads, governance follows
Governance has never preceded AI advances
Even though Geeta, Sundar and Sunil represent different organizations (IBM, NVIDIA, Meta), they all affirm the same historical pattern, which is notable given their varied perspectives on AI policy and product development [301][302-304][305].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources note the systemic lag of regulation relative to AI progress, from UN statements to industry-government roundtables [S30][S31][S33][S42].
Overall Assessment

The panel shows strong convergence on three core themes: (1) security must be addressed early and is the most likely failure point at scale; (2) trustworthy AI is best expressed as a multi‑layered framework covering functional safety, AI safety, cybersecurity, and compliance; (3) AI innovation outpaces governance, creating a pressing need for standardized, leadership‑driven governance that can be adapted per jurisdiction.

High – consensus across speakers on the importance of security, layered trust mechanisms, and the speed gap between AI development and governance. This consensus suggests that industry leaders recognize common challenges and are likely to collaborate on standards, leadership mandates, and rapid governance mechanisms to enable responsible AI deployment.

Differences
Different Viewpoints
Scope and sequencing of global AI regulation versus technology‑first baseline
Speakers: Ms. Geeta Gurnani, Mr. Sunil Abraham, Mr. Sundar R Nagalingam
Establish technology‑level baseline standards first; geographies can then tailor
AI is already regulated; there is no regulatory vacuum
Standardize core safety, then fine‑tune per country/regulation
Geeta argues that the first step should be a universal technical baseline before any geographic regulation is applied [290-294]. Sunil counters that AI is already covered by existing laws and there is no regulatory gap to fill [295-296]. Sundar proposes a two-step approach: create a safe, standardized platform and then adapt it to each country’s rules [240-249]. The three positions differ on whether a new global alignment effort is needed, on its timing, and on the extent of existing regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over whether regulation should precede or follow deployment is reflected in discussions on risk-based regulatory sequencing versus technology-first approaches [S41][S43][S45][S44].
Mandatory watermarking of AI‑generated content
Speakers: Mr. Sundar R Nagalingam, Mr. Sunil Abraham, Ms. Geeta Gurnani
Supports watermarking to clearly demarcate AI‑created media
Evasive response, does not give a clear yes/no answer
Future technology may make explicit watermarking unnecessary
Sundar explicitly backs mandatory watermarking, saying it helps distinguish AI content from human-generated material [333-335]. Sunil avoids a direct stance, replying with a question and offering no yes/no answer [327-330]. Geeta adds that while demarcation is currently needed, advances may eventually render watermarks obsolete [338]. The panel therefore shows clear disagreement on the policy prescription.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy proposals call for compulsory watermarks to combat disinformation [S54]; technical methods such as Tree-Ring watermarking have been demonstrated [S55]; the EU AI Act also mandates labeling of synthetic media [S56][S57].
Unexpected Differences
Existence of a regulatory vacuum for AI
Speakers: Ms. Geeta Gurnani, Mr. Sunil Abraham
Calls for technology‑first baseline before geographic regulation
States that AI is already regulated and there is no vacuum
Given the panel’s composition of senior technologists from major AI firms, it is surprising that Sunil asserts a fully covered regulatory landscape, directly contradicting Geeta’s call for coordinated baseline standards and further alignment [295-296] vs. [290-294]. This unexpected clash reveals differing perceptions of regulatory sufficiency.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses describe a “regulatory void” for AI, especially in military applications, and note the broader lack of comprehensive legal frameworks [S52][S44][S33].
Attitude toward anthropomorphizing AI
Speakers: Mr. Sunil Abraham, Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Skepticism toward anthropomorphizing AI; focus on ontology and epistemology
Treats AI trust as a practical, user‑facing engineering problem
Frames AI failures in terms of system controls and safety buckets
Sunil repeatedly rejects any human-like framing of AI, emphasizing its status as a weight file and raising ontological and philosophical concerns [36-42]. In contrast, Geeta and Sundar discuss trust, safety, and governance in concrete, operational terms without invoking ontology, indicating an unexpected philosophical divergence within the same technical discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholars warn that attributing agency to AI creates conceptual challenges and recommend avoiding human-like mental models [S34][S35]; policy discussions emphasize human-centric safeguards over AI personhood [S36].
Overall Assessment

The panel shows moderate but substantive disagreement. Core points of contention revolve around the need for a unified global regulatory framework versus a technology‑first baseline, and the policy instrument of mandatory watermarking. While all participants concur on the importance of trustworthy AI, they propose divergent routes—leadership‑driven governance, system‑level standardization, or philosophical reframing. These differences suggest that consensus on implementation will require bridging gaps between policy‑oriented, technical, and philosophical perspectives.

Medium – the disagreements are focused on strategic approaches rather than outright denial of the problem, implying that coordinated multi‑stakeholder work will be needed to align on standards, regulation, and content‑labeling policies.

Partial Agreements
All three panelists agree that trustworthy AI is a prerequisite for scaling AI systems, but they diverge on how to achieve it. Geeta focuses on end‑user confidence through security testing, hallucination monitoring, and compliance [55-64]. Sundar structures the problem into functional safety, AI safety, and cybersecurity layers [68-75]. Sunil stresses the philosophical nature of the artefact, urging attention to ontology and epistemology rather than treating AI as a human‑like entity [36-42]. Thus, while the goal of trustworthy AI is shared, the pathways—operational controls, safety buckets, or philosophical framing—are contested.
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam, Mr. Sunil Abraham
Trustworthy AI is essential for large‑scale adoption
Three core buckets (functional safety, AI safety, cybersecurity) define trustworthy AI
AI is a dual‑use weight file that requires ontological and epistemological caution
Both agree that governance cannot be optional and must be embedded in organisational processes. Geeta stresses top‑down commitment, gate‑keeping, and integration with enterprise risk management [129-134]. Sundar highlights that the breakdowns at scale occur in the control systems that manage infrastructure, implying that robust, standardized controls are essential [34-35]. They share the objective of embedding governance, but differ on emphasis—leadership‑driven policy versus technical system‑level control.
Speakers: Ms. Geeta Gurnani, Mr. Sundar R Nagalingam
Senior leadership must mandate AI governance as a gate‑keeping control, integrated into enterprise risk posture
System‑level controls and standardization are the first failure points at scale
Takeaways
Key takeaways
Trust and responsible AI are moving from an after‑thought to a core, shift‑left priority across enterprises.
Governance failures, not hardware infrastructure, are the first points of breakage when AI scales to billions of users.
Trustworthy AI is defined by end‑user confidence, requiring security testing, hallucination monitoring, and compliance with applicable laws.
Three universal safety buckets emerged: functional safety, AI‑specific safety, and cybersecurity.
Embedding governance at runtime demands senior‑leadership mandate, gate‑keeping controls, and integration into the overall enterprise risk framework.
Hardware‑level privacy and safety guardrails (e.g., trusted execution environments) are essential for high‑risk domains such as autonomous driving and healthcare.
Advertising can subsidize free AI access and help bridge the digital divide without necessarily violating AI neutrality.
Enterprises are willing to pay a premium for “trust‑grade” AI when the use case is consumer‑facing or carries significant downstream risk.
Global regulatory alignment should start with technology‑level baseline standards, which can then be tailored to individual jurisdictions.
AI model innovation is outpacing governance frameworks; governance must accelerate to keep pace.
Resolutions and action items
Senior leadership in organizations should formally mandate AI governance as a non‑optional, gate‑keeping function.
AI risk should be folded into the enterprise risk management process rather than treated as a separate silo.
Develop and deploy silicon‑level privacy and safety guardrails for high‑risk AI applications (e.g., autonomous vehicles, medical devices).
Adopt a conservative, “Unix‑style” control model where AI services are blocked unless they pass predefined safety and compliance checks.
Standardize core safety, algorithmic, and ecosystem requirements at the technology level, then fine‑tune for each geography’s regulations.
Unresolved issues
How to achieve true global regulatory alignment without creating a one‑size‑fits‑all legal framework.
The long‑term impact of embedding advertisements in consumer‑facing AI services on user trust and AI neutrality.
Whether mandatory watermarking of AI‑generated content should be enforced across all media types.
Specific mechanisms for stopping or delaying AI projects when safety concerns arise beyond internal compliance reviews.
Governance approaches for emerging dual‑use scenarios (e.g., synthetic hate‑speech corpora for low‑resource languages).
Suggested compromises
Adopt a shift‑left security mindset while allowing organizations to start with lightweight governance (e.g., Excel‑sheet tracking) as an interim step.
Offer tiered trust guarantees: premium, fully‑governed AI for consumer‑facing or high‑risk use cases, and lighter controls for internal experimentation.
Balance open‑source freedom with responsibility by retaining licensing freedoms but applying stricter controls when models are deployed on proprietary platforms.
Use conservative (risk‑averse) defaults in AI deployment pipelines to enable scaling while minimizing early‑stage failures.
Thought Provoking Comments
Follow-up Questions
How can organizations move beyond using Excel sheets for AI governance to scalable, automated governance frameworks?
Geeta highlighted that senior leaders were managing AI governance with Excel, indicating a need for more robust, scalable tools and processes.
Speaker: Geeta Gurnani
What specific control mechanisms or system designs can prevent functional or security failures when AI services scale to billions of users?
Sundar noted that failures often stem from how AI is served rather than infrastructure, suggesting a need for detailed controls.
Speaker: Sundar R. Nagalingam
How can we establish epistemic trust and provenance for AI model weight files to address ontological and epistemological concerns?
Sunil discussed ontology and epistemology of weight files, indicating research is needed on verifying truth and trustworthiness of model artifacts.
Speaker: Sunil Abraham
What are the ethical and societal implications of embedding advertisements in generative AI services as a means to subsidize access?
Sunil raised the issue of ad-supported AI models, prompting investigation into privacy, bias, and equity impacts.
Speaker: Sunil Abraham
What frameworks or best practices can embed AI risk into existing enterprise risk management (ERM) processes?
Geeta suggested AI risk should be part of overall enterprise risk posture, requiring guidance on integration.
Speaker: Geeta Gurnani
How can industry develop universal AI safety standards that can be efficiently tailored to meet diverse geographic regulatory requirements?
Sundar described the challenge of consistent trust enforcement across geographies, indicating a need for adaptable standardization approaches.
Speaker: Sundar R. Nagalingam
How can the open‑source community balance freedom of use with responsibility for dual‑use or harmful AI applications?
Sunil highlighted tensions between open‑source freedom and liability, calling for policies or mechanisms to manage dual‑use risks.
Speaker: Sunil Abraham
What should constitute a minimal set of technology‑level safeguards that all AI systems must meet globally, regardless of jurisdiction?
Both participants debated the need for baseline technical requirements before regional regulations, suggesting a research agenda for universal safeguards.
Speaker: Geeta Gurnani, Sundar R. Nagalingam
What are the technical feasibility, effectiveness, and societal impact of mandatory watermarking for AI‑generated media and text?
The panel debated mandatory watermarking, indicating a need for studies on detection, compliance, and user perception.
Speaker: Sunil Abraham, Geeta Gurnani
How can scalable moderation frameworks be designed to handle AI‑generated content across the ‘zero‑to‑one’ and ‘one‑to‑one’ interaction models?
Sunil introduced two mental models for content moderation, pointing to a research gap in adaptable moderation strategies.
Speaker: Sunil Abraham
What strategies can be employed to manage dual‑use risks of generative AI, ensuring safety while enabling beneficial applications?
He discussed dual‑use concerns, especially around synthetic hate‑speech corpora, highlighting a need for risk mitigation research.
Speaker: Sunil Abraham
How should accountability be assigned when AI systems cause large‑scale failures, given the difficulty of attributing blame to a non‑human entity?
Sundar raised the accountability dilemma for autonomous systems, suggesting a need for legal and governance frameworks.
Speaker: Sundar R. Nagalingam
What privacy guardrails can be embedded at the silicon level of GPUs and other high‑performance AI hardware?
He affirmed the need for built‑in privacy protections, prompting investigation into hardware‑level privacy solutions.
Speaker: Sundar R. Nagalingam
What are the practical implications and security considerations of Meta’s Trusted Execution Environment approach for edge AI processing?
Sunil referenced Meta’s paper, indicating a need for deeper analysis of hardware attacks and privacy in TEEs.
Speaker: Sunil Abraham
How can AI governance be operationalized at runtime within CI/CD pipelines to shift from observation to enforcement?
Geeta emphasized moving governance from monitoring to control at runtime, requiring tooling and process research.
Speaker: Geeta Gurnani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

From Technical Safety to Societal Impact: Rethinking AI Governance

From Technical Safety to Societal Impact: Rethinking AI Governance

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by noting that AI safety is often framed only in technical terms and that a broader focus on multidisciplinarity, governance and real-world impact is needed, emphasizing that the core question is what determines whether AI systems create societal value or cause harm when deployed [14-19]. Virginia Dignum highlighted this shift from purely technical concerns to a societal perspective [14-19].


Lourino Chemane described Mozambique’s approach, defining safety as the protection of people and arguing that AI governance must prioritize human, social and institutional impacts rather than solely robustness or alignment metrics [31-34]. He highlighted the need for multidisciplinary input from law, ethics, education and affected communities, continuous human oversight, and specific safeguards for children, women and youth [35-38]. Mozambique is drafting a national AI strategy, data policy and cyber-security measures, and has already regulated data-centres and cloud computing to secure infrastructure and democratic processes [41-45].


Wendy Hall criticized the summit’s size and lack of genuine debate, pointing out that women were virtually absent from decision-making panels despite rhetoric of inclusivity [66-68][78-84]. She called for systematic monitoring and longitudinal studies, citing Australia’s experiment with a ban on under-16 social-media use as an example of how poorly thought-out restrictions can backfire [89-103].


Yannis Ioannidis distinguished between the safety of AI technology itself and the safety of its use, insisting that interdisciplinary collaboration, including the humanities, law and ethics, is essential to regulate inputs, outputs and deployment contexts [108-119][120-124]. Sara Hooker argued that safety discussions have become more precise over time but remain vague, urging the community to openly state the trade-offs made by models, such as which languages are covered and what risks are accepted [141-154][165-169].


Jibu Elias warned that AI is increasingly a political and extractive force, noting the marginalisation of tribal languages in India and the environmental harms caused by data-centre construction [195-206][207-214]. Neha Kumar reinforced the need for genuine inclusivity, suggesting that feminist and design studies can help ask who decides, who benefits and how development agendas avoid repeating past inequities [285-302]. Merve Hickok reminded the panel that history shows safety does not happen automatically; she called for a narrative shift that places human rights and democratic participation at the centre of AI governance [271-283]. Rasmus Andersen and Tom Romanoff highlighted the role of policy advisors and ACM in translating technical insights into governmental action, while Jeanna Matthews stressed that without enforceable “musts” rather than mere “shoulds,” AI safety will remain aspirational [250-256][261-265][267-270]. The session concluded with a call to continue the conversation, develop a concrete measurement framework over the next year, and press for binding regulatory commitments [375-378].


Keypoints


Major discussion points


Broadening the definition of AI safety – The panel opened by stating that safety has been framed mainly in technical terms (model alignment, robustness, etc.) and argued that real-world value or harm depends on deployment context, governance capacity, and institutional factors [14-19]. Lourino Chemane echoed this, emphasizing that safety must protect people and consider “human, social, and institutional impact” beyond robustness or accuracy [30-34]. Yannis Ioannidis added that safety concerns lie in the use of AI, requiring input from law, ethics, and other humanities disciplines [108-119].


Inclusion and diversity as safety prerequisites – Dame Wendy Hall highlighted the stark gender imbalance at the summit, noting that “50 % of the population weren’t included yesterday, the women” and arguing that lack of diverse voices makes ethical AI impossible [78-88]. Neha Kumar reinforced this by calling for feminist and women’s-studies perspectives to ask “who is making decisions? who is being benefited?” and to move from rhetoric to concrete inclusive practices [285-293].


New institutional mechanisms for monitoring and enforcing safety – Mozambique’s strategy includes data-policy drafting, cyber-security strategy, and regulation of data-centres to secure national sovereignty [45-48]. Wendy Hall introduced the idea of “AI measurement” and “AI metrology” as a science of studying “social machines,” proposing a dedicated journal and measurement centre [89-100]. Rasmus Andersen and Tom Romanoff discussed how governments can set thresholds (e.g., the 51 % rule) and translate technical recommendations into policy [250-256][326-340].


Transparency about trade-offs and explicit reporting – Sara Hooker warned that safety discussions often hide the compromises made in model design. She urged providers to “report what language models cover, what they don’t test for,” making clear what has been given up in exchange for performance [165-176].


Historical lessons and urgency to act – Several speakers warned that without decisive action AI could repeat exploitative patterns of the past. Jibu Elias described the extractive impact of data-centre construction on tribal communities [192-199]; Merve Hickok stressed that history shows “the powerful dictate the narrative” and that safety narratives must change [271-280]. Tom Romanoff concluded by urging participants not to remain “moderates” but to push for concrete regulation [336-357].


Overall purpose / goal


The session aimed to shift the conversation on AI safety from a narrow, technical focus to a multidimensional framework that incorporates governance, policy, societal impact, and inclusive participation. By surfacing concrete examples (national strategies, regulatory thresholds, measurement initiatives) the panel sought to identify actionable pathways for making AI systems safe in the real world.


Overall tone


The discussion began with a formal, introductory tone but quickly turned critical and urgent. Speakers expressed frustration with superficial “platitude” statements, highlighted gender and cultural exclusion, and warned that past harms could be repeated. The tone grew increasingly activist and prescriptive, culminating in calls for concrete institutional measures and personal responsibility to “insist” on safety and regulation. Throughout, the mood shifted from explanatory to impassioned advocacy.


Speakers

Jibu Elias – Researcher and activist who examines how technology and innovation institutions receive knowledge, labor, and legitimacy; Responsible Computing Lead for India at Mozilla. [S2][S1]


Lourino Chemane – Chairman of the Board of the National Institute of Information and Communication Technology in Mozambique; leading Mozambique’s national AI strategy. [S3]


Rasmus Andersen – Advisor at the Tony Blair Institute of Government, advising prime ministers, presidents and ministers on AI policy and governance. [S4][S5]


Jeanna Matthews – Co-host of the second session of the panel (moderator).


Virginia Dignum – Co-host of the session; Chair of the Technology Policy Council of ACM. [S9][S10]


Participant – Unspecified attendee; no additional role provided.


Yannis Ioannidis – President of ACM; Professor at the University of Athens. [S15]


Tom Romanoff – Director of Policy for ACM; manages ACM policy committees; former think-tank staff in Washington, D.C. [S16][S18]


Dame Wendy Hall – Regius Professor of Computer Science, Associate Vice-President and Director of the Web Science Institute at the University of Southampton; former member of the UN high-level expert advisory body. [S19]


Merve Hickok – President and Policy Director of the Center for AI and Digital Policy, an independent think-tank focusing on AI policy, human rights, democratic values and rule of law. [S20][S21]


Speaker 2 – Appears to be an event moderator/host; no further details provided.


Neha Kumar – Associate Professor at Georgia Tech School of Interactive Computing; President of the ACM SIGCHI (Special Interest Group on Computer-Human Interaction). [S25]


Sara Hooker – Co-founder and President of Adaption Labs; former roles at Cohere and other organizations; computer scientist focusing on language models and AI safety trade-offs. [S26][S27]




Full session report: Comprehensive analysis and detailed insights

Jeanna Matthews, co-chair of the ACM Technology Policy Council, opened the session and introduced the panel. Virginia Dignum began by noting that AI-safety discussions are often limited to technical notions such as model alignment, red-teaming and benchmark performance, which obscures the broader question of when AI creates societal value versus harm. She emphasized that impact depends on deployment context, governance capacity, incentive structures and the lived realities of affected communities [14-19].


Dr Lourino Chemane then presented Mozambique’s emerging national AI strategy as a concrete illustration of a broader safety agenda. He defined safety as the protection of people rather than merely of systems and argued that AI governance must prioritise human, social and institutional impacts, drawing on multidisciplinary input from law, ethics, education, labour and the communities themselves [30-34]. Continuous human oversight and institutional accountability were highlighted, together with specific safeguards for children, women and youth [35-38]. Mozambique is drafting a national AI strategy, a data policy, a cybersecurity strategy and new regulations on data-centres and cloud computing to preserve national sovereignty and democratic processes, and is revising its interoperability framework to ensure that AI adoption in public administration improves efficiency and service delivery [41-48].


Virginia thanked Dr Chemane and invited Dame Wendy Hall. Hall opened by criticizing the summit’s scale and the superficial nature of its debates, pointing out that despite the rhetoric of “all-inclusive” AI, women were virtually absent from the panels [78-84]. She noted that she had just returned from a UN advisory board meeting where a new scientific panel is being formed, linking the summit’s work to UN-level initiatives [88-103]. Hall argued that ethical AI requires systematic monitoring, longitudinal studies and the creation of a new discipline she called “AI measurement” (or “AI metrology”) to study “social machines” – socio-technical systems that emerge from the interaction of technology and society [88-103]. She also warned that poorly designed policy interventions, such as Australia’s ban on social-media use by under-16s, can produce unintended consequences when young people circumvent restrictions [89-103].


After the first round, the panel reconvened for a second round, splitting into two groups due to limited chairs.


Yannis Ioannidis distinguished between the safety of the AI technology itself and the safety of its use, likening AI to a car: the technology is not inherently unsafe, but safety issues arise from how humans select training data, configure inputs and deploy outputs [108-119]. He called for interdisciplinary regulation that spans the entire pipeline, from data collection to model deployment, and for the involvement of the humanities, philosophy, law and cognitive science in governing AI [120-124].


Sara Hooker reflected on the evolution of safety discussions, noting that the conversation has become more precise since the early Bletchley Park-era focus on existential risk, yet it remains vague and “blanket”. She urged the community to openly disclose the trade-offs made in model development, especially regarding language coverage and untested safety parameters, arguing that transparent reporting is essential for accountability [141-154][165-176][170-175].


Jibu Elias shifted the focus to the extractive dimensions of AI. He described how tribal languages in Indian states such as Telangana, Chhattisgarh and Jharkhand are absent from major models, and recounted a data-centre project that depleted groundwater and manipulated local leaders, exemplifying the environmental and socio-cultural harms that can accompany AI infrastructure [192-206][207-214]. He also raised concerns about “AI psychosis” and the exploitation of annotation workers, questioning whether the current trajectory will continue to be extractive [215-224].


Neha Kumar reinforced the need for genuine inclusivity. Drawing on feminist and design studies, she urged the panel to ask “who is making decisions?” and “who is being benefited?” and to move beyond buzzwords to concrete practices that involve women, the elderly and the poorest. She warned that development agendas have historically failed many communities and that AI safety must therefore incorporate lessons from development studies to avoid repeating past injustices [285-293][294-302].


Merve Hickok placed the discussion in historical perspective, asserting that safety does not happen automatically and that dominant narratives are shaped by the powerful. She called for a narrative shift that foregrounds human rights, democratic values and citizen participation, insisting that civil-society voices must be amplified to prevent a repeat of past exploitative patterns [271-283]. During the audience Q&A, a participant asked whether dataset and model cards could be extended to multiple languages and cultural contexts. Merve responded that governments could require multilingual documentation to ensure transparency across diverse populations [303-310].


Rasmus Andersen, advising leaders at the Tony Blair Institute, highlighted the need for long-term, evidence-based planning. He urged governments to anticipate the AI landscape of 2030-2035 and to embed safeguards that protect citizens, noting that safety often falls low on leaders’ immediate agendas but must be integrated into strategic foresight. He drew analogies to risk-reduction practices in nuclear safety and aviation, citing the 200- to 1,000-fold crash-rate reduction achieved in aviation as a model for AI risk mitigation [250-256].


Tom Romanoff, director of policy for ACM, explained how the organisation translates technical recommendations into policy advice. He introduced the “51 % rule”, observing that political change typically occurs once a majority of legislators or board members support regulation, and urged participants to move from moderate positions to active advocacy, educating the public and pressuring politicians to enact concrete AI-safety laws [326-340][345-357]. He also noted that ACM will launch its first journal dedicated to AI measurement/metrology, providing a scholarly outlet for the emerging discipline [88-103].


Jeanna Matthews (co-chair of the ACM Technology Policy Council) then posed a provocative question: does history show that AI will automatically benefit everyone, or must we impose enforceable “musts” rather than rely on good intentions? She suggested that without binding obligations, the promise of wellness for all will remain aspirational [267-270].


Returning to the core theme, Virginia asked the panel to consider how the discourse can shift from a purely technical approach to a broader, inclusive, societal-institutional perspective, echoing the earlier points raised by Dr Chemane and Dame Hall [104-105].


Points of emphasis


* The majority of speakers emphasized that AI safety cannot be reduced to technical robustness alone; it must incorporate institutional, economic and political contexts, multidisciplinary governance and continuous monitoring [14-19][30-34][108-124][88-103].


* Several speakers highlighted the importance of long-term, evidence-based planning, drawing analogies from nuclear and aviation safety to illustrate how systematic risk-reduction can be achieved [250-256][326-340][345-357].


* Several participants called for active citizen advocacy to move from “shoulds” to enforceable “musts” [271-283][327-357].


* Transparency through model and dataset cards that disclose language coverage, omitted safety tests and multilingual documentation was repeatedly advocated [170-175][364-368][303-310].


Areas of disagreement


* Responsibility for safety: Virginia and Hall stressed institutional monitoring mechanisms, Tom focused on legal enforcement of AI outputs, while Yannis highlighted regulation of both inputs and outputs, leading to divergent views on the primary regulatory lever [14-19][88-103][108-124][326-340].


* Feasibility of universal safety rules: Sara questioned whether a one-size-fits-all approach is realistic, whereas Tom’s emphasis on clear legal standards implied a more uniform framework [143-145][326-340].


Key take-aways


1. Safety must be understood beyond technical robustness.


2. A clear distinction is needed between safety of the technology and safety of its use.


3. Multidisciplinary governance is essential.


4. Inclusive and diverse decision-making bodies are critical.


5. Transparent reporting of model limitations and trade-offs must be mandated.


6. Longitudinal monitoring and AI measurement (AI metrology) are required.


7. AI infrastructure has socio-environmental impacts that need sovereign oversight.


8. Political will often hinges on a 51 % threshold, demanding public education and activism.


9. Analogues from nuclear and aviation safety offer useful frameworks for AI risk reduction [14-19][108-124][30-34][78-84][170-175][88-103][250-256][327-357].


Concrete actions identified


* Mozambique will finalise its national AI strategy, data policy, cybersecurity strategy and related regulations on data-centres and cloud computing [41-48].


* ACM will launch a new journal dedicated to AI measurement/metrology [88-103].


* The panel agreed to draft a collective report or model within the next year to guide future work [375-378].


* Governments are urged to require dynamic, multilingual model and dataset cards as part of regulatory compliance [364-368][303-310].


* Activists are encouraged to educate the public and lobby for enforceable AI-safety legislation [327-357].


Unresolved issues


* How to operationalise inclusive governance structures that meaningfully involve women, children and tribal groups.


* How to balance enforcement on AI outputs versus underlying technology.


* Funding and sustaining longitudinal AI-metrology studies.


* Mitigating environmental harms of data-centre construction.


* Pathways for legal accountability (criminal liability, lawsuits) for AI-induced harms.


* Developing a shared framework for articulating trade-offs between performance, safety and societal values [285-293][364-368][271-283][250-256][326-340].


The session concluded with a call to “insist” on safety rather than waiting for automatic outcomes. Jeanna Matthews summed up the urgency: without collective insistence, wellness for all will not materialise [359-363]. Virginia thanked the participants and pledged that the discussion would continue, aiming to produce a concrete measurement framework over the coming year [375-378].


Session transcript: Complete transcript of the session
Virginia Dignum

Thank you. Thank you. If you just want to stand here in front, they want to make a picture of all of us. Yes, you have to sit there. Okay. Good morning, everybody. Thank you very much for being here. My name is Virginia Dignum. I will be co-hosting this session with my colleague Jeanna Matthews there. We both are the chairs of the… Technology Policy Council of ACM. And today we are here to discuss how to move beyond technical safety, looking at aspects of multidisciplinarity, governance, and real-world impact.

Across global AI discussions, safety is too often framed in technical terms: model alignment, red teaming, benchmark performance, frontier containment, and so on. These tools matter, and their further development is crucial. But they don’t address the core question, or at least one of the core questions: what determines whether AI systems produce human and societal value or harm in real deployment contexts? That’s what we are going to discuss in this session. AI systems, as we all know, do not operate in isolation. Their impact is shaped by deployment context, by governance capacity, by incentive structures, and by the lived reality of the communities that use and are impacted by these systems. As such, AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique.

They fail, or they produce harm, because they are embedded in institutional, economic and political systems. So we will have an open discussion with the panelists; it will be two rounds of panelists. And I would like to start by inviting Dr. Lourino Chemane, who is the chairman of the board of the National Institute of Information and Communication Technology in Mozambique, where he is at this moment leading the national strategy on AI for Mozambique. Please.

Lourino Chemane

Thank you. I would like to start by giving thanks for the invitation to join this panel, and also by congratulating the government of India for hosting this AI Impact Summit. Going directly to the topic of this panel: as part of our exercise of crafting the… the national AI strategy, we looked at this topic of safety, and for us, safety… we worked on it from the policy formulation point of view. For us, safety is the protection of people, not only systems. So we say that AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment. We also look at it from the angle of multidisciplinary governance, grounded in the real-world context of use of AI.

For us, effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities; so, the inclusion of the people, and how they will feel safe in using these technologies. We look also at continuous human oversight and institutional accountability. People must know what’s in the black box: how the systems are designed, whether they’re functional or not functional, and whether the factors that are affecting their lives, the decisions made by the algorithm, have taken their feelings into consideration in the design phase. We also look to the protection of children, young people and women. From the studies that were conducted, women, children and youth are the first victims of the bad application of AI.

We also look to the ethical and social assessment. Mozambique is one of the pilot countries adopting the UNESCO principles of ethics in AI, and we are looking also at the dimensions defined by UNESCO in this perspective. Sharing what we are doing in the country now: in Mozambique we are drafting, as I mentioned, our national AI strategy with the support of UNESCO, and I thank Professor Virginia, who is the leading expert in our team, as well as the other experts from UNESCO for their contribution. We are also drafting our data policy and its implementation strategy, because we believe that data is a fundamental element for AI systems. We are reviewing our national cybersecurity strategy; the data that we are collecting now show that there are already cybersecurity-related problems arising from the early use of AI models.

We just adopted in Mozambique the regulation for the construction and operation of data centers and also the regulation for cloud computing, because we believe that infrastructure is a fundamental and key element for the sovereignty of our country when it comes to safety, but also, from the policy point of view, for the democratic system and all other dimensions. But we also look at it from the digital government point of view. So we’re reviewing also our interoperability framework that’s related to data, to make sure that in adopting AI in the public administration, we address our main objective of improving efficiency and efficacy in delivering public services. For us, these are the elements that will be contained in the overall digital transformation strategy that, if everything goes as planned, will be approved by our government during this year.

We are learning a lot in this summit and gathering important elements that will help us to uplift and improve our work in crafting these elements. Thank you for the opportunity to be part of this session.

Virginia Dignum

Thank you very much, Dr. Chemane. I understand that you have to move to another session, so feel free to leave whenever you need to go. We understand the complexities of the program. Now I would like to ask Dame Wendy Hall, Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institute at the University of Southampton, and also a former member of the United Nations high-level expert advisory body, to give us some provocative statements. They will be. Good. Provoke us.

Dame Wendy Hall

I’m fed up with just toeing the party line. So I will… I have to first apologize, because I have to leave at 11. I’m supposed to be on three panels at the moment, and I also have a lunch date at midday in town. So, that’s my morning. I want to say, I think, three things. One is, what’s really… Four. If you know Monty Python, nobody expected the Spanish Inquisition. Anyway, so first of all, it’s been wonderful to be in India. I love India, and I have a love-hate relationship with this summit. It’s too big. There’s too much going on, and not enough actual real debate about the core. There’s going to be some sort of platitude statement come out today.

Yeah. And I’ve just come back from the UN, where our advisory board and the new scientific panel got together. They’ve got a panel going on. At the moment, the dialogue that’s starting in… the dialogue that’s starting in the AI for Good conference in Geneva in July, we hope will be a real dialogue. I don’t know what form it’s going to take yet. But we have to knock the world leaders’ heads together. Now, I’m going to say something which also really struck me. Is that working? Yes? At this conference. Everyone’s, I love, you know, in India, AI means all-inclusive. But 50 % of the population weren’t included yesterday, the women. Right? There were no women.

The CEOs of every country, every company; there was one lady CEO, from Accenture, I think. There were a couple of ladies on the panels at the end. It was all men, the alpha males of this world. Right? The world leaders that spoke, the CEOs that spoke: this world is dominated by men. And my mantra has always been, in terms of the lack of women, and some other diversity points as well, but mainly women: if it’s not diverse, it’s not ethical. People don’t really understand what that means. What that means is, if you haven’t got a diversity of people discussing a problem, how are you going to actually sort out the biases? If you haven’t got women at the top level making these decisions, trying to set up the guidelines? I mean, your comment was, yeah, we want to make sure for the safety of women and children. Well, let’s include the women and children in the discussions. My third point is that we are watching, I mean, I’m very into watching these experiments, I did it all through the web, and we need to learn how to monitor what’s going on so that we can say what is the right direction to go in the future.

It means collecting data and evidence and doing longitudinal studies, and it takes time. But take, for example, what Australia is doing with social media. We’ve heard at this conference several other… for teenagers. I mean, didn’t Macron… Who was there yesterday? Macron said under 15 in France. Our Prime Minister, who constantly changes his mind, so I don’t suppose it will happen, but he’s talked… Sorry, that’s a joke for any Brits in the audience, but there aren’t many. He’s saying 16 in the UK, some out of Spain saying 16. There will be unintended consequences of that. Making a ban like that without thinking about the nuances of… Well, what happens if… Well, first of all, the kids are ingenious enough to get round it.

And then they’re back on the dark side of things again, even worse than before, because they’re doing it in… secret. What happens when they start to use social media? How do we train them to do it properly? My worry about a ban like that, I mean, it’s very brave of Australia to do it first, and we can watch; they’re saying in six months’ time they’ll have some evidence of how many under-16s are still on social media. But the behavioural issues take much, much longer to explore than that. And we have to get over this fact that, whilst the technology is going on apace, because the alpha males are driving it without, you know, just worrying about technical safety maybe, we can’t say, well, it’s all going too fast, we can’t do anything. We have to study this stuff. And I think this is what I want the ACM to do. I talked at my keynote talk, this is my last point by the way, my keynote talk on whatever day it was, Wednesday, on the main stage, about two things happening in the UK. One is our National Physical Laboratory, which is the sort of equivalent of NIST in America; it has just launched, with government backing, a centre for AI measurement. And the AI Security Institute in the UK and the other security institutes that are growing up around the world, that network is now being called, largely driven by the US because Trump doesn’t want to call it anything to do with safety, I can’t believe I just said that, but then he was the man that drank bleach in Covid, they’re calling their network the network for AI measurement. And I think this is a breakthrough. I mean, I love AI for science, but we need to think about the science of AI. And that’s a socio-technical thing, and I’m starting to call these things social machines, as we did on the web; that came from Tim Berners-Lee, the idea of technology and society coming together to create artefact systems that wouldn’t have existed if they hadn’t come together. The technology doesn’t understand society at the moment; most of society doesn’t understand this technology. But together those two systems will create socio-technical systems, or social machines, and I want to build a science of studying social machines, and it will be called AI measurement, or AI metrology. I love that word, I’ve learnt to say it. It’s a cool script, everything’s Greek to us. I love the yogurt, don’t you love Greek yogurt? So sorry, I’m finishing there: AI metrology. And we’re going to launch, I’m chair of the ACM publications committee, or co-chair, he’s president, we’re going to launch a journal, the first journal in this area, and it will be associated with pulling together and sharing the data that people are collecting.

Virginia Dignum

Thank you, Wendy, a very important point. And I think you can leave it there; again, whenever you have to leave, just leave, we understand. So, for the rest of us on the panel: we started the session talking about how AI safety needs to be more than just technical robustness, and I love your idea of social machines and of AI metrology. Now I would like to bring you into the discussion. Both Dr. Chemane and Wendy Hall gave us examples of issues that we really need to include if we are to go beyond this idea of technical robustness. Even if systems perform exactly as they have been designed, and have been designed safely, they will probably still cause harm, and that is not just a technical failure but also a failure of inclusion, a failure of imagination. So I would like to get your opinions: where do you think we can start changing the discourse from a purely technical approach to a broader, inclusive, societal and institutional approach to AI safety, AI measurement, and so on?

And I would like to put this question, which is for all of you, starting with Professor Yannis Ioannidis, the current president of ACM and a professor at the University of Athens.

Yannis Ioannidis

Thank you very much for having me on this panel. I'm a technical person, very sociable, but technical; that's my expertise. So I want to separate the issue of the safety of AI from the safety of AI use. For me, in my technical mind, there is the AI technology, which is the algorithms, the models, and so on, and then there is the use of this technology, the use of the software that is built on AI. And we are involved with this software both at the beginning, with the input that we give it, and at the output, when we create what is called an artificial intelligence, an agent, and so on, to do this or that or the other.

With the technology itself, there is no social issue of safety. It's like the car: whether it's working or not, there is no issue of safety there. And innovation in that regard has to be left free, like the human mind, for all the innovators to progress. Robustness and not having bugs are an issue there, but that's a walk in the park for us software engineers and computing scientists. The use is the important thing. Sometimes the key thing people talk about is the end result, the model: we put it in the judge's hands, we put it in the doctor's hands, we put it in the hands of young people in the case of social media, and so on.

This we have to work on, measure, and potentially regulate, and in any case all the sciences, as was said before, especially the humanities, philosophers, ethicists, legal people, cognitive scientists and so on, have to come together to address it. But there is also the input side, which is again humans doing it. Humans determine the first parameters from which the systems start to be trained. The data that we feed them: it is again humans who choose it. And as much as we have to regulate, measure, and think about the end result, the model, the humanoid or non-humanoid robot or the agent that is telling us to do this or that, with the same level of importance we have to think about what comes in. Different humans are using it; different humans are feeding it. And I think safety must start there. We should not let the input side run free, even at that level. We have to have the different sciences, the different technologies, and civil society represented there. Building an AI with whatever data we happen to have, or whatever data generates billion-dollar industries, is wrong. I mean, there is a right and a wrong here, and we have to be on the right side of it. So, as a quick wrap-up, so that others can express their opinions: technology should run free, but both the input and the output, the result, should be under oversight.

Virginia Dignum

Thank you, Wendy. Thank you, Dr. Chemane. See you soon. Okay, let's continue the discussion. Sara, Sara Hooker, you are the co-founder and president, I believe, of Adaption Labs, a very young company, and you were before with Cohere and other AI development organizations. What do you think about this balance, or tension, between technical robustness, the technical safety measures, and the need to understand more of the environment, the social context, in which systems are built? And how can technologists, those who develop systems like yourself, stay aware of this tension, and of the insertion of these systems into very concrete, real-world domains?

Sara Hooker

And typically it's been: how do you build extremely large systems at the frontier of what's possible. I think it's interesting. I'll share a few things. So one, I think what Wendy was getting at is that one of the biggest signals of whether you actually care about safety is what the forms of prestige and power look like. I think that was the core of her comment. She's saying: we are at the pinnacle, where we all gather to discuss these things, and the way resources have actually been allocated doesn't show that people are serious, which I think is fair. You have to look to the surrounding environment to understand whether people are serious about safety or whether it's just a panel title, candidly.

And maybe today it's just a panel title. In general, my philosophy about these forums is that you have to look six months out to actually get a signal of what has happened. That doesn't mean they're not critical. I frankly don't know whether the expectation should still be that we will have universal rules for AI. It's not clear to me that that should be the outcome of these forums. So if you go in with that expectation, you're going to be very disappointed, because I don't think that's going to happen at this forum or at the next one. But I do think it's worth asking: where are we going as a conversation about safety, and how precise is it?

Because for me, that's the most interesting part. Time is very valuable; it's our most precious resource. And so, for me, the more precise the conversation, the better. If I look at the overarching arc from Bletchley to now, we've had four summits, and we'll have a fifth. It's worth asking: has the conversation become more precise? Candidly, and thank goodness, yes. I still remember Bletchley, where it was all about existential risk six months out, and there were protests and hunger strikes by people who thought machines were taking over, but there was no precision to the conversation, no accountability for where those timelines were coming from. And then I look at now, and we have a very messy conversation about safety.

Certainly everyone has a different view. It's still a blanket term, but at least it's more accountable to the real-world impact of these conversations and of the technology we build. Because when I started my career as a computer scientist, we were just in research conferences. I think the fact that ACM is so well represented on this panel speaks to the origins: a very narrow group of people working in a very academic community, and now our technology is used everywhere. So it's a much more important conversation to have. So, one, I think we have become more precise, but it's still very murky what people mean. Here's the other thing I'll say.

I think there's often a desire in these conversations, where the technical meets the ecosystem, to say that safety has to be everything to everyone. And, frankly, that's not a precise conversation either, because the truth is there are trade-offs. When you build systems, there are trade-offs. Too often, when these conversations enter this arena, there's a misconception about the sheer difficulty of actually imposing constraints on these systems. So the other thing I'll say is that the biggest thing that has to come out is an understanding of what you give up, because you do give up something. The big things for me: I work a lot on language, and my big ask is simply that model providers report what languages they cover.

Report what the safety parameters are, and report what they don't cover or haven't tested for. This sounds like a simple ask, but I think it's actually quite precise. What it establishes is: what have we given up? What are you confident about, and what have we given up? There are many versions of this, but too often in conversations like this we end up just circling around, saying we want safety, we need the perspectives of everyone in the model. And the truth is that's also a naive statement, because it is almost certain that there will be some trade-off. Someone will not be represented.

Someone will be represented. And actually, what I think these forums are very useful for, having us all at the same conference, is galvanizing ecosystems where you can make your own constraints and trade-offs, but also having a discussion about the models being shipped to serve billions of people. We have these static, monolithic models that are served the same way. What are the trade-offs that they have made? As someone who has built these models, I can say there are almost certainly trade-offs in place. So we need to understand the state of the world as well as where we want to go. And it's okay if things are clearly left out.

It’s more that they have to be stated out loud. That’s my wish list, yeah. So maybe I’ll leave it there, and I’ll pass it on. I think you were next. Go for it.
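To make Hooker's ask concrete: below is a minimal sketch, assuming a hypothetical disclosure schema. The `CoverageDisclosure` class and every field name are invented for illustration; they are not any provider's actual reporting format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a per-model coverage disclosure: which languages
# were safety-evaluated, which ship untested, and what was knowingly given up.

@dataclass
class CoverageDisclosure:
    model_name: str
    languages_evaluated: list[str]        # languages with safety evaluations run
    languages_unevaluated: list[str]      # languages served but never tested
    safety_tests_run: list[str]           # e.g. categories of red-teaming applied
    known_gaps: list[str] = field(default_factory=list)  # stated trade-offs

    def summary(self) -> str:
        """Render the disclosure as a short, human-readable statement."""
        return (
            f"{self.model_name}: safety-evaluated in "
            f"{len(self.languages_evaluated)} language(s); "
            f"NOT evaluated in {len(self.languages_unevaluated)}; "
            f"known gaps: {', '.join(self.known_gaps) or 'none stated'}"
        )

# Invented example values, purely illustrative.
card = CoverageDisclosure(
    model_name="example-model-v1",
    languages_evaluated=["en", "fr"],
    languages_unevaluated=["sw", "am"],
    safety_tests_run=["toxicity", "jailbreak red-teaming"],
    known_gaps=["no evaluation in low-resource languages"],
)
print(card.summary())
```

The only point of the sketch is that "what we did not test" is a first-class field, so omissions are stated out loud rather than implied.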

Virginia Dignum

Thank you very much. Thank you, Sara. And indeed, next: Jibu Elias. You are a researcher, but you are also an activist who examines how technology and innovation institutions receive knowledge, labor, and legitimacy. So help us make sense of what AI safety means for society; that seems to be what you do.

Jibu Elias

I was more interested in the real-world consequences of the panel title, but wonderful contributions from Sara and Wendy and everyone here. When I look back at how technology has shaped my understanding of the world, I feel like an idiot, because I grew up watching animated shows like The Jetsons and all these futuristic shows, believing that the more advanced technology gets, the better our world will be. I grew up as an idealist kid who thought that when AI came there would be no inequality; I was an AI kid back then. And nowadays, when I look at these things... I mean, there has been phenomenal work done by computer scientists like the people present here on the panel, Sara and everyone, right?

On the technical aspects of things. But more and more, we are seeing AI become political. It's becoming a larger sociopolitical construct in general. And what concerns me most is its exploitative and extractive nature. Sara mentioned Bletchley, where the talk was all about existential risk; now I think we are at a point where we agree that the accumulated risks have become more worrying. I've been tracking the people who use these tools, the people impacted by them, and those excluded from the benefits of this kind of technology. If you go around states like Telangana, Chhattisgarh, Jharkhand, there are large tribal populations.

Their languages are not represented in Gemini or anything like it, right? And I know everybody wants to impose Hindi on all of us, but sorry, Hindi is not the national language of India. What about them? How do they get access? So more and more, what I'm seeing is the socioeconomic divide becoming wider, especially in countries like India. And it's fascinating that we've been celebrating the data centers we've been building. I had first-hand experience of a data center that is much celebrated in Telangana, in a place called Make a Good. I don't want to mention the company associated with it, but how it was built, how the people were manipulated, how the groundwater was being extracted, right?

In a place where there is water scarcity. And when I asked the company, "Hey, this happened" (I have a close association with that organization), they said, "We interacted with the community leaders." So what did I do? I reached out to the sarpanches. They had no idea what the company meant. So essentially, I mean, in India we know what "reaching out to community leaders" means: bribing the politicians. But those are the larger things I'm worried about. And then there are the people who are using this technology. Some people are now talking about terms like AI psychosis; I don't know how valid those terms are. But it's fascinating: the executive director of the Mozilla Foundation and I have been chatting about how elderly people are using these models.

It's very fascinating and worrying at the same time. We often put our attention on younger folks; it's funny and serious at once, but still. So my larger question is about where we are headed. Yesterday the gentleman from the US was telling us that everyone should use a US AI stack; I think people in Denmark will have a good idea of how the US treats its strategic partners. So my larger question is: where are we headed? Are we still going to have this extractive nature, with the data annotation workers who are building these models? I will stop here, looking forward to the next round of conversation.

Virginia Dignum

Unfortunately, we are at the second round of the panel, and, like everything we all complain about, it will happen here too: we will each say our piece, and the dialogue will need to happen outside in the corridor. We really hope, after this meeting, to try to combine all that has been said into some kind of ask, or report. But anyway, now we are moving to the second part of the panel. We were all going to be on the same panel, but there weren't enough chairs, so we are splitting into two. Patience with us.

Okay, everyone, thank you so much for being here for the second part of our session, and thank you to all the panelists joining me here on stage. I think we're going to do something a little different from what the first panel did: I would like everyone to just quickly introduce themselves. Neha, would you start?

Neha Kumar

Hello, check, okay. Hi everyone, I am Neha Kumar, and I'm an associate professor at Georgia Tech in the School of Interactive Computing. I'm also president of SIGCHI, the ACM Special Interest Group on Computer-Human Interaction. This summit is really a coming together of many different worlds for me. First, I actually grew up in Delhi, so it's been about coming home. Second, a lot of the conversations we've been having here are conversations that are very active right now in the discipline of human-computer interaction, HCI, as some of you might know it, and it's great to see how central human-centricity is to what we've been discussing.

And third, something much closer to my own area of study is looking at HCI and technology use in the context of social impact. This has been named in many different ways over the years: social good, social impact, societal impact, public interest, whatever you want to call it. But really, it's an area we had been studying for many, many years before AI was on the scene. So I would say we're looking at multidisciplinarity on this panel, and to me there's a lot of learning that could come from the many disciplines that have been actively looking at these questions, granted that the platform we're looking at now is different.

It’s unprecedented in many ways. At the same time, there’s a lot that we have to learn from as well. So I’ll stop there.

Virginia Dignum

Thank you, Neha. Thank you. Merve Hickok?

Merve Hickok

I'm the president and policy director of the Center for AI and Digital Policy. We are an independent think tank working globally at the intersection of AI policy and human rights, democratic values, and the rule of law. So I would like to take a more expansive view of safety, and of governance at large. More to come on that. Thank you.

Virginia Dignum

Rasmus?

Rasmus Andersen

Yes, I think this works now. My name is Rasmus Andersen. I work with the Tony Blair Institute for Global Change, where I advise leaders around the world, at prime ministerial or presidential level but also at line-minister level, on navigating AI: what it means for them, how they can both deliver results to citizens with AI and also avoid harm to their citizens. So the question of safety comes up a lot, but it's usually not at the top of leaders' minds, and for me it's really about helping them realize their long-term, informed self-interest: what is the world likely to look like in 2030, in 2035?

How can you best make sure that your country and your constituents and citizens are in the best possible position as the world changes very rapidly? Thank you.

Virginia Dignum

Tom?

Tom Romanoff

Is this one working? Great. I am not James; I am Tom Romanoff. I am the director of policy for ACM, where I help manage the policy committees; Jeanna and Virginia chair our global committee. We also have regional committees across the world, including the United States, Europe, Asia, India, Africa, and the APEC region. My job at ACM is to help the computer-science folks translate their recommendations, on harms or issues they see in the technology, to policymakers, and to engage those policymakers on behalf of ACM. Before that, I was at a think tank in Washington, D.C., where I worked with Congress, and I have been working in tech policy for many years now.

Jeanna Matthews

Okay. So in the interest of time, I'm going to get right to a very provocative question. We've been hearing "wellness for all, happiness for all" in the presence of a fairly extractive and exploitative potential. Does history tell us that it's going to be great for everyone, that it just works out? Or do there have to be some musts, not just good intentions and shoulds? If we are not seeing things like recovery, retribution, remuneration, if we don't see people going to jail when they do bad things with AI, are we serious about AI safety?

Merve Hickok

So no, history does not show us that it's going to be fine. And history is definitely a good indicator here, which means that we need to fight harder this time around and try to raise the bar. History is always the story of the powerful, of the winner: who gets to decide the narrative. And we are seeing that again today, in the narratives around what safety is, what the evaluations should be, where the money should go, whether we should regulate or not, whether it should be a should or a must. It is always the narrative of the powerful. And as Dame Wendy Hall mentioned, the representation was very much the same kind of people throughout the higher-level conversations yesterday.

So I think first and foremost the narrative needs to change in safety as well. So far, the most prominent safety issues have been around nuclear, cybersecurity, chemical weapons and so on, or existential risk, which is another story. Maybe we should talk about those. But there are real consequences right now for people's rights, their freedoms, their ability to live with dignity, and their right to participate in democracy and democratic processes. All of these are being undermined, and as an organization with those three issues (human rights, democratic values, and the rule of law) in our mission, we are seeing them more and more under pressure. So this is the time to raise your voices, as citizens, as consumers, as professionals in your own right, and try to change the narrative.

Because otherwise it’s going to just be a repeat of history.

Jeanna Matthews

Well said. Neha?

Neha Kumar

Yeah, I think coming back to something that Wendy said, right, about being all -inclusive at the same time as having no women around in decision -making places, I think that that is something we should really be thinking about. I mean, do we have a history of being inclusive? What inclusivity have we been practicing in our innermost circles? It’s easy enough to say that the poorest of the poor should have access to this AI, but how are we doing on being all -inclusive? So I think there are lessons from disciplines such as feminist and women’s studies that we can learn from to really ask the who question. Who is making decisions? Who is being benefited? Who is part of the design process?

That's one. Second, learning from design, which is one of the disciplines I've trained in: zooming out is great, and that's where we have value. We talk about inclusivity, we talk about diversity, we talk about all these great-sounding words, but then, when we zoom in, what are we actually doing? I think a lot of the dialogue we've been having is in this disembodied state where we talk about infrastructure, and data, and interoperability, and processes, but who is benefiting? The panelists before me also talked about aging, about people who are more vulnerable: where are they in the conversation?

And lastly, with regard to development studies: what are the benefits of development, really? We want development and impact; that's what we've been talking about here for five days at the summit. But we know from historical perspectives that development hasn't worked out so well for so many people and so many countries across the globe. How do we make sure that we don't repeat those same mistakes? These questions have to be very much part of the conversation, so that it's safety of the human, of the body, of our values, of our communities and the social structures that are so critical to us. Thank you.

Jeanna Matthews

Rasmus?

Rasmus Andersen

Yeah, I think we're not seeing people go to jail; I'm not sure we have yet seen something where that's clearly the case. There are lawsuits ongoing, on suicides among young people, et cetera. But I do think we will see a moment pretty soon when something goes quite wrong, and then we're going to have to decide what we do about it. Some people, and this is a very dark parallel, said we needed World War II to get the UN and the other systems that were put in place to avoid it happening again. And I think it's a matter of time before we get something like that, and we will have to make those decisions.

And currently I'm not super confident that we will interpret those events correctly, that we will have a realistic view of what might change and how we might prevent them from happening again. It could be people leveraging these systems, organized crime. Very recently, we successfully got Elon Musk and Grok to stop allowing people to create non-consensual nude deepfakes, which had happened in the millions. So that's not small, but we'll have much bigger things than that. And I do think that when that happens, we will have to weigh pros and cons, costs and benefits. When we regulate things, we don't regulate risks down to zero.

When you get into a car, there's a risk something will happen, but you still need to get places. And with safety we do have to take some of the same lessons, as Merve mentioned, from nuclear, from aviation. It used to be that when you got on an airplane, something like 200 to 1,000 times more of them crashed than today, and we've brought that level of risk very far down. And I do think that the political level, while it needs technical inputs, is the only force in the world that can really take all those considerations together: the partial perspectives that technical people have, that civil society has, that industry has.

Really, the only place it all comes together, however imperfectly, is government, and that's why it's so important that we are here, however imperfect these summits are.

Jeanna Matthews

Tom?

Tom Romanoff

All right, something a little different. I would like everybody in the room to raise their hand if you think safety is an important aspect of AI deployment. Great. Keep your hands up. Now, take your hand down if you think that safety should be enforced on the outputs of AI. Oh, wow. Okay. Take your hand down if you think that laws should apply to the outputs of AI rather than to the AI itself. All right, you can put your hands down; it wasn't as dramatic as I thought it would be. So I'm going to talk a little bit about the 49-51% rule. Across all political spectrums, no matter where you are in the world, there's this idea that you need 51% of the political willpower to start passing regulations; 49% won't get it done.

It applies in the business world as well: if you have 51% of the board control or of the equity in a company, you basically control that company, right? Lobbyists have an extreme incentive not to let anybody get past that 51%, to keep things at 49%, in order to block action in the political space. So for all of our governments here, there are private entities (I don't want to dismiss the private sector, because they're important) that would like to shape action in the regulatory space. And it's not until 51% of those politicians, or of that political will, reaches the threshold that you start seeing changes. You see examples of that in the case my colleague here mentioned: deepfakes, or nudification applications, causing worldwide outrage.

And you started seeing governments across the spectrum say: that's something that at least 51% of our population does not want. And so they start moving toward regulating, or enforcing current laws, to punish that kind of action. I say all this because there is also this conversation around moderates. We don't know where the technology is going. We have computer scientists and civil society screaming about the need for action, for security within the stack. And the rest of the world are moderates: they're still engaged, still using this AI, still figuring out what it can do. And it's not until some kind of action happens, some kind of consequence, some kind of...

issue happens that people wake up to the folks who have been screaming about it for years. So what I encourage everybody here to do is: don't be a moderate. Pick a side and start encouraging your politicians, your family, your community. Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person, in a way they can understand. That's when you're going to start seeing the regulations roll out.

Jeanna Matthews

I think that's a great place to end, because we are not going to get happiness for all and wellness for all unless we insist. We're all going to have to insist; it's not going to come automatically. So asking each of us what we are going to do to insist is a really good place to end. We started this session a little late, but I've been told that they would really like us to try to end on time, so I will leave it there. We would love to engage you in conversation out in the hall after this session is over. Thank you to all the panelists in the first session, and also all of us up here. Thank you so much.

Thank you all. Indeed, I think there is actually time for one or two questions. Now there are too many questions; I have to choose. Okay: the gentleman there, and the lady there.

Participant

A very short... it's not a question, it's a suggestion for the gentleman with the beard on that side, whose name I missed. Jibu: go take a look at Sarvam. I think that fear of Hindi and other languages being imposed is going to die very soon; nobody will impose anything down the line.

Thank you so much for the provocative discussion; this is what I was hoping to get at the India AI Impact Summit. My question is about how regulatory artifacts like dataset cards, model cards, system cards, rigorous evaluations, and user feedback can now be extended to cover multiple languages, multiple contexts, and multiple cultures. I think a lot of hard work…

Merve Hickok

…being used as well. A model might perform really well in English, but we know that these systems are not as safe or secure, and do not perform as well, in the many languages that are not English, or not as resource-rich as English. So, great question: these artifacts need to be dynamic, and they need to reflect languages. And I will also say, very briefly, following up on this: these are things that governments can require of model providers before they release models in your jurisdiction, and so far they are not. We could insist; we need to insist. California, for example, has started this. I just want to…
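As a purely illustrative aside on what "dynamic, language-aware" reporting could mean in practice: the sketch below assumes an invented per-language safety score and an invented regulatory floor; none of the values, names, or thresholds come from any real scheme.

```python
from typing import Optional

# Hypothetical per-language safety scores in [0.0, 1.0]; all values invented.
# None marks a language the model serves but that was never evaluated.
safety_scores: dict[str, Optional[float]] = {
    "en": 0.95,
    "hi": 0.81,
    "sw": 0.58,
    "gondi": None,
}

REQUIRED_FLOOR = 0.80  # assumed jurisdictional minimum, not a real regulation


def flag_gaps(scores: dict[str, Optional[float]], floor: float) -> list[str]:
    """Return the languages that are either untested or score below the floor."""
    return [
        lang for lang, score in scores.items()
        if score is None or score < floor
    ]


print(flag_gaps(safety_scores, REQUIRED_FLOOR))  # -> ['sw', 'gondi']
```

The only point of the sketch is that a disclosure regime of the kind Hickok describes becomes checkable once coverage is reported per language rather than as a single aggregate score.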

Virginia Dignum

I think we can just continue the discussion, and I hope we will; today is just a start. I also hope that, together with all the panelists, we will be able to create some kind of model for next year, around the measures, and that we will facilitate and continue this discussion. I would ask all the panelists of the first and the second rounds to stay here for a memento from the organisation, and I would like to thank you all for being here, and all the panelists again, of course. Thank you so much.

Related Resources: Knowledge base sources related to the discussion topics (11)
Factual Notes: Claims verified against the Diplo knowledge base (4)

Correction (high confidence)

“Gina Matthews, co‑chair of the ACM Policy Committee, opened the session and introduced the panel.”

The knowledge base identifies Gina Matthews as Chair of the Technology Policy Council of ACM and a co‑host of the session, but does not list her as co‑chair of the ACM Policy Committee.

Confirmed (medium confidence)

“Virginia Dignum noted that AI‑safety discussions often focus on technical notions and emphasized the need to consider societal impact, deployment context, governance capacity, incentive structures and lived realities of affected communities.”

The discussion’s aim to move beyond purely technical AI safety toward multidisciplinary governance is confirmed in the knowledge base.

Confirmed (medium confidence)

“Specific safeguards for children, women and youth were highlighted as part of AI safety considerations.”

The knowledge base stresses the need to protect vulnerable groups such as children, the elderly and others, supporting the claim of safeguards for children, women and youth.

Additional Context (medium confidence)

“Mozambique is drafting a national AI strategy, a data‑policy, a cybersecurity strategy and new regulations on data‑centres and cloud‑computing, and revising its interoperability framework to improve AI adoption in public administration.”

The knowledge base records Mozambique’s active participation in ICT‑related discussions and capacity‑building efforts, but does not provide details confirming the specific suite of AI‑related policies described.

External Sources (85)
S1
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — – **Jibu Elias**: Responsible Computing Lead for India from Mozilla Katarina de Brisis, Karine Perset, Paula Bogantes, …
S2
From Technical Safety to Societal Impact Rethinking AI Governanc — -Jibu Elias- Researcher and activist who examines how technology and innovation institutions receive knowledge, labor, a…
S3
S4
From Technical Safety to Societal Impact Rethinking AI Governanc — Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute of Government where I…
S5
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute of Government where I…
S6
S7
From Technical Safety to Societal Impact Rethinking AI Governanc — -Gina Matthews- Co-host of the session, Chair of the Technology Policy Council of ACM (mentioned by Virginia Dignum but …
S9
From Technical Safety to Societal Impact Rethinking AI Governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S10
https://app.faicon.ai/ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S11
From Technical Safety to Societal Impact Rethinking AI Governanc — -Gina Matthews- Co-host of the session, Chair of the Technology Policy Council of ACM (mentioned by Virginia Dignum but …
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S13
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S14
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S15
From Technical Safety to Societal Impact Rethinking AI Governanc — -Yannis Ioannidis- Current president of ACM, Professor at the University of Athens
S16
From Technical Safety to Societal Impact Rethinking AI Governanc — 628 words | 155 words per minute | Duration: 242 seconds. All right, something a little different. I would like everybo…
S17
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — All right, something a little different. I would like everybody in the room to raise their hand if you think safety is a…
S18
From Technical Safety to Societal Impact Rethinking AI Governanc — Is this one working? Great. I am not James. I am Tom Romanoff. I am the director of policy for ACM, where I help manage …
S19
From Technical Safety to Societal Impact Rethinking AI Governanc — -Dame Wendy Hall- Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institu…
S20
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Moderator:Thank you very much, Ivana. And as you say, new technologies create new problems sometimes, but they can also …
S22
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S23
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S24
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S26
From Technical Safety to Societal Impact Rethinking AI Governanc — -Sara Hooker- Co-founder and president of Adaption Labs, formerly with Cohera and other developing organizations
S27
From Technical Safety to Societal Impact Rethinking AI Governanc — Sara Hooker, co-founder and president of Adaption Labs, brought industry experience to questions of trade-offs and trans…
S28
Advancing Scientific AI with Safety Ethics and Responsibility — Evaluation must go beyond model‑centric metrics to include institutional practices, DIY science, and broader socio‑techn…
S29
Panel Discussion AI in Healthcare India AI Impact Summit — The path forward requires continued collaboration between technologists, clinicians, and policymakers, with safety and h…
S30
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Crampton argues that while pre-deployment testing remains necessary, the shift toward agentic AI systems that can plan, …
S31
Ethical AI_ Keeping Humanity in the Loop While Innovating — Thank you. Thank you, Deb. Okay. Thank you. Thank you for having me here. So, first of all, I’ll just go back to the top…
S32
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Efforts to address bias and promote equality through the inclusion of training data for protected minority groups, compa…
S33
Internet standards and human rights | IGF 2023 WS #460 — In conclusion, the lack of diversity in internet standards bodies, such as the IETF, is a significant concern. The under…
S34
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S36
AI as critical infrastructure for continuity in public services — Pramod argues that true data sovereignty goes beyond simply storing data locally. It requires having control over jurisd…
S37
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-bein…
S38
Towards a Safer South Launching the Global South AI Safety Research Network — Ms. Chair argues that developers fail to consider the diverse contexts of deployment, including high levels of gender in…
S39
MahaAI Building Safe Secure & Smart Governance — This comment introduces a critical gender and safety perspective that was largely absent from the technical and efficien…
S40
Agenda item 6: other matters — Mozambique: Thank you, Chair. Mozambique will speak out for national capacity. Mozambique delegation recognize that c…
S41
International Cyber Security Diplomatic Negotiations: Role of Africa in Inter-Regional Cooperation for a Global Approach on the Security and Stability of Cyberspace — Although the number of African States that adopted cybersecurity policies and strategies remain low (24%), there are…
S42
A V I S O — A Política Nacional de Segurança Cibernética, aborda aspectos legais e tecnológicos que visam proteger pessoas (c…
S43
How to make AI governance fit for purpose? — Lew emphasized the challenge that “the rate of change outside is greater than the rate of change inside,” requiring deli…
S44
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: I want to address this with an anecdote. Because I am Norwegian, I feel partly responsible here. I mean, I…
S45
From Technical Safety to Societal Impact Rethinking AI Governanc — Citizens must actively insist on safety measures rather than expecting automatic benefits Political action requires rea…
S46
AI Safety at the Global Level Insights from Digital Ministers Of — Policymakers need to determine how to implement targeted regulations that protect citizens without stifling innovation o…
S47
GOVERNING AI FOR HUMANITY — – 120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the internation…
S48
Advancing Scientific AI with Safety Ethics and Responsibility — The moderator emphasizes the need to design AI safety measures that maintain high standards of rigor while being practic…
S49
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 131. While most of the organizations under review make specific reference to organizational policies, they do not provid…
S50
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Coordinated enforcement across jurisdictions is deemed crucial for effective regulation. The EU’s Digital Markets Act se…
S51
© 2019, United Nations — Many LDCs lack appropriate legal and regulatory instruments to foster online transactions. A useful starting…
S52
Policy Guidelines — The long-term success of Open Access policies will be assessed by the amount of Open Access content they engender and ho…
S53
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S54
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S55
From Technical Safety to Societal Impact Rethinking AI Governanc — Call for model providers to report language coverage, safety parameters, and testing limitations as a transparency measu…
S56
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S57
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S58
Generative AI: Steam Engine of the Fourth Industrial Revolution? — In terms of regulating technology, it is suggested that focus should be placed on regulating use cases rather than the t…
S59
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S60
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S61
A Digital Future for All (afternoon sessions) — AI governance requires a multi-stakeholder approach due to the diverse nature of opportunities, risks, and inclusivity c…
S62
Ethics and AI | Part 1 — New technologies will always embed the risk of bad use. The champions of new technologies will constantly promise and pu…
S63
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — Over time, the ability of companies to resist requests or directives from governments has diminished. An example of this…
S64
Advancing Scientific AI with Safety Ethics and Responsibility — “Those we’ll put in a higher risk category compared to something which is just working, let’s say, on certain animals wh…
S65
From Technical Safety to Societal Impact Rethinking AI Governanc — Chemane argues that AI governance must prioritize human, social, and institutional impact rather than focusing solely on…
S66
From Technical Safety to Societal Impact Rethinking AI Governanc — Safety should focus on protection of people, not just systems, requiring continuous human oversight and institutional ac…
S67
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-bein…
S68
MahaAI Building Safe Secure & Smart Governance — This comment introduces a critical gender and safety perspective that was largely absent from the technical and efficien…
S69
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — Yeah. And I’ve just been come back from the UN. Our advisory board and the new scientific panel get together. They’ve go…
S70
Agenda item 6: other matters — Mozambique: Thank you, Chair. Mozambique will speak out for national capacity. Mozambique delegation recognize that c…
S71
Mozambique — Regulation is overseen by the National Communications Institute, and a National Cybersecurity Strategy is in place. Howe…
S72
Digital Government Master Plan — Planned activities: Data security and cyber security strategy and policy development and implementation. Data …
S73
Ethical AI_ Keeping Humanity in the Loop While Innovating — Impact:This historical perspective added urgency to the discussion and provided concrete justification for the EU’s risk…
S74
How to make AI governance fit for purpose? — Lew emphasized the challenge that “the rate of change outside is greater than the rate of change inside,” requiring deli…
S75
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — – **Jibu Elias**: Responsible Computing Lead for India from Mozilla Katarina de Brisis, Karine Perset, Paula Bogantes, …
S76
Towards a Safer South Launching the Global South AI Safety Research Network — “And so we need, in most cases, to ensure that only a handful of institutions should not define what risks are measured,…
S77
WS #283 AI Agents: Ensuring Responsible Deployment — Benotti argues for a future where third-party assessment of AI agents is built by and includes the most affected communi…
S78
The International Observatory on Information and Democracy | IGF 2023 Town Hall #128 — She emphasizes the need for different approaches based on the varying cultural, governance, regulatory capacity contexts
S79
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 3 — Mozambique: Distinguished Chair, since it’s our first intervention in this session, the Mozambique delegation commends…
S80
UNSC meeting: Scientific developments, peace and security — In this address to the United Nations, Mozambique’s representative emphasised the dual nature of scientific and technolo…
S81
Policymaker’s Guide to International AI Safety Coordination — Osama Manzar from the Digital Empowerment Foundation, representing grassroots perspectives from 40 million people reache…
S82
Cybersecurity, cybercrime, and online safety — Audience:Yeah, thank you, sir, for a nice opportunity to ask. I’m Riyad Hassan Badshah, Vice-Chair of the Bangladesh You…
S83
UNSC meeting: Conflict prevention: women and youth — Costa Rica:Madam President, Costa Rica congratulates Japan for convening this open debate and wishes to highlight three …
S84
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 4 — Mozambique: Thank you, Mr. Chair, for giving me the floor. Mr. Chair, Mozambique aligned itself with statement delive…
S85
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240/2/OEWG 2025 — Mozambique: Mr. Chair, thank you for giving us the floor. With regard to application of international law to the use …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Virginia Dignum
2 arguments | 62 words per minute | 1141 words | 1103 seconds
Argument 1
AI safety must consider institutional, economic, and political contexts, not just model flaws (Virginia Dignum)
EXPLANATION
Virginia argues that framing AI safety solely in technical terms overlooks the crucial role of deployment contexts, governance capacities, and incentive structures. She emphasizes that harms arise from how AI is embedded in institutional, economic, and political systems, not just from model or data flaws.
EVIDENCE
She notes that safety is often discussed in terms of model alignment, red-teaming, benchmark performance, etc., but these technical tools do not address the core question of what determines AI’s societal value or harm. She explains that AI systems do not operate in isolation; their impact is shaped by deployment context, governance capacity, incentive structures, and the lived reality of communities, and that failures stem from institutional, economic and political embedding rather than purely technical defects [14-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel discussion emphasizes reframing AI safety beyond technical flaws to include institutional, economic, and political contexts [S4] and highlights the need for evaluation beyond model-centric metrics [S28].
MAJOR DISCUSSION POINT
Redefining AI Safety Beyond Technical Metrics
AGREED WITH
Yannis Ioannidis, Wendy Hall, Lourino Chemane
DISAGREED WITH
Yannis Ioannidis, Tom Romanoff, Dame Wendy Hall
Argument 2
The panel must continue its dialogue and collaboratively develop a concrete AI safety measurement model for future use.
EXPLANATION
Virginia emphasizes that the current discussion is only a beginning and calls for the creation of a systematic model to assess AI safety, to be refined and applied in the coming year.
EVIDENCE
She notes that the conversation is just a start, expresses hope to create a model for the next year, and urges continued collaboration among panelists to develop this framework [375-378].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists committed to creating a collaborative report and measurement model to keep the dialogue alive [S4] and the summit’s follow-up plan is noted in the AI Impact Summit summary [S29].
MAJOR DISCUSSION POINT
Developing a shared AI safety measurement framework
Yannis Ioannidis
2 arguments | 140 words per minute | 537 words | 229 seconds
Argument 1
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs, outputs, and human involvement (Yannis Ioannidis)
EXPLANATION
Yannis separates the safety of the underlying AI technology from the safety of its use, arguing that the technology itself (like a car) is not a safety issue, but the way humans feed data into it and the consequences of its outputs require regulation and interdisciplinary oversight.
EVIDENCE
He states that there is no safety issue with the technology itself, comparing it to a car, and that robustness and bug-free software are important but not the core safety concern. He stresses that both the input side (human-chosen training data) and the output side (applications in courts, medicine, social media) need measurement, regulation, and involvement of humanities, law, ethics, and cognitive science [108-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ioannidis’ distinction between technology safety and use safety, and the call to regulate inputs and outputs with multidisciplinary input, are documented in the panel transcript [S4][S2].
MAJOR DISCUSSION POINT
Redefining AI Safety Beyond Technical Metrics
AGREED WITH
Virginia Dignum, Wendy Hall, Lourino Chemane
DISAGREED WITH
Virginia Dignum, Tom Romanoff, Dame Wendy Hall
Argument 2
Both data inputs and model outputs require oversight and interdisciplinary regulation to ensure safety (Yannis Ioannidis)
EXPLANATION
He reiterates that safety must begin with careful control of the data fed into AI systems and continue with regulation of the outcomes they produce, requiring participation from multiple disciplines to manage risks.
EVIDENCE
He highlights that humans determine the first parameters and data used for training, and that both input and output stages need regulation, measurement, and interdisciplinary representation to keep AI on the right side of safety [108-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for oversight of both data inputs and model outputs, involving multiple disciplines, is reiterated in the discussion summary [S4][S2].
MAJOR DISCUSSION POINT
Transparency, Accountability, and Trade‑offs in AI Systems
AGREED WITH
Lourino Chemane, Neha Kumar, Merve Hickok
DISAGREED WITH
Tom Romanoff, Dame Wendy Hall
Sara Hooker
2 arguments | 191 words per minute | 918 words | 287 seconds
Argument 1
The safety debate needs precision, acknowledgment of trade‑offs, and transparent reporting of what is omitted (Sara Hooker)
EXPLANATION
Sara argues that discussions about AI safety must become more precise, recognize inevitable trade‑offs, and openly disclose which safety aspects are left out. She calls for concrete reporting on language coverage and untested safety parameters.
EVIDENCE
She observes that prestige and power signals often mask true commitment to safety, notes that trade-offs are inevitable when building systems, and proposes a concrete ask: providers should report which languages their models cover and which safety parameters are not tested, making omissions explicit [135-185].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hooker calls for precise safety discussions, acknowledgment of trade-offs, and reporting of omitted parameters, reflected in the panel notes on transparency and trade-offs [S4][S2].
MAJOR DISCUSSION POINT
Redefining AI Safety Beyond Technical Metrics
DISAGREED WITH
Tom Romanoff
Argument 2
Providers should disclose language coverage, safety parameters, and explicit trade‑offs made in model development (Sara Hooker)
EXPLANATION
Sara stresses that AI developers need to be transparent about the languages their models support and the safety tests they have (or have not) performed, thereby making the trade‑offs in model design visible to users and regulators.
EVIDENCE
She specifically asks for reporting of language coverage and safety parameters that are not covered, describing this as a simple yet precise request that reveals what has been given up in model development [170-175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The request for providers to disclose language coverage and untested safety parameters is highlighted in the session summary [S4][S2].
MAJOR DISCUSSION POINT
Transparency, Accountability, and Trade‑offs in AI Systems
AGREED WITH
Participant, Wendy Hall, Virginia Dignum
Dame Wendy Hall
2 arguments | 147 words per minute | 1140 words | 462 seconds
Argument 1
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
EXPLANATION
Wendy argues that ethical AI cannot rely only on technical safety; it needs ongoing data collection, long‑term studies, and a new discipline of AI measurement (or metrology) to understand socio‑technical impacts.
EVIDENCE
She calls for collecting data and evidence through longitudinal studies, cites examples such as Australia’s social-media ban as an experiment to watch for unintended consequences, and describes the centre for AI measurement newly launched at the UK’s National Physical Laboratory and the concept of AI metrology as a way to study “social machines” [88-103]; she later mentions the launch of an AI measurement journal […].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hall proposes systematic measurement and longitudinal studies for AI, aligning with the concept of AI metrology and continuous monitoring discussed in the panel [S4] and supported by literature on post-deployment monitoring of agentic AI [S30].
MAJOR DISCUSSION POINT
Redefining AI Safety Beyond Technical Metrics
AGREED WITH
Rasmus Andersen, Wendy Hall, Sara Hooker
DISAGREED WITH
Tom Romanoff, Yannis Ioannidis
Argument 2
Lack of gender diversity undermines ethical AI; inclusive decision‑making bodies are necessary (Dame Wendy Hall)
EXPLANATION
Wendy points out that AI conferences and leadership panels are dominated by men, and that without gender (and broader) diversity, ethical AI cannot be achieved because biases remain unchecked.
EVIDENCE
She observes that while 50% of the summit’s participants were women, all the CEOs and panel speakers were men, and argues that without diverse perspectives, especially women’s, ethical AI cannot be properly addressed [78-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hall’s concern about gender diversity is echoed in studies on gender inclusivity in AI and internet standards, emphasizing the need for diverse decision-making bodies [S32][S33].
MAJOR DISCUSSION POINT
Inclusion, Diversity, and Representation in AI Development
Lourino Chemane
2 arguments | 160 words per minute | 573 words | 213 seconds
Argument 1
AI governance should be multidisciplinary, integrating law, ethics, education, labor, and affected communities; includes data policy and cybersecurity measures (Lourino Chemane)
EXPLANATION
Lourino stresses that effective AI policy must draw on law, social sciences, ethics, education, labor, and the voices of affected communities, and must be backed by concrete data‑policy, cybersecurity, and infrastructure regulations.
EVIDENCE
He lists the need for input from law, social sciences, education, labor, ethics, and communities, mentions drafting a national AI strategy, a data policy, reviewing the national cybersecurity strategy, and adopting regulations for data-center construction and cloud computing to ensure sovereignty and safety [30-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Chemane’s call for multidisciplinary governance, involving law, ethics, education, labor, and communities, matches the panel’s emphasis on interdisciplinary approaches [S4][S34].
MAJOR DISCUSSION POINT
Multidisciplinary Governance and Policy Frameworks
AGREED WITH
Yannis Ioannidis, Neha Kumar, Merve Hickok
Argument 2
National infrastructure sovereignty, including data center and cloud regulations, is crucial for safe AI deployment (Lourino Chemane)
EXPLANATION
He argues that controlling critical digital infrastructure—such as data centers and cloud services—is essential for national sovereignty and for ensuring AI safety from a policy perspective.
EVIDENCE
He notes Mozambique’s recent adoption of regulations for the construction and operation of data centers and for cloud computing, emphasizing that infrastructure is a key element for sovereignty and safety [45-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of national infrastructure sovereignty and data-center regulation for safe AI deployment is discussed in the panel and reinforced by analyses of AI as critical infrastructure and data sovereignty [S35][S36].
MAJOR DISCUSSION POINT
Socio‑Environmental Impacts of AI Infrastructure
Merve Hickok
2 arguments · 148 words per minute · 454 words · 182 seconds
Argument 1
Safety must be linked to human rights, democratic values, and a shift in powerful narratives; citizens’ voices are essential (Merve Hickok)
EXPLANATION
Merve contends that AI safety cannot be separated from human‑rights and democratic concerns, and that the dominant narrative is shaped by powerful actors; therefore, ordinary citizens must raise their voices to change that narrative.
EVIDENCE
She argues that history shows powerful narratives dominate safety discussions, calls for a shift in narrative, and highlights that AI harms people’s rights, freedoms, dignity, and democratic participation, urging citizens to speak up [271-283].
MAJOR DISCUSSION POINT
Multidisciplinary Governance and Policy Frameworks
AGREED WITH
Jeanna Matthews, Tom Romanoff
DISAGREED WITH
Tom Romanoff, Rasmus Andersen
Argument 2
AI’s impact on rights, freedoms, and democratic participation must be addressed to prevent erosion of dignity (Merve Hickok)
EXPLANATION
She emphasizes that AI can undermine fundamental rights and democratic processes, and that safeguarding these values is a core part of AI safety.
EVIDENCE
She points out that real-world AI consequences affect people’s rights, freedoms, ability to live with dignity, and participation in democracy, and that these issues are increasingly under pressure for her organization [279-283].
MAJOR DISCUSSION POINT
Socio‑Environmental Impacts of AI Infrastructure
Rasmus Andersen
2 arguments · 158 words per minute · 568 words · 214 seconds
Argument 1
Governments need long‑term, evidence‑based planning to mitigate AI risks and protect citizens, drawing lessons from other safety domains (Rasmus Andersen)
EXPLANATION
Rasmus argues that policymakers must adopt a forward‑looking, evidence‑driven approach to AI, learning from safety practices in sectors like nuclear energy and aviation to prepare for future AI impacts.
EVIDENCE
He describes his role advising world leaders on AI, focusing on long-term best-interest scenarios for 2030-2035, and stresses the need for evidence-based planning; later he references learning from nuclear and flight safety reductions as analogues for AI risk management [250-256].
MAJOR DISCUSSION POINT
Multidisciplinary Governance and Policy Frameworks
AGREED WITH
Wendy Hall, Sara Hooker
DISAGREED WITH
Tom Romanoff, Merve Hickok
Argument 2
Legal accountability is limited; lawsuits and potential criminal liability are emerging, highlighting the need for enforceable standards (Rasmus Andersen)
EXPLANATION
Rasmus notes that while some legal actions (e.g., lawsuits over AI‑related suicides, deep‑fake regulation) are beginning, the current legal framework is insufficient, underscoring the need for stronger, enforceable standards.
EVIDENCE
He mentions ongoing lawsuits concerning suicides among young people, references the recent deep-fake regulation after Elon Musk’s intervention, and argues that regulation cannot aim for zero risk but must manage trade-offs and enforce standards [305-313].
MAJOR DISCUSSION POINT
Transparency, Accountability, and Trade‑offs in AI Systems
Tom Romanoff
2 arguments · 155 words per minute · 628 words · 242 seconds
Argument 1
Political will hinges on a 51 % threshold; activists must educate the public and push for concrete regulations (Tom Romanoff)
EXPLANATION
Tom explains that achieving political momentum for AI regulation requires crossing a 51 % support threshold, and that activists should actively educate citizens and lobby for clear policy actions rather than remain moderate.
EVIDENCE
He illustrates the 51 % rule in politics and business, describes lobbyist incentives to stay below that threshold, and urges participants not to be moderates but to pick a side and educate others about AI safety and regulation [327-357].
MAJOR DISCUSSION POINT
Multidisciplinary Governance and Policy Frameworks
AGREED WITH
Jeanna Matthews, Merve Hickok
DISAGREED WITH
Merve Hickok, Rasmus Andersen
Argument 2
Enforcement of safety should focus on AI outputs and require clear legal frameworks, not just voluntary compliance (Tom Romanoff)
EXPLANATION
Tom argues that safety regulations need to target the outcomes produced by AI systems, with explicit legal obligations, rather than relying on voluntary measures applied to the technology itself.
EVIDENCE
He conducts a live poll asking the audience to lower their hands if safety should be enforced on AI outputs and if laws should apply to outputs rather than the AI, highlighting the need for legal frameworks that address outputs directly [330-334].
MAJOR DISCUSSION POINT
Transparency, Accountability, and Trade‑offs in AI Systems
DISAGREED WITH
Sara Hooker
Neha Kumar
1 argument · 163 words per minute · 643 words · 236 seconds
Argument 1
Inclusive design must ask “who decides” and “who benefits,” drawing on feminist and development studies to avoid superficial inclusion (Neha Kumar)
EXPLANATION
Neha stresses that true inclusivity requires interrogating decision‑making power and benefit distribution, using insights from feminist and development studies to move beyond tokenistic diversity statements.
EVIDENCE
She cites lessons from feminist and women’s studies, calls for asking who decides and who benefits, critiques superficial diversity talk, and highlights the need to examine who is actually included in design processes and who gains from AI deployments [285-303].
MAJOR DISCUSSION POINT
Inclusion, Diversity, and Representation in AI Development
AGREED WITH
Lourino Chemane, Yannis Ioannidis, Merve Hickok
Jibu Elias
2 arguments · 155 words per minute · 658 words · 253 seconds
Argument 1
Marginalized groups (e.g., tribal languages) are excluded from AI benefits; extractive practices exacerbate inequality (Jibu Elias)
EXPLANATION
Jibu highlights that AI systems often ignore tribal languages and communities, and that the way AI infrastructure is built can be exploitative, deepening socioeconomic divides.
EVIDENCE
He mentions tribal populations in Indian states whose languages are not represented in models like Gemini, notes the imposition of Hindi, and describes how data-center projects manipulate local leaders and extract groundwater in water-scarce regions, illustrating extractive practices [201-210].
MAJOR DISCUSSION POINT
Inclusion, Diversity, and Representation in AI Development
Argument 2
AI data centers can cause environmental harm (e.g., water extraction) and involve community manipulation, raising ethical concerns (Jibu Elias)
EXPLANATION
He points out that the construction and operation of AI data centers can lead to environmental degradation, such as groundwater depletion, and are often accompanied by community coercion, raising serious ethical questions.
EVIDENCE
He recounts a specific data center in Telangana that extracted groundwater in a water-scarce area, involved bribing politicians and community leaders, and caused local environmental damage [208-210].
MAJOR DISCUSSION POINT
Socio‑Environmental Impacts of AI Infrastructure
Participant
1 argument · 126 words per minute · 141 words · 67 seconds
Argument 1
Regulatory artifacts (model cards, dataset cards) must be dynamic and cover multiple languages and cultural contexts (Participant)
EXPLANATION
The participant asks how existing documentation tools like model cards can be extended to support multilingual and multicultural contexts, emphasizing the need for dynamic, inclusive standards.
EVIDENCE
The participant asks how dataset and model cards, rigorous evaluations, and user feedback can be extended to cover multiple languages, contexts, and cultures, noting current limitations in non-English performance [364-368]; a sketch of what such a dynamic card might look like follows this entry.
MAJOR DISCUSSION POINT
Inclusion, Diversity, and Representation in AI Development
AGREED WITH
Sara Hooker, Wendy Hall, Virginia Dignum
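A small sketch of what such a “dynamic” model card could look like, assuming a plain JSON document that is appended to as new per-language evaluations arrive rather than frozen at release time; the field names are illustrative, not drawn from any existing model-card standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical dynamic card: evaluations accumulate over time, per language.
card = {"model": "example-model-v1", "evaluations": [], "user_feedback": []}

def add_evaluation(card: dict, language: str, benchmark: str, score: float) -> None:
    """Append a timestamped per-language evaluation result to the card."""
    card["evaluations"].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "language": language,
        "benchmark": benchmark,
        "score": score,
    })

# Later evaluations in additional languages extend, not replace, the record.
add_evaluation(card, language="hi", benchmark="toxicity-eval", score=0.91)
add_evaluation(card, language="sw", benchmark="toxicity-eval", score=0.74)
print(json.dumps(card, indent=2))
```

Because evaluations accumulate with timestamps and language tags, coverage gaps (languages or contexts with no entries) stay visible instead of being hidden behind a static release-time summary.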
Jeanna Matthews
2 arguments · 145 words per minute · 277 words · 113 seconds
Argument 1
Historical patterns of extractive technologies show that AI will not automatically benefit everyone; safety must be enforced through mandatory safeguards rather than relying on good intentions alone (Jeanna Matthews)
EXPLANATION
Jeanna questions whether history indicates that AI will work out for all and points out the lack of mechanisms such as recovery, retribution, or legal accountability for harmful AI actions, emphasizing the need for enforceable obligations.
EVIDENCE
She raises a provocative question about the adequacy of goodwill, noting the absence of recovery, retribution, remuneration, and legal consequences for AI misuse, and asks if we are truly serious about AI safety [266-270].
MAJOR DISCUSSION POINT
Accountability and enforceable safeguards in AI safety
Argument 2
Achieving universal AI safety and wellness requires active insistence and advocacy from all stakeholders; it will not materialize automatically (Jeanna Matthews)
EXPLANATION
In her closing remarks, Jeanna stresses that without deliberate pressure and insistence from participants, the promised benefits of AI safety and wellbeing will not be realized.
EVIDENCE
She states that happiness and wellness for all will not happen unless everyone insists on safety measures, urging collective action rather than passive expectation [359-363].
MAJOR DISCUSSION POINT
The need for proactive advocacy to ensure AI safety
Agreements
Agreement Points
AI safety must consider institutional, economic, and political contexts, not just technical model flaws
Speakers: Virginia Dignum, Yannis Ioannidis, Wendy Hall, Lourino Chemane
AI safety must consider institutional, economic, and political contexts, not just model flaws (Virginia Dignum)
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs, outputs, and human involvement (Yannis Ioannidis)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
AI governance should be multidisciplinary, integrating law, ethics, education, labor, and affected communities; includes data policy and cybersecurity measures (Lourino Chemane)
All four speakers argue that framing AI safety solely in technical terms overlooks the crucial role of deployment contexts, governance capacities, incentive structures, and human involvement; safety must be addressed through multidisciplinary governance, continuous monitoring, and consideration of institutional, economic and political factors [21-24][108-124][88-103][32-34].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for AI governance that integrates institutional, economic and political dimensions, as highlighted in the ‘Technical Safety to Societal Impact’ discussion urging broader contextual analysis [S45] and in policy guidance urging targeted regulations that balance innovation with societal protection [S46]. International standards bodies also stress the need for moral, ethical and regulatory dialogue beyond pure technical robustness [S47].
Effective AI governance requires multidisciplinary, inclusive participation of law, ethics, social sciences, affected communities, and citizens
Speakers: Lourino Chemane, Yannis Ioannidis, Neha Kumar, Merve Hickok
AI governance should be multidisciplinary, integrating law, ethics, education, labor, and affected communities; includes data policy and cybersecurity measures (Lourino Chemane)
Both data inputs and model outputs require oversight and interdisciplinary regulation to ensure safety (Yannis Ioannidis)
Inclusive design must ask “who decides” and “who benefits,” drawing on feminist and development studies to avoid superficial inclusion (Neha Kumar)
Safety must be linked to human rights, democratic values, and a shift in powerful narratives; citizens’ voices are essential (Merve Hickok)
The speakers converge on the view that AI safety and governance cannot be achieved without input from diverse disciplines (law, ethics, social sciences) and the participation of affected communities, emphasizing the need to ask who decides and to empower citizen voices [34-35][119-120][285-293][271-283].
POLICY CONTEXT (KNOWLEDGE BASE)
Multidisciplinary participation is endorsed by the AI for Humanity framework which links scientific panels with ethical and legal policy dialogue [S47], and by UN-led interdisciplinary initiatives involving UNESCO, OECD, UNICEF and WHO that stress cross-sector collaboration [S59]. Recent high-level remarks also call for inclusion of civil society, innovators and researchers alongside governments [S60][S61].
Transparency and systematic reporting of trade‑offs, language coverage and omitted safety parameters are essential for trustworthy AI
Speakers: Sara Hooker, Participant, Wendy Hall, Virginia Dignum
Providers should disclose language coverage, safety parameters, and explicit trade‑offs made in model development (Sara Hooker)
Regulatory artifacts (model cards, dataset cards) must be dynamic and cover multiple languages and cultural contexts (Participant)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
The panel must continue its dialogue and collaboratively develop a concrete AI safety measurement model for future use (Virginia Dignum)
All four participants call for concrete, transparent documentation of what AI systems support (e.g., languages) and what safety aspects are omitted, advocating for dynamic model/dataset cards and a shared AI safety measurement framework [170-175][364-368][88-103][375-378].
POLICY CONTEXT (KNOWLEDGE BASE)
Transparency requirements are echoed in multiple policy documents, including the call for model providers to disclose language coverage and testing limits [S55], UN Security Council emphasis on algorithmic transparency and rigorous evaluation [S53], and WHO recommendations for ‘glass-box’ AI with traceable reasoning [S56]. Open-access style monitoring guidelines further stress systematic reporting of compliance [S52].
Long‑term, evidence‑based planning and learning from other safety domains (e.g., nuclear, aviation) are needed to mitigate AI risks
Speakers: Rasmus Andersen, Wendy Hall, Sara Hooker
Governments need long‑term, evidence‑based planning to mitigate AI risks and protect citizens, drawing lessons from other safety domains (Rasmus Andersen)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
We should look at the overarching arc from Bletchley to now; the conversation has become more precise but still needs historical learning (Sara Hooker)
The speakers agree that AI risk management should adopt a forward-looking, evidence-driven approach, borrowing best-practice lessons from established safety fields and employing longitudinal monitoring to inform policy [250-256][88-103][151-156].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based policy analysis is advocated to bridge gaps between scientific reports and actionable guidance [S46], while organizational learning boards are cited as mechanisms to embed safety lessons from other sectors [S49].
Achieving AI safety and societal wellbeing requires active advocacy, citizen insistence, and moving beyond passive goodwill
Speakers: Jeanna Matthews, Merve Hickok, Tom Romanoff
Historical patterns of extractive technologies show that AI will not automatically benefit everyone; safety must be enforced through mandatory safeguards rather than relying on good intentions alone (Jeanna Matthews)
Safety must be linked to human rights, democratic values, and a shift in powerful narratives; citizens’ voices are essential (Merve Hickok)
Political will hinges on a 51 % threshold; activists must educate the public and push for concrete regulations (Tom Romanoff)
All three speakers stress that AI safety will not materialise through goodwill alone; it requires deliberate, collective pressure from citizens, activists and policymakers to create enforceable safeguards and shift narratives [359-363][271-283][327-357].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for citizen mobilisation and advocacy is foregrounded in the ‘Technical Safety to Societal Impact’ report, which stresses a 51 % public support threshold and active insistence on safety measures [S45]. Calls for broader stakeholder engagement echo in Bouverot’s remarks on comprehensive inclusion [S60] and the ‘Digital Future for All’ multi-stakeholder approach [S61].
Similar Viewpoints
Both speakers separate the technical robustness of AI models from the broader safety concerns that arise in real‑world deployment, emphasizing that governance of inputs, outputs and institutional contexts is essential [21-24][108-124].
Speakers: Virginia Dignum, Yannis Ioannidis
AI safety must consider institutional, economic, and political contexts, not just model flaws (Virginia Dignum)
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs, outputs, and human involvement (Yannis Ioannidis)
Both advocate for systematic measurement and transparent reporting of AI systems, arguing that precise documentation (e.g., language support, omitted safety tests) and longitudinal monitoring are needed to move beyond vague safety promises [88-103][170-175].
Speakers: Wendy Hall, Sara Hooker
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
Providers should disclose language coverage, safety parameters, and explicit trade‑offs made in model development (Sara Hooker)
Both stress that effective AI safety policy depends on building sufficient political momentum and evidence‑based advocacy, requiring activists to mobilise public support to reach decisive thresholds [250-256][327-357].
Speakers: Rasmus Andersen, Tom Romanoff
Governments need long‑term, evidence‑based planning to mitigate AI risks and protect citizens, drawing lessons from other safety domains (Rasmus Andersen)
Political will hinges on a 51 % threshold; activists must educate the public and push for concrete regulations (Tom Romanoff)
Unexpected Consensus
Gender and broader diversity are essential for ethical AI and inclusive decision‑making
Speakers: Wendy Hall, Neha Kumar
Lack of gender diversity undermines ethical AI; inclusive decision‑making bodies are necessary (Dame Wendy Hall)
Inclusive design must ask “who decides” and “who benefits,” drawing on feminist and development studies to avoid superficial inclusion (Neha Kumar)
While gender diversity is often discussed in isolation, both speakers converge on the view that without women’s (and other marginalized groups’) participation in AI governance, ethical AI cannot be achieved; this links feminist theory directly to AI safety governance, an alignment not explicitly anticipated at the start of the panel [78-88][285-293].
Overall Assessment

The panel displayed a strong consensus that AI safety cannot be reduced to technical robustness; it requires multidisciplinary governance, transparent measurement, inclusive participation, long‑term evidence‑based planning, and active citizen advocacy. Participants repeatedly called for concrete measurement tools, inclusive design processes, and political mobilisation to translate safety principles into enforceable standards.

Consensus was high across thematic areas, indicating a shared understanding that future AI safety work must integrate technical, social, and political dimensions. This consensus suggests that forthcoming policy recommendations and research agendas are likely to focus on developing shared measurement frameworks, inclusive governance structures, and mechanisms for sustained public engagement.

Differences
Different Viewpoints
What constitutes AI safety and where responsibility lies (technical robustness vs institutional, governance, and use contexts)
Speakers: Virginia Dignum, Yannis Ioannidis, Tom Romanoff, Dame Wendy Hall
AI safety must consider institutional, economic, and political contexts, not just model flaws (Virginia Dignum)
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs, outputs, and human involvement (Yannis Ioannidis)
Enforcement of safety should focus on AI outputs and require clear legal frameworks, not just voluntary compliance (Tom Romanoff)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
Virginia argues that safety cannot be reduced to technical metrics and must account for deployment contexts, governance and incentives [14-24]. Yannis counters that the technology itself is not a safety issue, likening it to a car, and stresses that safety concerns arise from how humans feed data and use the outputs, calling for regulation of inputs and outputs [108-124]. Tom pushes the legal focus onto the outputs, insisting laws should apply to AI results rather than the technology [330-334]. Wendy adds that ethical AI needs ongoing monitoring and a new discipline of AI measurement to capture socio-technical impacts [88-103]. These positions diverge on whether safety is primarily a technical, regulatory, or measurement problem.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over responsibility mirror discussions that distinguish technical safety from institutional embedding, as noted in the ‘Technical Safety to Societal Impact’ analysis [S45] and the banking sector’s risk-based AI policy that embeds governance within existing frameworks [S57].
Where regulatory effort should be directed: AI technology itself, its inputs, its outputs, or post‑deployment monitoring
Speakers: Tom Romanoff, Yannis Ioannidis, Dame Wendy Hall
Enforcement of safety should focus on AI outputs and require clear legal frameworks, not just voluntary compliance (Tom Romanoff)
Both data inputs and model outputs require oversight and interdisciplinary regulation to ensure safety (Yannis Ioannidis)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
Tom advocates legal rules that target the outcomes produced by AI systems, arguing that safety must be enforced on outputs [330-334]. Yannis emphasizes that safety must begin with careful control of training data (inputs) and continue with regulation of the consequences (outputs) [108-124]. Wendy stresses that beyond any legal rule, ongoing data collection and longitudinal studies are needed to understand and mitigate harms [88-103]. The disagreement centers on whether the primary regulatory lever should be legal constraints on outputs, oversight of inputs/outputs, or continuous post-deployment measurement.
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory focus on use-cases rather than the underlying technology is advocated to avoid stifling innovation [S58], while coordinated cross-jurisdictional enforcement models such as the EU Digital Markets Act illustrate a need for comprehensive oversight of inputs, outputs and post-deployment monitoring [S50]. Gap-analysis approaches for emerging markets also highlight targeting regulatory effort across the AI lifecycle [S51].
How to achieve effective AI safety: political mobilisation thresholds, citizen activism, or long‑term evidence‑based planning
Speakers: Tom Romanoff, Merve Hickok, Rasmus Andersen
Political will hinges on a 51 % threshold; activists must educate the public and push for concrete regulations (Tom Romanoff)
Safety must be linked to human rights, democratic values, and a shift in powerful narratives; citizens’ voices are essential (Merve Hickok)
Governments need long‑term, evidence‑based planning to mitigate AI risks and protect citizens, drawing lessons from other safety domains (Rasmus Andersen)
Tom describes a pragmatic rule that political change occurs once 51 % support is reached, urging activists to educate and lobby for regulation [327-357]. Merve argues that safety is fundamentally about protecting human rights and that changing the dominant narrative requires ordinary citizens to raise their voices [271-283]. Rasmus calls for forward-looking, evidence-based policy making, learning from sectors like nuclear and aviation to prepare for AI impacts in the 2030-35 horizon [250-256]. The three speakers share the goal of safer AI but disagree on the primary mechanism: political thresholds, grassroots narrative change, or strategic governmental planning.
POLICY CONTEXT (KNOWLEDGE BASE)
Political mobilisation thresholds (51 % support) and citizen activism are highlighted in the ‘Technical Safety to Societal Impact’ report [S45], while evidence-based planning is recommended to translate scientific insights into policy options [S46].
Possibility and desirability of universal AI safety rules
Speakers: Sara Hooker, Tom Romanoff
The safety debate needs precision, acknowledgment of trade‑offs, and transparent reporting of what is omitted (Sara Hooker)
Enforcement of safety should focus on AI outputs and require clear legal frameworks, not just voluntary compliance (Tom Romanoff)
Sara questions whether universal AI safety rules are realistic, emphasizing the need for precise, trade-off-aware discussions and transparent reporting of omitted safety aspects [143-145][165-185]. Tom, while not directly addressing universality, pushes for concrete legal enforcement on outputs, implying a more uniform regulatory approach [330-334]. Their tension lies in Sara’s skepticism about one-size-fits-all rules versus Tom’s push for clear, enforceable standards.
POLICY CONTEXT (KNOWLEDGE BASE)
International standard-setting bodies such as ITU, ISO/IEC and IEEE are discussed as platforms for globally coordinated AI safety standards [S47], and coordinated enforcement across jurisdictions (e.g., EU DMA) demonstrates a move toward harmonised rules [S50].
Unexpected Differences
Technology itself as a safety issue vs. institutional embedding
Speakers: Virginia Dignum, Yannis Ioannidis
AI safety must consider institutional, economic, and political contexts, not just model flaws (Virginia Dignum)
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs, outputs, and human involvement (Yannis Ioannidis)
Yannis treats the AI technology as analogous to a car—intrinsically safe unless misused—whereas Virginia argues that harms arise from the institutional and governance contexts in which AI is embedded. This contrast was not anticipated given the common framing of safety as a technical problem.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between regulating the technology versus its institutional context is reflected in recommendations to focus on specific high-risk use-cases rather than the technology per se [S58], and in proposals to embed AI governance within existing regulatory layers rather than creating separate regimes [S57].
Legal enforcement on outputs vs. need for longitudinal monitoring
Speakers: Tom Romanoff, Dame Wendy Hall
Enforcement of safety should focus on AI outputs and require clear legal frameworks, not just voluntary compliance (Tom Romanoff)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
Tom pushes for immediate legal rules targeting AI outputs, while Wendy argues that understanding and mitigating harms requires long‑term data collection and a new measurement discipline. The tension between swift legal action and slower scientific monitoring was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Longitudinal monitoring is emphasized in Open Access policy compliance frameworks [S52] and UN Security Council calls for rigorous testing and ongoing evaluation of AI systems [S53], suggesting that enforcement on outputs alone may be insufficient.
Overall Assessment

The panel shows broad consensus that AI safety must go beyond technical robustness and involve multidisciplinary, inclusive, and rights‑based approaches. However, speakers diverge sharply on where to focus regulatory effort (technology vs. inputs vs. outputs vs. post‑deployment monitoring), the mechanisms to achieve safety (legal thresholds, citizen activism, narrative change, long‑term planning), and the feasibility of universal rules. These disagreements reflect differing professional lenses—technical, policy, human‑rights, and activist—leading to varied strategic priorities.

The level of disagreement is moderate to high: while there is agreement on the need for broader governance, the lack of alignment on concrete levers and implementation pathways could hinder coordinated action, requiring further dialogue to reconcile technical, legal, and societal strategies.

Partial Agreements
All speakers agree that AI safety cannot be addressed solely by technical fixes and requires multidisciplinary governance, inclusive design, and societal oversight. However, they diverge on the primary levers: Virginia stresses institutional contexts, Yannis focuses on input/output regulation, Wendy on measurement, Neha on feminist‑informed inclusion, Merve on narrative and rights, Rasmus on long‑term policy planning, and Tom on political mobilisation thresholds. The shared goal is broader safety, but the pathways differ.
Speakers: Virginia Dignum, Yannis Ioannidis, Dame Wendy Hall, Neha Kumar, Merve Hickok, Rasmus Andersen, Tom Romanoff
AI safety must consider institutional, economic, and political contexts, not just model flaws (Virginia Dignum)
Both data inputs and model outputs require oversight and interdisciplinary regulation to ensure safety (Yannis Ioannidis)
Ethical AI requires continuous monitoring, longitudinal studies, and the development of AI measurement/metrology (Dame Wendy Hall)
Inclusive design must ask “who decides” and “who benefits,” drawing on feminist and development studies (Neha Kumar)
Safety must be linked to human rights, democratic values, and a shift in powerful narratives; citizens’ voices are essential (Merve Hickok)
Governments need long‑term, evidence‑based planning to mitigate AI risks and protect citizens (Rasmus Andersen)
Political will hinges on a 51 % threshold; activists must educate the public and push for concrete regulations (Tom Romanoff)
Takeaways
Key takeaways
AI safety must be understood beyond technical robustness; institutional, economic, political, and social contexts shape outcomes.
A clear distinction is needed between safety of the AI technology itself and safety of its use and deployment.
Multidisciplinary governance—including law, ethics, education, labor, and affected communities—is essential for effective AI policy.
Inclusive and diverse decision‑making bodies are critical; lack of gender and cultural representation undermines ethical AI.
Transparency about model limitations, language coverage, and trade‑offs must be mandated (e.g., through model/dataset cards).
Long‑term, evidence‑based planning and monitoring (longitudinal studies, AI metrology) are required to assess real‑world impact.
AI infrastructure (data centers, cloud services) has socio‑environmental consequences that need governance and sovereignty considerations.
Political will often hinges on a 51 % threshold; civil‑society activism and public education are needed to push regulation forward.
Existing safety domains (nuclear, aviation, cybersecurity) offer useful analogues for risk reduction but must be adapted to AI.
Resolutions and action items
Mozambique will finalize its national AI strategy, data policy, and related cybersecurity, data‑center, and cloud‑computing regulations.
The ACM will launch a new journal focused on AI measurement/metrology to consolidate research on AI safety metrics.
Panelists agreed to draft a collective report/model within the next year to synthesize discussion outcomes and guide future work.
Governments are urged to require dynamic, multilingual model and dataset cards as part of regulatory compliance.
Activists and scholars are encouraged to educate the public and lobby for concrete AI safety legislation, moving beyond moderate positions.
Unresolved issues
How to operationalize inclusive governance structures that meaningfully involve women, children, tribal and other marginalized groups.
Specific mechanisms for enforcing safety on AI outputs versus the underlying technology remain undefined.
Concrete standards for longitudinal monitoring and AI metrology, and who will fund and maintain them, were not settled.
Methods for addressing the extractive environmental impacts of AI infrastructure (e.g., water use in data centers) need further development.
Legal accountability pathways (criminal liability, lawsuits) for AI‑induced harms are still unclear.
Balancing trade‑offs between model performance, safety parameters, and societal values lacks an agreed framework.
Suggested compromises
Accept that AI safety cannot achieve zero risk; instead, aim for risk reduction analogous to aviation or nuclear safety.
Recognize that technical robustness and social context are both necessary; encourage trade‑off disclosures rather than seeking a single universal safety rule.
Adopt a layered approach: regulate inputs (data governance) and outputs (use‑case specific oversight) while allowing technical innovation to continue.
Use existing safety domains as reference points while developing AI‑specific metrics, acknowledging that perfect alignment is unrealistic.
Thought Provoking Comments
Safety … we look at it as the protection of people, not only systems. AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment. Effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities.
Frames AI safety as a human‑centred, multidisciplinary governance problem rather than a purely technical issue, emphasizing the need for inclusive policy processes and institutional accountability.
Shifted the discussion from technical robustness to the broader socio‑political context, prompting later speakers (e.g., Wendy Hall, Yannis Ioannidis) to elaborate on governance, inclusion, and the role of institutions.
Speaker: Lourino Chemane
If it’s not diverse it’s not ethical. … the lack of women on the panels shows that without diversity we cannot sort out biases. We need to monitor what’s going on, collect data, do longitudinal studies, and develop a science of AI measurement or AI metrology.
Boldly challenges the summit’s token inclusivity, linking diversity directly to ethical AI, and introduces the novel concept of “AI metrology” as a systematic way to measure socio‑technical impacts.
Created a turning point by highlighting gender bias and proposing concrete institutional mechanisms (AI measurement) that later participants referenced when discussing accountability and standards.
Speaker: Dame Wendy Hall
I want to separate the issue of safety of AI and talk about safety of AI use. The technology itself has no social issue; the problem lies in how humans input data and how we deploy the models. All sciences – humanities, philosophers, ethicists, legal people – must come together.
Clarifies the distinction between technical safety and societal safety, emphasizing that the same technology can be safe or unsafe depending on human choices and governance structures.
Re‑oriented the conversation toward the input side of AI systems, reinforcing the multidisciplinary call and influencing later remarks about data policies and human oversight.
Speaker: Yannis Ioannidis
One of the biggest signals of whether you actually care about safety is what the forms of prestige and power look like… we need to report what languages model providers cover, what safety parameters they have tested, and explicitly state the trade‑offs they made.
Links the politics of prestige to concrete transparency measures, arguing that safety cannot be discussed without exposing the hidden trade‑offs and coverage gaps of AI models.
Introduced the practical demand for model cards and language coverage reporting, which was later echoed by participants asking about multilingual safety and regulatory artifacts.
Speaker: Sara Hooker
AI is becoming a larger sociopolitical construct… its exploitative and extractive nature is evident in data‑center projects that harm local communities and in the exclusion of tribal languages from models.
Provides a vivid, ground‑level illustration of AI’s extractive impacts, moving the debate from abstract policy to concrete environmental and cultural harms.
Prompted other panelists (e.g., Merve Hickok, Neha Kumar) to discuss historical patterns of exploitation and the need for inclusive, rights‑based governance.
Speaker: Jibu Elias
History does not show that AI will be great for everyone. The narrative of safety is shaped by the powerful; we must change that narrative to protect rights, freedoms, and democratic participation.
Places current AI safety debates within a historical lens, warning that without a shift in power dynamics, safety measures will repeat past injustices.
Reinforced the urgency expressed by earlier speakers, leading to a consensus that proactive, inclusive advocacy is required rather than passive reliance on technology.
Speaker: Merve Hickok
The 51 % rule: you need a majority of political will or board control to pass regulation. Moderates will wait for a crisis; we must become activists and push the narrative forward.
Translates abstract policy discussions into a concrete political dynamic, urging participants to move from moderation to activism to achieve regulatory change.
Served as a call to action that culminated the panel’s discussion, influencing the final remarks about insisting on safety and prompting audience engagement.
Speaker: Tom Romanoff
We need to ask who is making decisions, who benefits, and who is part of the design process. Inclusion must move from buzzwords to concrete practices, especially for vulnerable groups like the elderly and the poorest.
Brings design and feminist scholarship into the AI safety conversation, emphasizing the need to operationalize inclusivity rather than merely naming it.
Deepened the analysis of inclusion, linking it to concrete design practices and reinforcing earlier points about diversity and power structures.
Speaker: Neha Kumar
Overall Assessment

The discussion was steered away from a narrow focus on technical robustness toward a multidimensional view of AI safety that foregrounds human impact, power dynamics, and governance. Key comments—particularly those highlighting the necessity of diversity (Wendy Hall), the distinction between technical and societal safety (Ioannidis), the transparency of model trade‑offs (Sara Hooker), and the historical and political forces shaping safety narratives (Merve Hickok, Tom Romanoff)—acted as turning points that broadened the agenda, introduced concrete accountability mechanisms, and galvanized the panel toward a call for active, inclusive advocacy. Collectively, these insights reshaped the tone from descriptive to prescriptive, setting the stage for future policy work and interdisciplinary collaboration.

Follow-up Questions
How can we start changing the discourse from a pure technical approach to a broader inclusive societal‑institutional approach to AI safety?
Shifts focus from model robustness to governance, ethics, and real‑world impact, addressing the core concern of the panel.
Speaker: Virginia Dignum
What trade‑offs do AI model providers make, especially regarding language coverage and safety parameters, and can these be reported transparently?
Calls for clear reporting of which languages are supported and what safety tests have been performed, enabling accountability and informed deployment.
Speaker: Sara Hooker
How can regulatory artifacts such as dataset cards, model cards, system cards, rigorous evaluations, and user‑feedback mechanisms be extended to cover multiple languages, contexts, and cultures?
Ensures that safety assessments are not limited to English‑centric models and that diverse linguistic and cultural settings are protected.
Speaker: Unnamed participant (audience)
Where are we headed regarding the extractive and exploitative nature of AI development; will these practices continue?
Raises concern about AI becoming a socio‑political tool that extracts value from marginalized communities, prompting investigation into systemic harms.
Speaker: Jibu Elias
Does history show that AI will benefit everyone automatically, or do we need enforceable ‘musts’ rather than good intentions? Are we serious about AI safety?
Challenges the assumption that AI will self‑regulate and calls for binding regulations and accountability mechanisms.
Speaker: Jeanna Matthews

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.